Conversation
Relacy helped identify a race in the existing MPSC algo. I am having a hard time explaining exactly what's going on, but in the newly added unit test (the non-Relacy one), I am able to observe three different odd behaviors:

- the consumer consuming the same element in an infinite loop, apparently due to the internal next pointers forming some sort of cycle
- the consumer returning `&__nil_`!
- the consumer never being able to consume a produced value (the node is lost)

With the non-Relacy unit test, in the existing algo, if I insert a random sleep of 0-10 microseconds in push_back after `__back_` is exchanged, I can observe one of the above behaviors nearly every single time. The most common was the first behavior.

The existing algo claims it came from Dmitry Vyukov's implementation, though one key difference is that the existing one uses an atomic pointer to a Node for the "nil" object, whereas Dmitry's stores an actual Node object embedded in the queue. I re-implemented the version in stdexec exactly as it appears on Dmitry's website (which I had to dig up on archive.org), and it passes the newly added Relacy tests (which explore many thread interleavings) and the non-Relacy unit tests.

I originally tracked down a bug in timed_thread_scheduler.cpp, where sometimes `STDEXEC_ASSERT(op->command_ == command_type::command_type::stop);` failed.
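For reference, here is a sketch of the algorithm as it appears in the archived post, with the stub ("nil") node embedded in the queue the way Dmitry's version does it. Names follow a front/back convention rather than Dmitry's head/tail, and the specific `std::memory_order` arguments are my own assumption: the original is pre-`std::atomic` C and relies on a plain XCHG's full-barrier semantics.

```cpp
#include <atomic>

struct node {
  std::atomic<node*> next{nullptr};
};

class intrusive_mpsc_queue {
  node stub_;                        // the "nil" node lives inside the queue itself
  std::atomic<node*> back_{&stub_};  // producers exchange this
  node* front_{&stub_};              // touched only by the single consumer

 public:
  // Multiple producers may call this concurrently.
  void push_back(node* n) {
    n->next.store(nullptr, std::memory_order_relaxed);
    node* prev = back_.exchange(n, std::memory_order_acq_rel);
    // Between the exchange above and the store below, the queue is
    // transiently "disconnected": n is already the back, but prev->next
    // is still null. The consumer must tolerate this window.
    prev->next.store(n, std::memory_order_release);
  }

  // Single consumer only. Returns nullptr when the queue is empty or a
  // producer is mid-push (the caller retries later).
  node* pop_front() {
    node* front = front_;
    node* next = front->next.load(std::memory_order_acquire);
    if (front == &stub_) {
      if (next == nullptr)
        return nullptr;              // empty
      front_ = next;                 // skip over the stub
      front = next;
      next = next->next.load(std::memory_order_acquire);
    }
    if (next != nullptr) {
      front_ = next;
      return front;
    }
    if (front != back_.load(std::memory_order_acquire))
      return nullptr;                // a producer is inside the window above
    push_back(&stub_);               // recycle the stub so front is never the last node
    next = front->next.load(std::memory_order_acquire);
    if (next != nullptr) {
      front_ = next;
      return front;
    }
    return nullptr;                  // another producer is mid-push
  }
};
```

The stub circulating through the queue as a real node (pop_front re-inserts it via push_back) is exactly the part the old implementation replaced with an atomic pointer, which appears to be where things went wrong.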
---

cc @maikel - can you check this over, since I think you implemented the original version of the MPSC queue?
---

Dmitry's website appears to have stopped working a few months ago, though the original link is on archive.org: https://web.archive.org/web/20221127081044/https://www.1024cores.net/home/lock-free-algorithms/queues/intrusive-mpsc-node-based-queue (Dmitry is also the author of Relacy).
---

Also of note, I originally noticed the race/bug in timed_thread_scheduler while running the exec unit tests through a version of TSAN I am trying to upstream, which fuzzes the scheduler with random sleeps. I think this is another success of that approach: llvm/llvm-project#178836
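That fuzzing is also easy to do by hand, which is how the repro above works. The shape of the edit is roughly the following (shown as a drop-in replacement for the sketch's push_back; the actual sleep went into the old implementation's push_back after its `__back_` exchange):

```cpp
#include <chrono>
#include <random>
#include <thread>

// Same as the sketch's push_back, but with a random 0-10us sleep injected
// right after the back pointer is exchanged, widening the "disconnected"
// window so the race is hit nearly every run.
void push_back(node* n) {
  n->next.store(nullptr, std::memory_order_relaxed);
  node* prev = back_.exchange(n, std::memory_order_acq_rel);
  static thread_local std::mt19937 rng{std::random_device{}()};
  std::this_thread::sleep_for(std::chrono::microseconds(
      std::uniform_int_distribution<int>{0, 10}(rng)));
  prev->next.store(n, std::memory_order_release);
}
```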
---

@maikel - as an experiment to help narrow down the race leading to the queue state becoming corrupt, I added a mutex in two places: https://gist.github.com/ccotter/37b0d3cb3ad88f8c4fea4b2adbb5dce7 ... Relacy says this queue is correct...
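I haven't inlined the gist here, but the shape of the experiment is roughly the following (the mutex name, the helper, and the exact two lock placements are my guess at what the gist does; see the gist for the real thing):

```cpp
#include <mutex>

std::mutex m;  // hypothetical: one lock serializing both suspected regions

void push_back(node* n) {
  n->next.store(nullptr, std::memory_order_relaxed);
  std::lock_guard<std::mutex> lock{m};  // place 1: cover the exchange/store pair
  node* prev = back_.exchange(n, std::memory_order_acq_rel);
  prev->next.store(n, std::memory_order_release);
}

node* pop_front() {
  std::lock_guard<std::mutex> lock{m};  // place 2: cover the whole pop
  return pop_front_impl();              // hypothetical: the original pop body
}
```

Since Relacy passes with the locks in place but flags the unlocked version, the race has to live inside one of the covered regions.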
---

Yes, after revisiting the old code I agree that the management of nil is the culprit here. We should simplify the implementation towards Dmitry's original one. The only thing I would check is whether your usage of memory orderings and atomics can be improved.
---

/ok to test 0a9df03