Shared memory sync/update question

# T.J. Crowder (6 years ago)

Can anyone on the list help me with the shared memory question described here: Does postMessage or yielding to the event loop or similar sync shared memory?

It seems Lars T. Hansen's Mandelbrot example here also expects memory to have been synchronized/updated as of the receipt of a postMessage message (when a worker is done, it triggers a workerDone message that makes the main thread redisplay), but there's an Atomics.store involved there.

Just trying to get a clear idea of when (and whether) one can reliably, in-specification, trust that a thread will see updates from other threads without using Atomics.load or similar for every single read. Hansen's Mandelbrot example uses a lock on a single element via compareExchange. My experiment, which never sees a stale read, doesn't use Atomics at all (but see the first link above for details).
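For context, the kind of single-element compareExchange lock I mean looks roughly like this (a generic sketch for illustration, not Hansen's actual code):

```js
// Generic sketch of a one-element lock over a SharedArrayBuffer
// (illustrative only; not the Mandelbrot example's actual code).
// Lock word: 0 = unlocked, 1 = locked.
const sab = new SharedArrayBuffer(4);
const lockWord = new Int32Array(sab);

function lock(ia, index) {
  // Atomically flip 0 -> 1; if another thread holds the lock, sleep until notified.
  // (Atomics.wait is only allowed off the main thread in browsers.)
  while (Atomics.compareExchange(ia, index, 0, 1) !== 0) {
    Atomics.wait(ia, index, 1);
  }
}

function unlock(ia, index) {
  Atomics.store(ia, index, 0);
  Atomics.notify(ia, index, 1); // wake one waiter, if any
}

// Usage (in a worker): lock(lockWord, 0); /* critical section */ unlock(lockWord, 0);
```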

Thanks in advance,

-- T.J. Crowder

# Gus Caplan (6 years ago)

Shared memory doesn't need to "sync"; it's the same memory. Atomics exists to provide primitives that help with timing (mostly race conditions). For example, if you want to update an index in a SharedArrayBuffer (SAB) so that another thread may read it, the safe way is to use Atomics.wait in the thread that will read from the SAB, and Atomics.store plus Atomics.notify in the thread that writes to the SAB.
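In code, that coordination might look roughly like this (a minimal sketch; the layout, with a flag at index 0 and the payload at index 1, is just for illustration):

```js
// Set up once and share with the other thread (e.g. worker.postMessage(sab)).
const sab = new SharedArrayBuffer(8);
const ia = new Int32Array(sab); // ia[0] = ready flag, ia[1] = payload

// Writing thread:
ia[1] = 42;               // plain write of the data
Atomics.store(ia, 0, 1);  // atomically publish the "ready" flag
Atomics.notify(ia, 0, 1); // wake up one waiting thread

// Reading thread (a worker; Atomics.wait isn't allowed on the main thread):
Atomics.wait(ia, 0, 0);   // block while the flag is still 0
console.log(ia[1]);       // read the payload
```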

# T.J. Crowder (6 years ago)

On Sun, Sep 23, 2018 at 5:12 AM Gus Caplan <me at gus.host> wrote:

Shared memory doesn't need to "sync"; it's the same memory. Atomics exists to provide primitives that help with timing (mostly race conditions).

Thanks. Perhaps I'm being misled by my experience with the Java memory model, which is of course different from JavaScript's. But my understanding is that a CPU (and possibly the JS VM itself) may cache a copy of shared memory for each thread to improve parallel processing. So while it's the same memory in theory, there are thread-local copies of parts of it, which can (and in Java, do) mean a thread can write a value to a shared variable and another thread won't see that new value without synchronization of some kind to force flushing those caches, etc. The (fantastic) articles by Lin Clark you linked on SO don't talk about that at all. Are you saying that doesn't happen with JavaScript's memory model? That seems unlikely to me; I'd want a canonical source on it. But I could imagine (and this is pure invention) that using an Atomics method on a buffer, in addition to preventing reordering etc., also triggers a cache flush or similar.

-- T.J. Crowder

# Allen Wirfs-Brock (6 years ago)

Does this help? tc39.github.io/ecma262/#sec-shared-memory-guidelines

# T.J. Crowder (6 years ago)

On Sun, Sep 23, 2018 at 5:04 PM Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Does this help? tc39.github.io/ecma262/#sec-shared-memory-guidelines

Thanks. Oh, I've read that, of course. I wouldn't be surprised if the answer is in there, but I'm afraid I'm not getting it if so. I'm prepared to believe I'm just being dense (in fact, it seems likely), if someone wants to point out a specific bit... :-)

-- T.J. Crowder

# kai zhu (6 years ago)

hi T.J., i'm not a sql-expert, but the big-picture ux-problem you have with memory-consistency might be solved by transactions in sql.js (sqlite3 compiled to asm.js -- or not, if that solution is too heavy) [1]. the latest build is of sqlite3 v3.22.0 and includes full-text-search support, which is a sorely lacking basic-feature in frontend-development in this day and age.

[1] live web-demo of sql.js using web-workers: kripken.github.io/sql.js/GUI

kai zhu kaizhu256 at gmail.com

# T.J. Crowder (6 years ago)

On Mon, Sep 24, 2018 at 8:42 AM kai zhu <kaizhu256 at gmail.com> wrote:

hi T.J., i'm not a sql-expert, but the big-picture ux-problem you have with memory-consistency might be solved by ...

Thanks, but this isn't a UX or SQL issue; my goal here is to develop a solid understanding of the JavaScript memory model with multiple threads and shared memory -- in particular, whether there is a per-thread CPU or VM caching issue and, if so, what to do to ensure correctness (other than using Atomics.load for each and every read, which would be silly and -- judging from Lars T. Hansen's Mandelbrot example -- clearly isn't necessary). I have a good understanding of these issues in other environments and want to bring my understanding of the JS environment up to scratch.

-- T.J. Crowder

# Lars Hansen (6 years ago)

In a browser, a postMessage send and receive pair was always intended to create a synchronization edge, in the same way a write-read pair does. tc39.github.io/ecmascript_sharedmem/shmem.html#WebBrowserEmbedding

Not sure where this prose ended up when the spec was transferred to the ES262 document.

# T.J. Crowder (6 years ago)

On Mon, Oct 1, 2018 at 4:42 AM Lars Hansen <lhansen at mozilla.com> wrote:

In a browser, a postMessage send and receive pair was always intended to create a synchronization edge, in the same way a write-read pair does. tc39.github.io/ecmascript_sharedmem/shmem.html#WebBrowserEmbedding

Not sure where this prose ended up when the spec was transferred to the ES262 document.

Thanks!

Looks like it's at least partially here: tc39.github.io/ecma262/#sec-host-synchronizes-with

So, a question (well, two questions) just for those of us not deeply versed in the terminology of the Memory Model section. Given:

  1. Thread A sends a 1k shared block to Thread B via postMessage
  2. Thread B writes to various locations in that block directly (not via Atomics.store)
  3. Thread B does a postMessage to Thread A (without referencing the block in the postMessage)
  4. Thread A receives the message and reads data from the block (not via Atomics.load)

...am I correct that in Step 4 it's guaranteed that Thread A will reliably see the writes to that block made by Thread B in Step 2, because the postMessage was a "synchronization edge" ensuring (amongst other things) that CPU L1d caches are up to date, etc.?
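In code, the scenario would be something like this (a minimal sketch; the file name, sizes and values are purely illustrative):

```js
// Thread A (main thread); 'worker.js' is an illustrative name.
const worker = new Worker('worker.js');
const sab = new SharedArrayBuffer(1024);
const bytes = new Uint8Array(sab);
worker.onmessage = () => {           // Step 4: receive the message...
  console.log(bytes[0], bytes[1]);   // ...and read the block with plain reads
};
worker.postMessage(sab);             // Step 1: send the shared block

// Thread B (worker.js):
onmessage = (e) => {
  const view = new Uint8Array(e.data);
  view[0] = 1;                       // Step 2: plain (non-Atomics) writes
  view[1] = 2;
  postMessage('done');               // Step 3: no reference to the block
};
```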

Similarly, if (!) I'm reading it correctly, in your Mandelbrot example you have an Atomics.wait on a single location in a shared block, and when the thread wakes up the code seems to assume that other data in the block (not in the wait range) can reliably be read directly. That's also a "synchronization edge"?

Thank you again, I appreciate your taking the time.

-- T.J. Crowder

# Lars Hansen (6 years ago)

On Tue, Oct 2, 2018 at 2:20 PM, T.J. Crowder <tj.crowder at farsightsoftware.com> wrote:

On Mon, Oct 1, 2018 at 4:42 AM Lars Hansen <lhansen at mozilla.com> wrote:

In a browser, a postMessage send and receive pair was always intended to create a synchronization edge, in the same way a write-read pair does. tc39.github.io/ecmascript_sharedmem/shmem.html#WebBrowserEmbedding

Not sure where this prose ended up when the spec was transferred to the ES262 document.

Thanks!

Looks like it's at least partially here: tc39.github.io/ecma262/#sec-host-synchronizes-with

So, a question (well, two questions) just for those of us not deeply versed in the terminology of the Memory Model section. Given:

  1. Thread A sends a 1k shared block to Thread B via postMessage
  2. Thread B writes to various locations in that block directly (not via Atomics.store)
  3. Thread B does a postMessage to Thread A (without referencing the block in the postMessage)
  4. Thread A receives the message and reads data from the block (not via Atomics.load)

...am I correct that in Step 4 it's guaranteed that Thread A will reliably see the writes to that block made by Thread B in Step 2, because the postMessage was a "synchronization edge" ensuring (amongst other things) that CPU L1d caches are up to date, etc.?

Yes, that was the intent of that language. The writes to the memory should happen-before the postMessage and the receive-message should happen-before the reads.

Similarly, if (!) I'm reading it correctly, in your Mandelbrot example you have an Atomics.wait on a single location in a shared block, and when the thread wakes up the code seems to assume that other data in the block (not in the wait range) can reliably be read directly. That's also a "synchronization edge"?

Yes, same argument. The writes happen-before the wake, and the wakeup of the wait happens-before the reads.

All of this is by intent so as to allow data to be written and read with cheap unsynchronized writes and reads, and then for the (comparatively expensive) synchronization to ensure proper observability.

# T.J. Crowder (6 years ago)

On Thu, Oct 11, 2018 at 11:00 AM Lars Hansen <lhansen at mozilla.com> wrote:

  1. Thread A sends a 1k shared block to Thread B via postMessage
  2. Thread B writes to various locations in that block directly (not via Atomics.store)
  3. Thread B does a postMessage to Thread A (without referencing the block in the postMessage)
  4. Thread A receives the message and reads data from the block (not via Atomics.load)

...am I correct that in Step 4 it's guaranteed that Thread A will reliably see the writes to that block made by Thread B in Step 2, because the postMessage was a "synchronization edge" ensuring (amongst other things) that CPU L1d caches are up to date, etc.?

Yes, that was the intent of that language. The writes to the memory should happen-before the postMessage and the receive-message should happen-before the reads.

Similarly, if (!) I'm reading it correctly, in your Mandelbrot example you have an Atomics.wait on a single location in a shared block, and when the thread wakes up the code seems to assume that other data in the block (not in the wait range) can reliably be read directly. That's also a "synchronization edge"?

Yes, same argument. The writes happen-before the wake, and the wakeup of the wait happens-before the reads.

All of this is by intent so as to allow data to be written and read with cheap unsynchronized writes and reads, and then for the (comparatively expensive) synchronization to ensure proper observability.

Fantastic, that's exactly what I was hoping.

Thank you again!

-- T.J. Crowder