Leo Dutra (2016-11-02T14:46:07.000Z)
This is not only easier, it is also faster.

We have a single memory allocation and all the threads share it.
Considering your case:

let a = { text: '' };

async function alpha() { a.text += 'hello' }
async function beta() { a.text += ' world' }

parallel.all( alpha, beta ) // or thread.parallel( alpha, beta )
  .then(x => console.log(x))

It will output ' worldhello' or 'hello world', just as any multithreaded
platform would.
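
To make the shared-memory point concrete, here is a rough sketch of the kind
of coordination the Stage 2 SharedArrayBuffer/Atomics proposal (mentioned
below) would give us — an integer counter instead of strings, since Atomics
operates on integer typed arrays, and the worker file name is made up:

```
// main.js
const sab = new SharedArrayBuffer(4);   // one shared 32-bit slot
const alpha = new Worker('adder.js');   // 'adder.js' is a placeholder name
const beta = new Worker('adder.js');
alpha.postMessage(sab);                 // a SharedArrayBuffer is shared, not copied
beta.postMessage(sab);

// adder.js — both workers bump the same slot; Atomics.add makes the
// read-modify-write indivisible, so no increment is lost no matter how
// the threads are scheduled.
onmessage = (e) => {
  const shared = new Int32Array(e.data);
  Atomics.add(shared, 0, 1);
};
```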



Leo Dutra, on Facebook <http://www.facebook.com/leodutra.br> and LinkedIn
<https://www.linkedin.com/in/leobr>

2016-11-02 12:39 GMT-02:00 Leo Dutra <leodutra.br at gmail.com>:

> If it is accepted, yes... Atomics should take care of racing.
>
> With this proposal we have a lighter context than most functional
> languages: we reaffirm that JavaScript memory is totally mutable (unlike
> Erlang/Rust/Haskell).
>
> This keeps the job simpler. I was planning for the worst case, and we
> actually have the easier one.
>
> Locking, immutability and non-shared mutation are the special cases.
> For those cases, Atomics and Mutex constructs will fit.
>
>
> Leo Dutra, on Facebook <http://www.facebook.com/leodutra.br> and LinkedIn
> <https://www.linkedin.com/in/leobr>
>
> 2016-11-02 12:27 GMT-02:00 Bradley Meck <bradley.meck at gmail.com>:
>
>> Consider:
>>
>> ```
>> let a = {};
>>
>> alpha: parallel {
>>   a.text = 'hello';
>> }
>> beta: parallel {
>>   a.text += ' world';
>> }
>> console.log(a);
>> ```
>>
>> This has racing:
>> * around `a.text` between `alpha:` and `beta:`.
>> * around `console.log` since `a` could be 1 of 3 values depending on how
>> threads are scheduled.
>>
>> I am stating that such racing/shared mutation should be prevented.
>> Workers do this by message passing and ownership of transferable data.
>> There could be other mechanics for synchronization, but I don't see a
>> simple solution. Things like a read-only view of the data help partially,
>> but atomics are most likely the proper way to do this if you don't
>> want message passing and ownership semantics.
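>>
>> To illustrate the ownership side, a small sketch using the existing
>> Worker API (the worker script name is only a placeholder): transferring
>> an ArrayBuffer hands it over instead of sharing it, so the sender cannot
>> race on it afterwards.
>>
>> ```
>> const worker = new Worker('consumer.js'); // placeholder worker script
>> const buf = new ArrayBuffer(1024);
>>
>> // Listing buf in the transfer list moves ownership to the worker.
>> worker.postMessage(buf, [buf]);
>>
>> // The buffer is detached on this side; no shared mutation is possible.
>> console.log(buf.byteLength); // 0
>> ```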
>>
>> On Wed, Nov 2, 2016 at 9:09 AM, Leo Dutra <leodutra.br at gmail.com> wrote:
>>
>>> There's nothing about threading that isn't already a problem with the
>>> event loop. I'd say there are even fewer problems.
>>>
>>> The proposal is seamless behaviour, equal to what we have now.
>>>
>>> Message passing is not a problem for the JS developer in this case, but a
>>> V8/WhateverMonkey problem.
>>>
>>> Changing a value inside a multithreaded async MUST behave the same way as
>>> a change inside a single-threaded async. Likewise, non-referenced
>>> variables SHALL NOT be scoped in the thread. This is not Java with
>>> volatiles. This is plain old JS with closures, openness and loose
>>> bare-metal control.
>>>
>>> Thread interruption is a bad practice anyway. And we could have a Mutex
>>> class, or some other idea, for that specific case.
>>>
>>> Workers are evented and started, not pooled and easy to use.
>>>
>>>
>>> Leo Dutra, on Facebook <http://www.facebook.com/leodutra.br> and LinkedIn
>>> <https://www.linkedin.com/in/leobr>
>>>
>>> 2016-11-02 11:57 GMT-02:00 Bradley Meck <bradley.meck at gmail.com>:
>>>
>>>> We need to be careful about this. I would never condone adding
>>>> threading that could share variables that were not intended to be
>>>> multi-threaded, so variable access outside of your `parallelize`
>>>> construct/syntax would need to be message passing when talking to
>>>> something that is not already written as a parallel structure. A notable
>>>> thing here is the Shared Memory and Atomics proposal at ECMA Stage 2:
>>>> https://github.com/tc39/ecmascript_sharedmem , which would probably need
>>>> to land before I could condone any shared mutable state.
>>>>
>>>> Historically, all JS implementations are based upon a job queueing
>>>> system described by the Event Loop. This is very different from
>>>> parallelism, which could have shared mutable state. All code is
>>>> guaranteed to have exclusive access to variables in scope until it
>>>> finishes running, and the content of those variables will not change due
>>>> to preemption (there are cases where this is not true in the browser
>>>> with a live DOM). There has been an alternative discussion recently on
>>>> Workers: https://esdiscuss.org/topic/standardize-es-worker . I might
>>>> look there first.
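>>>>
>>>> As a tiny illustration of that guarantee in today's JS: two queued jobs
>>>> mutate the same object, but neither can be preempted mid-update, so no
>>>> torn state is ever observable.
>>>>
>>>> ```
>>>> const counter = { value: 0 };
>>>> function bump() {
>>>>   const before = counter.value;
>>>>   // No other callback can run between the read above and the write
>>>>   // below: each job runs to completion before the next one starts.
>>>>   counter.value = before + 1;
>>>> }
>>>> setTimeout(bump, 0);
>>>> setTimeout(bump, 0);
>>>> setTimeout(() => console.log(counter.value), 0); // always 2
>>>> ```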
>>>>
>>>> In particular, I would suggest taking a bit of a look at the problems of
>>>> synchronization, locking, and preemption breaking existing code, rather
>>>> than just stating that green threads are the way to go.
>>>>
>>>> On Wed, Nov 2, 2016 at 8:45 AM, Leo Dutra <leodutra.br at gmail.com>
>>>> wrote:
>>>>
>>>>> ECMA introduced Promises and async-await in JS. This improves coding
>>>>> in an amazing way, reducing the control flow developers need to write
>>>>> to wrap an AJAX call or async I/O.
>>>>>
>>>>> JavaScript used to be a scripting tool and not a language. Classes,
>>>>> workers, sound control, GL rendering, Node.js modules (talking to the
>>>>> OS), incredible GC strategies and compilation on V8 and the Mozilla
>>>>> "monkeys"... the list goes on and on.
>>>>>
>>>>> That is almost all the features provided by older, mature platforms
>>>>> like Java, .NET, etc. In browsers, the newest JS features provide
>>>>> consistent tools for productivity and quality code.
>>>>>
>>>>> But there's a huge step to accomplish.
>>>>>
>>>>> ECMA introduced workers. Node.js came up with streams, native process
>>>>> spawning and the libuv thread pool. This is a lot, but not enough.
>>>>>
>>>>> All I hear about Node.js is how it is great for quick message I/O and
>>>>> bad for aggregations and impossible for parallel tasking. Again, we have
>>>>> workers and processes, but not green threads.
>>>>>
>>>>> I invite you to take a quick look at Akka and OTP (Erlang). More than
>>>>> that, I will argue: workers and process spawning show the latent desire
>>>>> for parallelism, and starting one of these is not "cheap", nor are they
>>>>> waiting in a pool.
>>>>>
>>>>> We use streams extensively in Node.js, and most frameworks hide them
>>>>> from us. Call it magic; I call it pragmatism.
>>>>>
>>>>> Now, async, await, Promises ("Futures")... we can make it all work in
>>>>> parallel.
>>>>>
>>>>> This would exploit libuv more in Node.js, and browsers could handle it
>>>>> too, seamlessly.
>>>>>
>>>>> Each function could be run in a green thread, pulled from a
>>>>> browser/libuv pool, allowing Node.js and browsers to process aggregations
>>>>> and heavy rendering without heavy start costs and complicated message
>>>>> control through events.
>>>>>
>>>>> Moreover, I ask why not: the "single-threaded nature of JS" looks more
>>>>> like a bad legacy from old browsers. We can do it in pieces, like the
>>>>> proposed async-await, and, on better days, provide a Parallel API
>>>>> (something like parallelize(() => { /* parallel stuff here */ })).
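>>>>>
>>>>> As a rough sketch only (the parallelize helper below is hypothetical,
>>>>> and unlike real syntax support it cannot capture closures), here is one
>>>>> way such an API could be approximated today with a plain Worker:
>>>>>
>>>>> ```
>>>>> // Run a function on a Worker built from a Blob URL and resolve with
>>>>> // its return value. Browser-side illustration only.
>>>>> function parallelize(fn) {
>>>>>   const src = `onmessage = () => postMessage((${fn})());`;
>>>>>   const url = URL.createObjectURL(new Blob([src], { type: 'text/javascript' }));
>>>>>   const worker = new Worker(url);
>>>>>   return new Promise((resolve, reject) => {
>>>>>     worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
>>>>>     worker.onerror = reject;
>>>>>     worker.postMessage(null);
>>>>>   });
>>>>> }
>>>>>
>>>>> // The callback runs off the main thread; it must be self-contained.
>>>>> parallelize(() => 6 * 7).then(x => console.log(x)); // 42
>>>>> ```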
>>>>>
>>>>> I want to leave you with the possibilities in mind and challenge this
>>>>> single-thread dogma.
>>>>>
>>>>> You have been told.
>>>>>
>>>>> Leo Dutra, on Facebook <http://www.facebook.com/leodutra.br> and LinkedIn
>>>>> <https://www.linkedin.com/in/leobr>
>>>>>
>>>>> _______________________________________________
>>>>> es-discuss mailing list
>>>>> es-discuss at mozilla.org
>>>>> https://mail.mozilla.org/listinfo/es-discuss
>>>>>
>>>>>
>>>>
>>>
>>
>