Future feedback

# Jorge (12 years ago)

On 13/05/2013, at 05:37, Jonas Sicking wrote:

On Sun, May 12, 2013 at 7:31 PM, Boris Zbarsky <bzbarsky at mit.edu> wrote:

Moreover the page can be reflowed between tasks. ANY async solution will have this property. What does it even mean to be async if you don't allow reflows in the meantime?

Work that is performed at end-of-microtask is sort of between fully asynchronous and normal synchronous. Since it runs as part of the same task it means that reflows can't happen before the end-of-microtask work happens.

This means that you get some of the advantages of asynchronous code, such as not having to worry about being in an inconsistent state due to code higher up on the call stack being half-run. And likewise you don't have to worry about messing up code higher up on the call stack that didn't expect things to change under it.

But it also means that you miss out on some of the advantages of asynchronous code: you still have to worry about hogging the event loop for too long and thus not processing pending UI events from the user.
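An illustrative sketch of that "between synchronous and asynchronous" ordering (not from the original thread; `queueMicrotask` and `setTimeout` are modern stand-ins for end-of-microtask work versus a fresh task):

```javascript
// The microtask runs after the currently executing script but before the
// timer's task, i.e. before the browser would get a chance to reflow.
const order = [];

setTimeout(() => order.push('task'), 0);       // a fresh task; reflows may happen before it
queueMicrotask(() => order.push('microtask')); // runs at the end of the current task
order.push('sync');                            // the currently running script

setTimeout(() => console.log(order.join(' -> ')), 10);
// expected: sync -> microtask -> task
```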

The event loop used to look roughly like this (node's nextTick used to be === setImmediate):

```js
while (RUN) {
  despatchSetImmediate();
  despatchIOandGUIandTimerEvents();
  if (!setImmediateQueue.length && !pendingEventsSignal) sleep();
}
```

IIUC node's (new) event loop now looks roughly like this instead (now that nextTick !== setImmediate):

```js
while (RUN) {
  despatchSetImmediate();
  despatchNextTickQueue();
  despatchIOandGUIandTimerEvents();
  if (!setImmediateQueue.length && !nextTickQueue.length && !pendingEventsSignal) sleep();
}
```

Unlike despatchSetImmediate(), despatchNextTickQueue() walks its queue entirely (simplified pseudocode):

```js
function despatchSetImmediate () {
  var queue = setImmediateQueue;
  setImmediateQueue = [];
  for (var i = 0; i < queue.length; i++) queue[i]();
}

function despatchNextTickQueue () {
  for (var i = 0; i < nextTickQueue.length; i++) nextTickQueue[i]();
  nextTickQueue.length = 0;
}
```

If a nextTick()ed function adds further nextTick()ed functions, those newly added functions will run in the current tick as well, unlike setImmediate()ed functions, which seems to be the whole point of this modified, new event loop.
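This same-tick behavior is observable with the real Node APIs (a sketch using `process.nextTick` and `setImmediate`, which still behave this way in current Node):

```javascript
// A nextTick()ed function that schedules another nextTick() sees it run in
// the same tick, before any setImmediate() callback gets a chance.
const order = [];

setImmediate(() => order.push('immediate'));
process.nextTick(() => {
  order.push('tick 1');
  process.nextTick(() => order.push('tick 2')); // drained in the same tick
});

setImmediate(() => console.log(order.join(', ')));
// expected: tick 1, tick 2, immediate
```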

But this also means that if nextTick()ed functions call nextTick() recursively, the event loop blocks!

To solve that they've added a counter into despatchNextTickQueue() so that it will never walk more than n elements of the nextTickQueue in a single tick.

Now that means that nextTick()ed functions may sometimes behave as if they had been setImmediate()d: you never know for sure.

To have a new event loop model that may block is a bad thing IMO, and the "let's add a counter" solution isn't a good solution.

Before the mod you always knew what was going to happen; now you don't.

# Mark S. Miller (12 years ago)

---------- Forwarded message ----------
From: Mark S. Miller <erights at google.com>
Date: Tue, May 14, 2013 at 4:54 PM
Subject: Re: Future feedback
To: Boris Zbarsky <bzbarsky at mit.edu>
Cc: David Bruant <bruant.d at gmail.com>, Sean Hogan <shogun70 at westnet.com.au>, Jonas Sicking <jonas at sicking.cc>, "public-script-coord at w3.org" <public-script-coord at w3.org>

I see. I was thinking primarily about incoming queues whereas this formulates the issue primarily in terms of outgoing queues. Rather than have a non-deterministic interleaving of events into the incoming queue, which then services them later, this just moves the non-deterministic choice as late as possible, at the point when the next turn is ready to start. This effectively removes the notion of an incoming queue from the model.

Curiously, this is how Ken (www.usenix.org/conference/usenixfederatedconferencesweek/composable-reliability-asynchronous-systems-treating) and NodeKen (research.google.com/pubs/pub40673.html) treat the persistent storage of distributed messages. The incoming queues are ephemeral, outgoing messages are not dropped until receipt has been acknowledged, and messages are not acknowledged until processed by a turn that has been checkpointed. On restart a different interleaving may be chosen, which the "incoming queue" model would have a harder time accounting for. I like it. AFAICT, this is a better way to specify communicating event loops in all ways.

# Mark Miller (12 years ago)

AFAICT, the microtask queue is just another output queue, and the strict priority of the microtask queue over other queues is just a policy choice of which outgoing queue to service next. The input queue model could not guarantee strict priority without creating a two level queue. The outgoing queue model keeps this separate with no loss of generality. Cool.

# Jonas Sicking (12 years ago)

Actually, mutation observers have some special behavior that only lasts until the end-of-microtask queue is empty. If you start observing the mutations that happen in a particular Node subtree rooted in a node A, you will be told about all mutations that happen in the nodes that were descendants of A until all end-of-microtask notifications have fired. So even if a node is removed from A and then modified, the observer is notified about those mutations as long as they happen before all end-of-microtask observers have fired.

At least that's how I think it works. You'd have to check the spec for more details.

Possibly this is something that can be changed though.

# Mark Miller (12 years ago)

Is there any reason that this can't be modeled with the end-of-microtask queue still being just one of many output queues? These observed mutations would just queue notifications on the end-of-microtask queue. The interleaving policy would be to always select an event from the end of microtask queue first if it is non-empty. I.e., strict priority, decided at the moment when the next turn is about to be started. Am I missing something?
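A minimal sketch of that model (names like `nextTurn` and the queue parameters are illustrative, not from any spec): the microtask queue is just another outgoing queue, and strict priority is a policy applied at the moment the next turn is chosen.

```javascript
// Strict-priority turn selection: always service the end-of-microtask queue
// first if it is non-empty, otherwise fall back to the ordinary task queue.
function nextTurn(microtaskQueue, taskQueue) {
  if (microtaskQueue.length > 0) return microtaskQueue.shift();
  if (taskQueue.length > 0) return taskQueue.shift();
  return null; // nothing to run: the loop would sleep here
}
```

Because the choice is made per turn, the model needs no two-level incoming queue; priority is purely a selection policy over separate outgoing queues.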

# Jonas Sicking (12 years ago)

On Tue, May 14, 2013 at 7:30 PM, Mark Miller <erights at gmail.com> wrote:

Is there any reason that this can't be modeled with the end-of-microtask queue still being just one of many output queues? These observed mutations would just queue notifications on the end-of-microtask queue. The interleaving policy would be to always select an event from the end of microtask queue first if it is non-empty. I.e., strict priority, decided at the moment when the next turn is about to be started. Am I missing something?

It's quite probably doable to modify the current solution. I'd recommend talking to Rafael Weinstein, Olli Pettay and Anne van Kesteren who designed and specified the current behavior.