setImmediate
From what I understand, setTimeout 0 serves that use case and there is no reason for setImmediate to be better at this job.
This is not true, as can be seen from domenic.me/setImmediate-shim-demo. The clamping inside nested callbacks prevents setImmediate from being as good at this job as postMessage or MessageChannel, so as long as there is still clamping on those (which from what I understand is back-compat-constrained) setImmediate is necessary.
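For concreteness, the technique that demo relies on can be sketched as below. This is a hedged sketch, not the demo's actual code: the name `immediate` is illustrative, and it assumes an environment with MessageChannel. The point is that postMessage on a channel port is not subject to the nested-timeout clamp, so each queued task still gets its own macrotask turn without the 4ms delay:

```javascript
// Sketch of an unclamped setImmediate-style queue using MessageChannel.
// Real shims (such as the setImmediate polyfill) use a similar approach
// with considerably more bookkeeping.
const queuedTasks = [];
const channel = new MessageChannel();

channel.port1.onmessage = () => {
  const task = queuedTasks.shift();
  if (task) task(); // one task per macrotask turn, so the UI can render in between
};

function immediate(task) {
  queuedTasks.push(task);
  channel.port2.postMessage(null); // unclamped, unlike nested setTimeout(..., 0)
}
```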
Microtasks are not sufficient for this purpose because they do not yield to the UI. If you ran the above demo with microtasks, the screen would update all at once, instead of smoothly showing the sorting progress.
I'm having a hard time understanding "before the browser renders again". I'm afraid this is asking for laggy UIs if there is the least bug.
This attitude forces us to maintain user-space trampolines inside microtasks. "Before the browser renders again" microtasks are no more dangerous than while loops, which are indeed what user-space trampolines are forced to defer to. There will always be people who abuse while loops; those same people will abuse microtasks. Trying to protect us from ourselves by not giving (easy! non-MutationObserver!) microtasks is a poor strategy.
Kyle Simpson wrote:
Promises implementations necessarily have to insert a defer/delay between each step of a sequence, even if all the steps of that sequence are already fulfilled and would otherwise, if not wrapped in promises, execute synchronously. The async "delay" between each step is necessary to create a predictable execution order between sync and async usage.
An implementation can keep track of the order in which messages arrived to the queue and process them in order. No need to impose a delay, no?
Yes, this is what I mean by maintaining a user-space trampoline. It is a not-insignificant amount of code to do correctly, especially with correct error semantics (i.e., disallow throwing tasks from interfering with future tasks, and re-throw their errors in order in such a way that they reach window.onerror). It would be much easier if the browser maintained this queue for us, and we could simply do window.asap(myTask); window.asap(anotherTask);. Here "window.asap" is a hypothetical pure microtask queue-er, distinct from setImmediate's macro-task queueing (and presumably without all the eval-if-string and arguments-passing stuff).
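As a sketch of the trampoline being described (all names are mine, and the Promise-based scheduler stands in for whatever unclamped trick a real shim would use): tasks run in arrival order, and a throwing task neither blocks later tasks nor swallows its error, which is re-thrown in its own turn so it can still reach the global error handler.

```javascript
// Minimal user-space microtask trampoline with the error semantics
// Domenic describes. Illustrative only, not a proposed API.
const taskQueue = [];
let flushing = false;

function asap(task) {
  taskQueue.push(task);
  if (!flushing) {
    flushing = true;
    scheduleFlush();
  }
}

function flush() {
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    try {
      task();
    } catch (err) {
      // Re-throw in a fresh turn: later tasks still run, and the error
      // still surfaces to window.onerror (or uncaughtException in Node).
      setTimeout(() => { throw err; }, 0);
    }
  }
  flushing = false;
}

function scheduleFlush() {
  // Stand-in scheduler; a browser shim would use a MutationObserver or
  // postMessage trick here instead.
  Promise.resolve().then(flush);
}
```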
2013/8/8 David Bruant <bruant.d at gmail.com>
I'm having a hard time understanding "before the browser renders again". I'm afraid this is asking for laggy UIs if there is the least bug. I would rather recommend the following approach: play with "abstract" data (regular objects/arrays, etc.) in tasks/microtasks and update the UI (DOM, Canvas, SVG, etc.) in requestAnimationFrame callbacks.
That's precisely what we want to do and why we need a mechanism for scheduling code to run in the next task/microtask.
The clamping inside nested callbacks prevents setImmediate from being as good at this job as postMessage or MessageChannel
Er, "prevents setTimeout(..., 0) from being as good..."
Le 08/08/2013 15:38, Domenic Denicola a écrit :
This is not true, as can be seen from domenic.me/setImmediate-shim-demo. The clamping inside nested callbacks prevents setImmediate from being as good at this job as postMessage or MessageChannel, so as long as there is still clamping on those (which from what I understand is back-compat-constrained) setImmediate is necessary.
The minimum delay is a mitigation mechanism implemented by browsers to avoid burning the CPU and leaving the page almost non-responsive when a page has something equivalent to:
setTimeout(function f() {
  setTimeout(f, 0);
}, 0);
(see the [4] of groups.google.com/a/chromium.org/forum/#!msg/blink-dev/Hn3GxRLXmR0/XP9xcY_gBPQJ for more details) I would love to know why this sort of bug cannot happen with setImmediate and why browsers won't be eventually forced to implement the exact same mitigation, making setImmediate(f) effectively an equivalent of setTimeout(f, 0).
Adam Barth made an equivalent argument in the Chromium thread.
This attitude forces us to maintain user-space trampolines inside microtasks. "Before the browser renders again" microtasks are no more dangerous than while loops, which are indeed what user-space trampolines are forced to defer to. There will always be people who abuse while loops; those same people will abuse microtasks. Trying to protect us from ourselves by not giving (easy! non-MutationObserver!) microtasks is a poor strategy.
This is not a "Trying to protect us from ourselves" situation. This is a "browser trying to protect users from any sort of abuse" situation. For while loops, they implemented the "script takes too long" dialog. For mistakenly infinitely nested too-short setTimeouts, they implemented 4ms clamping. If browsers can't have mitigation strategies when features are abused, we will run into the same situations as before.
As a JS dev, I want the same features as you. Now, how do browsers make sure this doesn't drain users' batteries in case of misuse? (I don't have an answer yet)
Yes, this is what I mean by maintaining a user-space trampoline. It is a not-insignificant amount of code to do correctly, especially with correct error semantics (i.e., disallow throwing tasks from interfering with future tasks, and re-throw their errors in order in such a way that they reach window.onerror). It would be much easier if the browser maintained this queue for us
That's what I suggested ("the implementation keeps track..."), isn't it? Do we disagree?
and we could simply do window.asap(myTask); window.asap(anotherTask);. Here "window.asap" is a hypothetical pure microtask queue-er, distinct from setImmediate's macro-task queueing (and presumably without all the eval-if-string and arguments-passing stuff).
I agree and I want "window.asap" asap. But I have the same question about misuse and battery. We need to tell implementors how they mitigate misuses. Otherwise, they'll just fall back to clamping as they did with setTimeout.
This is not a "Trying to protect us from ourselves" situation. This is a "browser trying to protect users from any sort of abuse" situation. For while loops, they implemented the "script takes too long" dialog. For mistakenly infinitely nested too-short setTimeouts, they implemented 4ms clamping. If browsers can't have mitigation strategies when features are abused, we will run into the same situations as before.
As a JS dev, I want the same features as you. Now, how do browsers make sure this doesn't drain users' batteries in case of misuse? (I don't have an answer yet)
To me the answer always seemed obvious: use the slow-script dialog. What am I missing?
That's what I suggested ("the implementation keeps track..."), isn't it? Do we disagree?
I assumed by "implementation" you meant "promise implementation," as in the quoted paragraph. I'd much rather have the browser implementation maintain it.
I agree and I want "window.asap" asap. But I have the same question about misuse and battery. We need to tell implementors how they mitigate misuses. Otherwise, they'll just fall back to clamping as they did with setTimeout.
Why are implementers OK with giving us postMessage/MessageChannel but not setImmediate? Why are they OK with giving us MutationObservers/Object.observe but not "window.asap"?
On Thu, Aug 8, 2013 at 3:03 PM, Domenic Denicola <domenic at domenicdenicola.com> wrote:
To me the answer always seemed obvious: use the slow-script dialog. What am I missing?
That seems like a bad answer. Slow-script dialogs are a misfeature. They only exist because otherwise single-threaded browsers would be in a world of hurt.
(As to why certain features and not others. I doubt such usage was foreseen in the creation of those features.)
Le 08/08/2013 16:03, Domenic Denicola a écrit :
To me the answer always seemed obvious: use the slow-script dialog. What am I missing?
For a while loop or just a too-long-running script, this dialog may break your JS stack anywhere and stop at any instruction. It may not always be easy to recover, as it may break all sorts of invariants your code relies on. It's exactly like OS thread pre-emption. But there is not really a cleaner way.
Let's say a spec requires window.asap to prompt a slow-script dialog if abused. I imagine that all or part of the event queue is flushed (this needs to be standardized too). This also breaks program invariants in all sorts of ways. Let's make guesses. Let's say a browser decides that the "script too slow" dialog is poor UX and that it wouldn't be such a big deal to insert handling of click events here and there... Suddenly, website X (which abuses microtasks) runs better on browser Y than on Z. This might encourage Z to break the "microtask contract" as well. Once it's at it, Y might just add 4ms clamping, because delaying a microtask sounds more friendly than randomly breaking code invariants.
Small delays between (micro)tasks sound like a local maximum that's hard to get away from, unfortunately :-(
I assumed by "implementation" you meant "promise implementation," as in the quoted paragraph. I'd much rather have the browser implementation maintain it.
I meant "browser implementation", sorry for the confusion.
Why are implementers OK with giving us postMessage/MessageChannel but not setImmediate? Why are they OK with giving us MutationObservers/Object.observe but not "window.asap"?
I feel we just have to wait until these are abused and we'll see the clamping solution coming back. Exactly like setImmediate would be forced to if widely (and eventually mistakenly) used.
Maybe we can add the same feature with a different name every 5 years? :-) We need to randomly choose the name so that people don't prolyfill it.
Slow-script dialogs are a misfeature.
As I see it, what we want out of the way browsers handle infinite-loop-like code is:
- Something which has the minimum possible impact on well designed pages.
- Something which will gracefully kill badly designed pages before they break the user's device/drain the battery.
I think slow-script dialogs provide both of those in just about the best possible way.
Also, on the point about draining the battery: using very short timeouts is terrible for battery life, but using setImmediate is considerably better.
Le 08/08/2013 16:43, Forbes Lindesay a écrit :
Also, on the point about draining the battery, using very short timeouts is terrible for battery life but using setImmediate is considerably better.
Why so?
Why so?
I think it was something Domenic Denicola said that I'm remembering, but don't the extremely short timeouts mean more work (and thus power) for the CPU's timer? I'm sure I remember reading something about timeouts less than a certain amount using additional power.
If I'm wrong (which I may be), I apologise for the misinformation.
Right, +1 to both of Forbes's points.
I think the essential equivalence I want to get across is between microtasks ("window.asap") and synchronous loops. If there is a better solution than the slow-script dialog for such scenarios, great! Maybe we can use it in future APIs like window.asap, and leave the slow-script dialog as something that happens with synchronous loops because of legacy.
But if we haven't come up with some better idea to deal with lots of synchronous script execution, I see no reason to prevent microtasks from being exposed, any more than we prevent synchronous looping primitives from being exposed. They are literally the same thing, after all, except that one takes place during the main stage of the event loop and one takes place at the end stage.
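That equivalence can be made concrete with a small sketch (mine, not from the thread), using Promise.resolve().then as a stand-in microtask queue-er: the chain below keeps re-queueing itself at the end of the turn, blocking rendering exactly the way the equivalent while loop would.

```javascript
// A microtask chain that does the same work as `while (remaining > 0) ...`:
// all three steps drain before the event loop reaches its next task.
const micro = fn => Promise.resolve().then(fn);

let remaining = 3;
const log = [];

(function pump() {
  if (remaining > 0) {
    log.push(remaining--);
    micro(pump); // chain another microtask; rendering waits, just as with a loop
  }
})();
```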
From: Forbes Lindesay [forbes at lindesay.co.uk]
Why so?
I think it was something Domenic Denicola said that I'm remembering, but don't the extremely short timeouts mean more work (and thus power) for the CPU's timer? I'm sure I remember reading something about timeouts less than a certain amount using additional power. If I'm wrong (which I may be), I apologise for the misinformation.
This may be the case only on Windows, but Microsoft has repeatedly claimed (and shown) that scheduling timers somehow "wakes up" the computer from its low-power state. Thus repeatedly scheduling timers keeps it in some kind of "high alert" state where it's never ready to settle down into low-power because it knows that within a few milliseconds it'll need to perform the appropriate timer interrupt to fire the task.
This may be Windows-specific, or IE-specific, or even just FUD (i.e., if IE was smarter it could implement timers as efficiently as it does setImmediate). I am not really in a position to say. But it is the reality today with IE10 and IE11.
Le 08/08/2013 16:03, Domenic Denicola a écrit :
To me the answer always seemed obvious: use the slow-script dialog. What am I missing?
Maybe implementations could decide to break a microtask chain, but instead of prompting a dialog, they just break it and call a callback (later, in a different task, not microtask) so that the script knows and can try to recover.
<draft>
asap(f); // queues a microtask
var microtask = asap(g); // queues a microtask
microtask.on('toolong', h); // if the browser breaks somewhere in this microtask subtree, call h
</draft>
You can't know where in the microtask something went wrong, but you can try to recover locally. Everyone creating microtasks can do it.
This way, implementations do what they want to preserve the user from error without annoying the user with impossible choices (let's be honest, the decision to continue or end a script based on a filename and line number is absurdly hard to make) and authors can try to recover from the partial failures, all locally.
Whad'ya think?
Hmm, interesting!
I wonder if it could be even simpler than that: after an arbitrary limit (in time, not number of microtasks), just reschedule for the next event loop turn's microtask phase. For promise applications there is no problem with this; I am not sure, however, if that is an entirely representative use case. You could definitely imagine some invariants being broken. I guess that's why you have a 'toolong' event, although I am skeptical that users of microtasks will have the knowledge of what to do in order to uphold their invariants in the rare case of being preempted.
Anyway, it's exciting that there are indeed alternatives to the slow-script dialog :).
How about just adding a parameter that tells you whether it was delayed for taking too long:
asap(function (tooLong) {
  if (tooLong) {
    // attempt to restore invariants here
  }
  // do work here
})
And then follow @domenic's solution of just pushing it into the next macro-task if it has spent too long executing microtasks.
That way users who didn't care about such invariants could just ignore that argument and those who do can choose how to handle it gracefully.
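As a sketch only, that idea could be prototyped in user space roughly as below. Everything here is illustrative, not a proposed standard: the names, the 4ms budget, and the Promise-based scheduler are all mine. Queued tasks are flushed until a time budget runs out; the remainder is deferred to the next macrotask, and the first resumed task is told it was delayed so it can restore invariants.

```javascript
const BUDGET_MS = 4; // illustrative budget, not a spec'd value
const pendingTasks = [];
let flushScheduled = false;

function asapBudgeted(task) {
  pendingTasks.push(task);
  if (!flushScheduled) {
    flushScheduled = true;
    Promise.resolve().then(() => flushBudgeted(false));
  }
}

function flushBudgeted(tooLong) {
  const start = Date.now();
  while (pendingTasks.length > 0) {
    if (Date.now() - start > BUDGET_MS) {
      // Budget exhausted: defer the rest to a macrotask and flag the delay.
      setTimeout(() => flushBudgeted(true), 0);
      return;
    }
    pendingTasks.shift()(tooLong);
    tooLong = false; // only the first task after a deferral sees the flag
  }
  flushScheduled = false;
}
```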
tl;dr - I would simply fix setTimeout 0
^_^
long story long:
from a user perspective, I never understood why setTimeout(fn, 0, arguments) does not act as setImmediate(fn, arguments), where the latter is apparently needed to replace the misleading behavior of the first, "broken", setTimeout call.
IMO, 0 (zero) means "immediate", "asap", on the next "tick" ... why should a user care about bad implementations from vendors? Why should vendors accept 0 at all if 4ms is the minimum? The problem is infinite loops? for(var i = arr.length; --i; doStuff); is still able to "block a thread", so why would setTimeout care? How can setImmediate prevent badly designed code or infinite loops in a way setTimeout 0 couldn't?
Last but not least: I have no idea why DOM and W3C features should change anything in current ES specs, and I still don't practically get what makes setImmediate so special that setTimeout(fn, 0[, arguments]) cannot achieve it, resulting in a simplified, unique way to set timers in the JS world and leaving W3C stuff out of the equation.
Best
Le 08/08/2013 17:00, Domenic Denicola a écrit :
Hmm, interesting!
I wonder if it could be even simpler than that, and after an arbitrary limit (in time, not number of microtasks), just reschedule for the next event loop turn's microtask phase. For promise applications there is no problem with this; I am not sure however if that is an entirely representative use case. You could definitely imagine some invariants being broken. I guess that's why you have a 'toolong' event
Exactly.
although I am skeptical that users of microtasks will have the knowledge of what to do in order to uphold their invariants in the rare case of being preempted.
Exactly like with a script stuck in a while loop of sorts (for which we don't even have a callback to repair things; maybe an extension to functions could allow this sort of mechanism for normal function calls). For the vast majority of the code I have written, I think I count myself in the category of those who do not know how their code is broken by the "script too long" dialog. As you say, it should be a rare case (as rare as the slow-script dialog nowadays, I imagine).
On Thu, Aug 8, 2013 at 9:40 AM, David Bruant <bruant.d at gmail.com> wrote:
Small delays between (micro)task sounds like a local maximum it's hard to get away from unfortunately :-(
What if, instead of a slow script dialog, browsers responded to microtask/setTimeout(0) abuse with gradual throttling? Well-behaved applications would get immediate setTimeout(0) callbacks. Badly behaved applications would run slowly.
Le 08/08/2013 22:04, Jason Orendorff a écrit :
On Thu, Aug 8, 2013 at 9:40 AM, David Bruant <bruant.d at gmail.com> wrote:
Small delays between (micro)task sounds like a local maximum it's hard to get away from unfortunately :-(
I think I was wrong here when it comes to microtasks. Microtasks bypass the event queue, so delaying them delays all the other messages in the queue by definition. Forcing a delay on microtasks means an unresponsive UI.
What if, instead of a slow script dialog, browsers responded to microtask/setTimeout(0) abuse with gradual throttling? Well-behaved applications would get immediate setTimeout(0) callbacks. Badly behaved applications would run slowly.
That's already the case with setTimeout, in a way. Try: setTimeout(function f(){ setTimeout(f, 0); }, 0). You never get the slow-script dialog. The 4ms clamping is there to make sure this code runs yet does not burn the CPU.
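The clamping is easy to observe directly. This snippet (mine, not from the thread) times a chain of nested zero-delay timeouts; in browsers that clamp nested timeouts to 4ms the total comes out to tens of milliseconds, far more than the 0ms requested:

```javascript
// Time ~20 nested setTimeout(..., 0) hops. Each hop past the clamping
// threshold costs at least 4 ms in clamping browsers.
let hops = 20;
const t0 = Date.now();
(function hop() {
  if (--hops > 0) {
    setTimeout(hop, 0);
  } else {
    console.log('elapsed ms:', Date.now() - t0);
  }
})();
```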
Other than that, the browser with the shortest delay wins the battle, I believe. "my application runs faster in X than in Y" forcing Y to reduce the delay and align with X.
Now that I think about it, maybe the proposal I made for microtasks [1] could work for setImmediate. setImmediate would be guaranteed to run asap (in a different task, not microtask) without clamping. The mitigation for browsers is possible via killing too-deeply-nested setImmediates (preferably before running one and not in the middle of one :-p) and telling the script if it asked to be notified. That's a version of setImmediate I would agree with, as it would be a significant improvement over what we have today.
David
Why is the slow script dialog box even relevant for setImmediate? As I understand it, setImmediate is equivalent to DoEvents in Visual Basic/Windows Forms and pumping the message loop in a normal C application. That is, you can use setImmediate to make your application run as fast as possible while still allowing the browser to pump messages, which ensures keyboard/mouse inputs are processed and the window does not get flagged as unresponsive.
This is ideal (especially compared to setTimeout 0, which introduces the use of timers and slows everything down in this use case, and compared to requestAnimationFrame which needlessly would slave computation to vsync). People who are writing long computation loops right now that hang the browser main thread for multiple seconds can split them up with setImmediate without causing any major performance regressions.
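A minimal sketch of that splitting pattern (the names, chunk size, and the setTimeout fallback are illustrative): the work loop processes one chunk per turn, then yields to the event loop so input events can be handled before the next chunk.

```javascript
// Prefer setImmediate where available; fall back to clamped setTimeout.
const defer = typeof setImmediate === 'function'
  ? setImmediate
  : fn => setTimeout(fn, 0);

function sumInChunks(numbers, chunkSize, done) {
  let index = 0;
  let total = 0;
  (function step() {
    const end = Math.min(index + chunkSize, numbers.length);
    for (; index < end; index++) {
      total += numbers[index]; // stand-in for real per-item work
    }
    if (index < numbers.length) {
      defer(step); // yield to the event loop, then continue
    } else {
      done(total);
    }
  })();
}
```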
Whether or not setImmediate would increase battery usage is something you'd have to test; this isn't a case where timers would be waking the CPU up and keeping it awake, though; this is a case where computation would be keeping the CPU awake, and ultimately computation has to finish sooner or later. You're not going to save power just by making computation take longer unless you can ensure the CPU and other components remain in a low-power state during the computation.
On 08/08/2013, at 15:55, David Bruant wrote:
This is not a "Trying to protect us from ourselves" situation. This is a "browser trying to protect users from any sort of abuse" situation. For while loops, they implemented the "script takes too long" dialog. For mistakenly infinitely nested too-short setTimeouts, they implemented 4ms clamping. If browsers can't have mitigation strategies when features are abused, we will run into the same situations as before.
As a JS dev, I want the same features as you. Now, how do browsers make sure this doesn't drain users' batteries in case of misuse? (I don't have an answer yet)
I think that it can't be avoided. A program, in the middle of a longish operation, must yield to the event loop to avoid event starvation and/or to force redraws, so there must be a way to do so, and it must be fast (without 4ms clampings).
Yes, there are malicious sites and there are silly programmers to drain your batteries, but there are also 100% legit reasons to spin the event loop...
I would put in the browsers a cpu hog/battery drain dial/indicator per page, so that the users could at least see it and act accordingly (they'll soon learn why that's important).
I for one have already uninstalled lots of iPhone apps, just because they drained my batteries too fast.
Also, the original "classic" MacOS had an EventAvail() call to let the program know if there were any events pending, in a program in a busy loop this helps decide whether it's time to yield or not.
For promises using microtasks, one possibility I've been experimenting with in my polyfill is a Promise.yield() method that returns a Promise that resolves after the next time the UI thread gets a chance to drain its event queue (either through requestAnimationFrame or setTimeout).
While it works better with async/await or generators+trampoline, it still works with Promise#then (and Promise#flatMap). It doesn't prevent developers from writing bad code, but it does provide a way to break out of the microtask scheduler.
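A hedged sketch of that idea, not the actual polyfill code (`yieldToUI` and `processAll` are illustrative names; the thread describes the real method as Promise.yield()): the returned promise resolves only after the thread has had a chance to drain its event queue, via requestAnimationFrame when available, else setTimeout.

```javascript
function yieldToUI() {
  return new Promise(resolve => {
    if (typeof requestAnimationFrame === 'function') {
      requestAnimationFrame(() => resolve()); // after the next render opportunity
    } else {
      setTimeout(resolve, 0); // fallback: after the next macrotask turn
    }
  });
}

// Usage: break out of a long microtask chain every so often.
async function processAll(items, handle, yieldEvery = 100) {
  for (let i = 0; i < items.length; i++) {
    handle(items[i]);
    if ((i + 1) % yieldEvery === 0) await yieldToUI(); // let the UI breathe
  }
}
```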
Also, unless it breaks an invariant or expectation, would it be useful to have microtasks periodically and/or randomly yield to the event queue to allow the UI to drain its events? There could also be a (albeit probably better named) nextMicrotaskWillYield() API that could be called to have some foreknowledge as to whether future microtasks scheduled within the current microtask will be delayed until after the browser can process tasks or the event queue.
On Aug 8, 2013, at 7:09 AM, Anne van Kesteren <annevk at annevk.nl> wrote:
On Thu, Aug 8, 2013 at 3:03 PM, Domenic Denicola <domenic at domenicdenicola.com> wrote:
To me the answer always seemed obvious: use the slow-script dialog. What am I missing?
That seems like a bad answer. Slow-script dialogs are a misfeature. They only exist because otherwise single-threaded browsers would be in a world of hurt.
Wait, what? The semantics of the web demand that runaway JS block any other event turns, as well as page layout/rendering. Multi-process browsers can prevent cross-origin pages from interfering with each other, and Servo can do speculative layout to reward well-behaved pages, but badly-behaved pages unavoidably destroy the UX. Maybe I'm unimaginative, but the only alternative to the slow-script dialog I can see is to allow a page to completely destroy itself unrecoverably.
On Aug 8, 2013, at 2:08 PM, K. Gadd <kg at luminance.org> wrote:
Why is the slow script dialog box even relevant for setImmediate? As I understand it, setImmediate is equivalent to DoEvents in Visual Basic/Windows Forms and pumping the message loop in a normal C application. That is, you can use setImmediate to make your application run as fast as possible while still allowing the browser to pump messages, which ensures keyboard/mouse inputs are processed and the window does not get flagged as unresponsive.
Yeah, I'm actually not at all clear which of (at least?) four plausible semantics could be meant by setImmediate:
(a) push a new microtask (to the front of the current microtask list)
(b) enqueue a new microtask (to the back of the current microtask list)
(c) push a new event (to the front of the event queue)
(d) enqueue a new event (to the back of the event queue)
I'd always assumed it meant (d), which seems to me to be "what setTimeout(..., 0) really wanted to be." If people want something for scheduling microtasks, I'd think they would still want something for scheduling events.
For the record, my opinions on this whole space are:
- I'm pretty sure we have to leave implementations free to throttle event queues, but current competitive pressure over performance would ensure that a new API would not be as heavily throttled as setTimeout 0.
- Microtasks should not be throttled since they block the event queue.
- Microtasks should be treated the same as ordinary synchronous code.
- I see no reasonable response to runaway microtask churn other than the slow-script dialog.
(Opinions subject to revision yadda yadda yadda.)
Wait, what? The semantics of the web demand that runaway JS block any other event turns, as well as page layout/rendering. Multi-process browsers can prevent cross-origin pages from interfering with each other, and Servo can do speculative layout to reward well-behaved pages, but badly-behaved pages unavoidably destroy the UX. Maybe I'm unimaginative, but the only alternative to the slow-script dialog I can see is to allow a page to completely destroy itself unrecoverably.
I don't think of you as unimaginative, but I think there are other options. Sure, there must be a way to "kill" a script that's burning your CPU but it doesn't have to be a dialog.
Firstly, there's nothing really preventing a browser from performing a layout if it actually pauses the script, even if it may be hard to pause a thread while keeping all its pointers safe in an environment that's changed by another thread. But this is not impossible.
Secondly, when a website becomes resource-intensive, you can display a "toolbar" saying 'This website seems to be overusing your computer.' with a (stop the script) button that doesn't interrupt your experience (you can switch tabs if you want, which will in turn cause the tab to get far fewer system resources since it's in the background), or continue to use the website if it turns out it's just slow but not stuck in an infinite loop.
By the way, this kind of solution is totally applicable to setTimeout/setInterval loops that for now aren't covered by the slow-script dialog, or to a script that overuses your GPU with offscreen WebGL.
Le 13/08/2013 01:58, David Herman a écrit :
On Aug 8, 2013, at 2:08 PM, K. Gadd <kg at luminance.org> wrote:
Why is the slow script dialog box even relevant for setImmediate? As I understand it, setImmediate is equivalent to DoEvents in Visual Basic/Windows Forms and pumping the message loop in a normal C application. That is, you can use setImmediate to make your application run as fast as possible while still allowing the browser to pump messages, which ensures keyboard/mouse inputs are processed and the window does not get flagged as unresponsive.
Yeah, I'm actually not at all clear which of (at least?) four plausible semantics could be meant by setImmediate:
(a) push a new microtask (to the front of the current microtask list)
(b) enqueue a new microtask (to the back of the current microtask list)
(c) push a new event (to the front of the event queue)
(d) enqueue a new event (to the back of the event queue)
I'd always assumed it meant (d)
Yes, this is (d). I think I'm partially responsible for side-tracking the discussion to talk about microtasks. Sorry about that.
- I see no reasonable alternative to runaway microtask churn other than slow-script dialog.
So did Domenic (esdiscuss/2013-August/032622). I suggested something else (esdiscuss/2013-August/032630) and he found the idea interesting. What do you think?
On Aug 12, 2013, at 5:40 PM, François REMY <francois.remy.dev at outlook.com> wrote:
I don't think of you as unimaginative, but I think there are other options.
:)
Sure, there must be a way to "kill" a script that's burning your CPU but it doesn't have to be a dialog.
That's fine, I guess I didn't really mean dialog box was the only UI, just that killing the JS entirely is the only reasonable semantics I can imagine.
Firstly, there's nothing really preventing a browser from performing a layout if it actually pauses the script, even if it may be hard to pause a thread while keeping all its pointers safe in an environment that's changed by another thread. But this is not impossible.
Not impossible to implement, but very bad. It would create new preemption semantics to shared state in JS, which is moooost of the time trying very hard to be strictly based on cooperative concurrency. Not only would injecting this preemption be a new potential source of very subtle bugs, it could be a security problem (run some slow script to force a new layout and use that as a bogus communication channel).
Secondly, when a website becomes resource-intensive, you can display a "toolbar" saying 'This website seems to be overusing your computer.' with a (stop the script) button that doesn't interrupt your experience (you can switch tabs if you want, which will in turn cause the tab to get far fewer system resources since it's in the background), or continue to use the website if it turns out it's just slow but not stuck in an infinite loop.
Sure-- again, playing with the UI isn't really what I meant. It's that semantically I see no alternative to a slow script but killing JS.
Trying to move here the discussion happening at bugzilla.mozilla.org/show_bug.cgi?id=686201 (recent discussion starts at comment 26). Moving it here because I believe it overlaps a lot with the ongoing ES6/ES7 work bringing the event loop to ECMA262 (module loading, Object.observe, etc.).