More flexibility in the ECMAScript part? (was: Re: Futures)

# David Bruant (12 years ago)

Le 17/04/2013 21:03, Allen Wirfs-Brock a écrit :

On Apr 17, 2013, at 10:28 AM, David Bruant wrote:

...

Although promises were planned for ES7, they weren't part of any formal spec. As far as I know, no recent TC39 meetings even mentioned the concurrency strawman [2].

I don't think the "mention" observation is totally correct. More generally, it is the actual TC39 participants who set the agenda for meetings and who build the necessary consensus to advance proposals. That is how the ES7 observe proposal has advanced as rapidly as it has. It is up to the champions of a specific proposal (such as Alex or Mark, in this case, but it could be others) to add appropriate agenda items and to move the process forward.

But that didn't happen. Instead, Alex Russell gathered a couple of people, started with a minimal proposal privately, then opened it up to public feedback; it then entered the WHATWG DOM spec (people have mixed feelings about that). The news was important enough to be shared on a couple of W3C mailing lists I follow, and there seem to be plans to incorporate promises into other W3C specs.

Object.observe has moved quite fast, but it still can't be consumed by other specs as far as I can tell (there might also be an outreach issue, which is a different problem).

The absence of any formally accepted and agreed-upon spec makes ES7 promises and the concurrency strawman virtually nonexistent. The current, largely informal agreement on the concurrency strawman doesn't solve the immediate problem of the web platform using promises/futures.

I believe the problem lies in ECMAScript's monolithic spec-snapshot model. This model doesn't allow the flexibility needed by WebIDL and the web platform, which are important consumers of the spec. I believe this is why the WHATWG was chosen to host the Future spec work [4].

TC39 does not exclusively do monolithic specs. See for example Ecma-402, the ECMAScript Internationalization API [5], which is a modular addition to Ecma-262. It is also a good example of domain experts working within the context of TC39 to focus attention on a specific feature area.

True. Can you provide a list of the next upcoming ECMA-xxx, please?

However, there is a very good reason that the ECMAScript Language specification is a monolithic spec. Language design is different from library design. (...)

The topic at hand is promises, which is a library. I entirely agree with the rest of what you said about language design. I'm perfectly fine with the JS "virtual machine + syntax" being spec'ed as a monolith and agree it should be, for the reasons you cited.

There is material in Ecma-262, particularly as ES6 emerges, that is basically library features, and there have been casual conversations within TC39 about the desirability and practicality of having separate standards for some library components. Ecma-402 is an example of this. However, some care needs to be exercised here, because sometimes library-based features are actually cross-cutting language semantic extensions that are just masquerading as a library.

I understand. Has the Future [1] proposal been reviewed with this needed care? If not, can this be added to the agenda for the next meeting? (so asks the non-TC39 guy :-p) It feels important.

Assuming this is the agreed-upon cause, would it make sense for the ECMAScript spec model to change to fit the flexibility needs of WebIDL and the web platform? I'm also going to ask a pretty violent question: does it still need to be spec'ed by Ecma? The only argument I've heard in favor of staying at Ecma is that some people still find ISO standardization and Word/PDF important. Can this be re-assessed, especially given the recent promise/future mess?

Language design is what it is, and to responsibly extend ECMAScript you need to have experienced language designers engaged. I think organizational venues and processes have very little to do with the actual pragmatics of how you design extensions for a language as prominent as JavaScript.

I'm glad we agree on this point :-)

David

[1] dom.spec.whatwg.org/#futures

# Allen Wirfs-Brock (12 years ago)

On Apr 17, 2013, at 2:25 PM, David Bruant wrote:

Le 17/04/2013 21:03, Allen Wirfs-Brock a écrit :

On Apr 17, 2013, at 10:28 AM, David Bruant wrote:

...

Although promises were planned for ES7, they weren't part of any formal spec. As far as I know, no recent TC39 meetings even mentioned the concurrency strawman [2].

I don't think the "mention" observation is totally correct. More generally, it is the actual TC39 participants who set the agenda for meetings and who build the necessary consensus to advance proposals. That is how the ES7 observe proposal has advanced as rapidly as it has. It is up to the champions of a specific proposal (such as Alex or Mark, in this case, but it could be others) to add appropriate agenda items and to move the process forward. But that didn't happen. Instead, Alex Russell gathered a couple of people, started with a minimal proposal privately, then opened it up to public feedback; it then entered the WHATWG DOM spec (people have mixed feelings about that). The news was important enough to be shared on a couple of W3C mailing lists I follow, and there seem to be plans to incorporate promises into other W3C specs.

You have to ask Alex about that. We all, as individuals (subject to constraints our employers might impose), can create designs, circulate them, find supporters, promote their use, etc. But that is just personal action. However, when a standards-setting organization (SSO) agrees to undertake the development of a standard, that is a very different affair, and formal rules start to apply. For example, there may be rules on dependencies, such as the rule that an Ecma (or ISO) standard is not allowed to normatively reference things that aren't also normative standards from a recognized SSO. Different organizations also have different chartered areas of responsibility. One of the concerns I heard raised here is that Futures may more appropriately fall into TC39's areas of responsibility. Often, when SSOs find they have some overlap of interest, they will find a way to jointly develop a standard.

Object.observe has moved quite fast, but still can't be consumed by other specs as far as I can tell (there might also be an outreach issue which is a different one)

Object.observe isn't part of a finished standard yet. There is a fairly detailed proposal [2] (but not an official specification draft) that has been "accepted" by TC39 as the basis for a future standardized specification. Some people are working on browser implementations based upon the proposal. It is expected that the ultimate specification may vary somewhat from the proposal based upon feedback from those implementations.

Whether or not other specs choose to take dependencies upon Object.observe probably depends upon the rules they operate under, plus their judgement of the associated risks.

...

TC39 does not exclusively do monolithic specs. See for example Ecma-402, the ECMAScript Internationalization API [5], which is a modular addition to Ecma-262. It is also a good example of domain experts working within the context of TC39 to focus attention on a specific feature area. True. Can you provide a list of the next upcoming ECMA-xxx, please?

Work is already underway on the 2nd edition of Ecma-402, the Internationalization APIs.

That (other than ES6) is the only formal standard in the current pipeline, but there are other exploratory projects (see [3]) that are likely to either become part of the ES7 effort or a separate spec.

However, there is a very good reason that the ECMAScript Language specification is a monolithic spec. Language design is different from library design. (...) The topic at hand is promises, which is a library. I entirely agree with the rest of what you said about language design. I'm perfectly fine with the JS "virtual machine + syntax" being spec'ed as a monolith and agree it should be, for the reasons you cited.

Yes, but it is a library that extends the sequential execution model of ES. That's a significant language-level change to the ES specification. We are already scrambling, to a certain degree, in the ES6 spec to make it align with the browser-reality eventing model.

As a rule of thumb, if a library does something that cannot be expressed in its base language, there is a good chance it is extending "the virtual machine" of the language, and it should at least be reviewed from that perspective. Iframe semantics are another example of a browser design choice that has a deep semantic impact upon the language.

There is material in Ecma-262, particularly as ES6 emerges, that is basically library features, and there have been casual conversations within TC39 about the desirability and practicality of having separate standards for some library components. Ecma-402 is an example of this. However, some care needs to be exercised here, because sometimes library-based features are actually cross-cutting language semantic extensions that are just masquerading as a library. I understand. Has the Future [1] proposal been reviewed with this needed care? If not, can this be added to the agenda for the next meeting? (so asks the non-TC39 guy :-p) It feels important.

No; see my first response above. It can be as important as the TC39 members choose to make it. TC39's agenda is largely driven by feature champions.

...

David

[1] dom.spec.whatwg.org/#futures

...

[2] harmony:observe
[3] strawman:data_parallelism

# Tab Atkins Jr. (12 years ago)

On Wed, Apr 17, 2013 at 3:57 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

As a rule of thumb, if a library does something that cannot be expressed in its base language, there is a good chance it is extending "the virtual machine" of the language, and it should at least be reviewed from that perspective. Iframe semantics are another example of a browser design choice that has a deep semantic impact upon the language.

Note that Futures are entirely expressible in today's JS semantics.

(Not to say that it shouldn't be reviewed by the language gurus here, just saying.)

# Anne van Kesteren (12 years ago)

On Thu, Apr 18, 2013 at 4:07 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

Note that Futures are entirely expressible in today's JS semantics.

(Not to say that it shouldn't be reviewed by the language gurus here, just saying.)

JavaScript does not have an event loop (as I mentioned elsewhere) so that is not true. HTML defines the event loop model and processing model for any asynchronous JavaScript execution. Lifting that up to JavaScript seems difficult.

-- annevankesteren.nl

# Sam Tobin-Hochstadt (12 years ago)

On Thu, Apr 18, 2013 at 3:40 AM, Anne van Kesteren <annevk at annevk.nl> wrote:

On Thu, Apr 18, 2013 at 4:07 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

Note that Futures are entirely expressible in today's JS semantics.

(Not to say that it shouldn't be reviewed by the language gurus here, just saying.)

HTML defines the event loop model and processing model for any asynchronous JavaScript execution.

This is true.

JavaScript does not have an event loop (as I mentioned elsewhere) so [Futures are entirely expressible] is not true.

The only part of futures that isn't expressible today in ES5 (I believe) is the requirement that callbacks be called in the next tick when .then() is called on a resolved promise.

Lifting that up to JavaScript seems difficult.

Fortunately, this isn't true, because the module loader system requires us to add the event loop to ES6.
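[The next-tick requirement Sam identifies can be sketched in a few lines. The following is a hypothetical `MiniFuture`, not the DOM Futures API: everything in it is plain ES5 except the deferral itself, which has to borrow a host-provided scheduler such as `setTimeout` -- exactly the piece the language alone could not express at the time.]

```javascript
// Minimal sketch of a future whose callbacks always fire asynchronously,
// even when .then() is called on an already-resolved future. The name
// MiniFuture and this reduced API are illustrative only.
function MiniFuture(resolver) {
  var self = this;
  self._state = "pending";
  self._value = undefined;
  self._callbacks = [];
  resolver(function accept(value) {
    if (self._state !== "pending") return;
    self._state = "accepted";
    self._value = value;
    self._callbacks.forEach(function (cb) { self._schedule(cb); });
    self._callbacks = [];
  });
}

MiniFuture.prototype._schedule = function (cb) {
  var value = this._value;
  // The crucial line: pure ES5 has no way to say "later"; we lean on the
  // host's setTimeout to get the required next-tick delivery.
  setTimeout(function () { cb(value); }, 0);
};

MiniFuture.prototype.then = function (onAccept) {
  if (this._state === "accepted") {
    this._schedule(onAccept); // still async, even though already resolved
  } else {
    this._callbacks.push(onAccept);
  }
  return this;
};
```

[Calling `.then` on an already-accepted MiniFuture still runs the callback after the current turn completes, which is the behavior Sam notes ES5 cannot express without host help.]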

# Kevin Smith (12 years ago)

Note that Futures are entirely expressible in today's JS semantics.

Even setting aside the event loop, this is arguable. Futures as implemented in libraries today are expressible, sure. That's a tautology. But there are cross-cutting issues at play, as Allen explained.

In the case of Futures, there's that strange little method named done. The done method has always been conceived as a stop-gap solution for making "unhandled" rejections visible to the programmer. The usability of done is, well, not so great.

It's quite clear that the garbage collector provides us with an upper bound on how long we must wait to know that a rejected future is truly unhandled. When the future is collected, then obviously no additional error handlers can be assigned to it.

WeakRefs would give us just the information we need, but no consensus has been reached on them yet. Does it make sense to move forward with done when WeakRefs are sitting on the horizon? I don't have the answer, but these are the kind of cross-cutting issues that need to be carefully considered. Preferably on es-discuss. : )
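[The GC-based diagnostic Kevin sketches can be illustrated with today's `FinalizationRegistry`, which was standardized years after this thread; at the time it was the still-unsettled WeakRefs proposal. The helper names `trackRejection` and `markHandled` are illustrative, not part of any API.]

```javascript
// When a rejected promise is collected, no further handler can ever be
// attached to it, so the GC gives an upper bound on "truly unhandled".
const pendingRejections = new Map(); // token -> error, cleared if handled

const registry = new FinalizationRegistry((token) => {
  // The promise was garbage-collected while its rejection was unhandled.
  if (pendingRejections.has(token)) {
    console.error("unhandled rejection:", pendingRejections.get(token));
    pendingRejections.delete(token);
  }
});

function trackRejection(promise, error) {
  const token = Symbol("rejection");
  pendingRejections.set(token, error);
  registry.register(promise, token); // weakly observe the promise
  return token;
}

function markHandled(token) {
  pendingRejections.delete(token); // a handler was attached in time
}
```

[Note Mark's caveat in the next message still applies to this sketch: collection, and therefore the finalization callback, is never guaranteed to run.]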

# Mark S. Miller (12 years ago)

On Thu, Apr 18, 2013 at 7:45 AM, Kevin Smith <zenparsing at gmail.com> wrote:

Note that Futures are entirely expressible in today's JS semantics.

Even setting aside the event loop, this is arguable. Futures as implemented in libraries today are expressible, sure. That's a tautology. But there are cross-cutting issues at play, as Allen explained.

In the case of Futures, there's that strange little method named done. The done method has always been conceived as a stop-gap solution for making "unhandled" rejections visible to the programmer. The usability of done is, well, not so great.

It's quite clear that the garbage collector provides us with an upper bound on how long we must wait to know that a rejected future is truly unhandled. When the future is collected, then obviously no additional error handlers can be assigned to it.

WeakRefs would give us just the information we need, but no consensus has been reached on them yet. Does it make sense to move forward with done when WeakRefs are sitting on the horizon? I don't have the answer, but these are the kind of cross-cutting issues that need to be carefully considered. Preferably on es-discuss. : )

I think we have informal consensus on the general functionality that WeakRefs will provide and the security constraints they must not violate. We still of course need to argue through many details. But the non-controversial parts of WeakRefs are clearly adequate for the scenario you have in mind -- except for one thing ;).

GC is never required to be complete. We must allow the collector to not collect some unreachable objects. This means that, without .done, there's no guarantee that an unseen-rejection bug will ever get diagnosed. Therefore we still need .done.
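[A sketch of the stop-gap `.done` Mark defends, written against standard promises rather than the 2013 Futures API; the free-function shape and names here are illustrative. The point is that it terminates the chain and re-throws on a clean stack, so the error surfaces deterministically rather than depending on a GC that may never collect the promise.]

```javascript
// done() behaves like then() but returns nothing, so no downstream code
// can swallow a failure; any unhandled rejection escapes the promise
// machinery by being thrown on a fresh turn, where it reaches the host's
// global error handling (window.onerror, process uncaughtException).
function done(promise, onFulfilled, onRejected) {
  promise
    .then(onFulfilled, onRejected)
    .catch(function (err) {
      setTimeout(function () { throw err; }, 0); // rethrow outside the chain
    });
  // Deliberately returns undefined: there is no result future to chain on.
}
```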

# Kevin Smith (12 years ago)

GC is never required to be complete. We must allow the collector to not collect some unreachable objects. This means that, without .done, there's no guarantee that an unseen-rejection bug will ever get diagnosed. Therefore we still need .done.

A perfect example of why the discussion should take place on es-discuss. : )

# Domenic Denicola (12 years ago)

From: Mark S. Miller [erights at google.com]

GC is never required to be complete. We must allow the collector to not collect some unreachable objects. This means that, without .done, there's no guarantee that an unseen-rejection bug will ever get diagnosed. Therefore we still need .done.

I still think the best solution to this is for the developer tools to curate a list of unhandled rejections. Just like window.onerror and the developer console work together to display unhandled exceptions, unhandled rejections could be treated very similarly. They would appear in the console while unhandled, then disappear when/if handled. (And there could be programmatic hooks too, just like window.onerror, e.g. window.onunhandledrejection/window.onrejectionhandled.)

In case others weren't aware, the Promises/A+ group has been compiling ideas for the unhandled rejection problem at

promises-aplus/unhandled-rejections-spec/issues?state=open
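[Domenic's proposed hooks did eventually become real APIs: HTML fires `unhandledrejection` and `rejectionhandled` events on `window`, and Node.js exposes matching `process` events. A minimal curated list along his lines, shown here in the Node flavor, might look like:]

```javascript
// Rejections appear in the set while unhandled and disappear when/if a
// handler is attached later -- the console-curation model Domenic describes.
const unhandled = new Set();

process.on("unhandledRejection", (reason, promise) => {
  unhandled.add(promise);
  console.warn("potentially unhandled rejection:", reason);
});

process.on("rejectionHandled", (promise) => {
  unhandled.delete(promise); // handled after all; retract the warning
  console.info("a rejection was handled late");
});
```

[In a browser the equivalent listeners go on `window`, where the `event` object carries `promise` and `reason` properties.]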

# Kevin Smith (12 years ago)

I still think the best solution to this is for the developer tools to curate a list of unhandled rejections. Just like window.onerror and the developer console work together to display unhandled exceptions, unhandled rejections could be treated very similarly. They would appear in the console while unhandled, then disappear when/if handled. (And there could be programmatic hooks too, just like window.onerror, e.g. window.onunhandledrejection/window.onrejectionhandled.)

That could certainly work for browsers, but what about node and its console?

Also, can someone point me to a real-world example of delayed rejection handling?

# Kevin Gadd (12 years ago)

I'm not sure this is a perfect match, but:

The futures library and task scheduler I've been using in my applications for around five years does unhandled-error detection and delayed error handling.

For the former, the model is that the future has an internal callback that is fired when a consumer checks its error status. This is done via a property accessor on the rough equivalent of Future.Error, along with any other methods that implicitly check errors (for example, Future.Result will throw instead of returning a value if the future contains an error). Paired with this is logic in the task scheduler such that any future that goes through the task scheduler (to be waited on, etc.) is tracked to ensure that the consumer task(s) have handled any errors that were returned to them in their next step. I don't currently abort if you fail to check a future for errors unless the future actually contained an error; that is, this is only used to detect ignored errors, not to detect code-quality issues (though the latter is feasible with this method).

The downside is that this doesn't give you complete coverage for unhandled errors - futures used outside the task scheduling mechanism don't get tracked. Given ES promises' potential level of integration, though, you could probably integrate this kind of diagnostic more fully if you can find the right place to put it. I do agree that relying on GC to detect unseen rejections is unacceptable; I likewise found that having the failure be delayed for any amount of time was a problem. This is why I moved it into the task scheduler, so that it can synchronously warn you as soon as you fail to handle it. Let me know if you would like to see more detail on this and I can link to my implementation and explain more.
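[The observed-error mechanism described above could be sketched roughly as follows. All names (`TrackedFuture`, `warnIfIgnored`) are hypothetical, not the actual library's API: the future records whether any consumer ever looked at its error, and the scheduler flags failed futures that were never inspected.]

```javascript
// A future that tracks whether its error status was ever observed.
class TrackedFuture {
  constructor() {
    this.completed = false;
    this.errorObserved = false;
    this._error = null;
    this._value = undefined;
  }
  complete(value) { this.completed = true; this._value = value; }
  fail(err) { this.completed = true; this._error = err; }
  get error() {
    this.errorObserved = true;         // reading the status counts as handling
    return this._error;
  }
  get result() {
    if (this.error) throw this._error; // checking the result observes it too
    return this._value;
  }
}

// Scheduler-side check, run right after a consumer task's step: only
// futures that actually failed and were never inspected are flagged,
// matching the "detect ignored errors, not code quality" policy above.
function warnIfIgnored(future) {
  if (future.completed && future._error && !future.errorObserved) {
    console.warn("task ignored an error:", future._error.message);
    return true;
  }
  return false;
}
```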

By delayed rejection handling, do you mean the ability to delay the response to an error indefinitely? I'm not sure what else it would mean. If so, I've built a few apps atop futures and a task scheduler where the error-handling policy is that any errors that occur are stored immediately, then propagated up the chain of task dependencies (which means they aren't handled until the whole chain of tasks has woken up to see them). If they walk up the entire chain of dependencies without being handled, the task scheduler fires a last-chance 'unhandled background error' callback (which could occur seconds after the error). I usually put an error dialog in that callback, or, in cases where the error is recoverable, I log it to a file. If this is a close fit to what you were thinking, I can link you to the source of a couple of the applications.