Futures (was: Request for JSON-LD API review)

# Mark S. Miller (12 years ago)

[+es-discuss]

# Anne van Kesteren (12 years ago)

On Wed, Apr 17, 2013 at 8:29 AM, Mark S. Miller <erights at google.com> wrote:

The main argument I've heard for proceeding with w3c/DOMFutures rather than tc39/promises is that the DOM can't wait for tc39 to get around to standardizing promises in ES7. But we should have our eyes open to the consequences. As Crockford says (paraphrasing Knuth) "Premature standardization is the root of all evil." The likely result of DOMFuture proceeding in this way is that it will be wrong, ES7 will be stuck with it and mostly unable to fix it, and we will all be stuck with the consequences for a very very long time.

As with Object.observe, if the need for promises is that urgent, it needs to be on an accelerated track with the JavaScript context -- as it already de facto is at promises/A+. It should not be needlessly tied to the browser or to w3c.

I don't find the whole who owns what discussions very interesting to be honest. If it was up to me JavaScript would just be part of the W3C and we would not have to deal with that layer of distraction.

In any event, you can take the specification and improve on it elsewhere if you so desire. It is in the public domain for a reason. You can also provide technical feedback as to what exactly is evil. Saying "stop doing this" and implying you're somehow the superior forum to the other party is not helpful and has certainly not helped in the past.

-- annevankesteren.nl

# Mark S. Miller (12 years ago)

On Wed, Apr 17, 2013 at 8:46 AM, Anne van Kesteren <annevk at annevk.nl> wrote:

On Wed, Apr 17, 2013 at 8:29 AM, Mark S. Miller <erights at google.com> wrote:

The main argument I've heard for proceeding with w3c/DOMFutures rather than tc39/promises is that the DOM can't wait for tc39 to get around to standardizing promises in ES7. But we should have our eyes open to the consequences. As Crockford says (paraphrasing Knuth) "Premature standardization is the root of all evil." The likely result of DOMFuture proceeding in this way is that it will be wrong, ES7 will be stuck with it and mostly unable to fix it, and we will all be stuck with the consequences for a very very long time.

As with Object.observe, if the need for promises is that urgent, it needs to be on an accelerated track with the JavaScript context -- as it already de facto is at promises/A+. It should not be needlessly tied to the browser or to w3c.

I don't find the whole who owns what discussions very interesting to be honest. If it was up to me JavaScript would just be part of the W3C and we would not have to deal with that layer of distraction.

In any event, you can take the specification and improve on it elsewhere if you so desire. It is in the public domain for a reason. You can also provide technical feedback as to what exactly is evil. Saying "stop doing this" and implying you're somehow the superior forum to the other party is not helpful and has certainly not helped in the past.

Hi Anne, promises were already in progress for ES7. It was the w3c that chose to fork the effort rather than participate and provide feedback. Given that, let's paraphrase your advice simply by swapping the roles, in order to keep things historically accurate:

(paraphrasing) I don't find the whole who owns what discussions very interesting to be honest. If it was up to me the W3C would stop biting off on more than they can chew, and would particularly avoid starting turf wars with other organizations, and we would not have to deal with that layer of distraction.

In any event, you can take the [promise] specification and improve on it elsewhere if you so desire. It is in the public domain for a reason. You can also provide technical feedback as to what exactly is evil. Saying "stop doing this" and implying you're somehow the superior forum to the other party is not helpful and has certainly not helped in the past.

I'll note that I didn't feel the need to change one word of your last sentence.

# Anne van Kesteren (12 years ago)

On Wed, Apr 17, 2013 at 4:56 PM, Mark S. Miller <erights at google.com> wrote:

Hi Anne, promises were already in progress for ES7. It was the w3c that chose to fork the effort rather than participate and provide feedback.

Okay, let's assume promises are not in the DOM specification. How soon do you think we can get a specification we can use for the dozens of APIs in development today? I put them in the DOM specification because waiting for ES7 will get us even more "W3C is terrible at APIs!!1!". It's also still not clear what you think is wrong with the current text. Naming?

(Technically it's not part of any W3C draft by the way, but I guess the sentiment is the same either way.)

-- annevankesteren.nl

# Anne van Kesteren (12 years ago)

On Wed, Apr 17, 2013 at 5:06 PM, Anne van Kesteren <annevk at annevk.nl> wrote:

[...]

My previous experience has been trying to get bytes in JavaScript (mostly for XMLHttpRequest): asked from TC39 in 2006. Eventually delivered by Khronos for WebGL. Mourned over by TC39 (and others, myself included) in 2012. If the party "responsible" for delivering the solution does not deliver, it will happen elsewhere. And to some extent that seems good, as it keeps everyone alert and the platform is not held hostage to the progress (and process) of a single entity.

As I mentioned elsewhere, the platform needs more primitives. IO streams, event streams (or signals), futures, ... I have no strongly held beliefs as to where these should be hosted; I just know that we need them, soon. And getting them soon in JavaScript seems hard, as you'd have to somehow lift the event loop model defined in HTML up to the language level.

-- annevankesteren.nl

# Kevin Smith (12 years ago)

You both make good points: Mark is correct when he suggests that a DOMFuture spec will effectively undercut TC39's role in designing a future/promise API. It will also set a precedent (one that is perhaps already in motion) where TC39 is relegated to syntax-only enhancements and playing catch-up with platforms continually performing an end-run.

And Anne is certainly correct to point out that TC39 has not, as of yet, been able to provide the base-platform APIs that developer-facing platforms so badly need.

On the other hand, TC39 has done an amazing job with the ES6 language. The usability improvements are striking and the module system will be exceptional.

It appears to me that what we are missing is a group sitting somewhere between TC39 and W3C, perhaps incorporating members of both. This group would be responsible for designing the ECMAScript base platform API upon which developer-facing implementations can rely. It would iterate more quickly than TC39, but unlike W3C its scope would include all ECMAScript-hosting platforms. It would also share TC39's charge of maintaining the conceptual integrity of the language.

I nominate myself ; )

Ultimately, our goals are the same: a well-designed, conceptually consistent language and development platform. We just need the right structure to make that happen.

Regarding futures specifically, for now I think any standardization discussions should be moved to es-discuss (or at least dual-homed there), as it is currently the only accepted public forum for platform-agnostic ES standards work.

# Tab Atkins Jr. (12 years ago)

On Wed, Apr 17, 2013 at 11:27 AM, Kevin Smith <zenparsing at gmail.com> wrote:

You both make good points: Mark is correct when he suggests that a DOMFuture spec will effectively undercut TC39's role in designing a future/promise API. It will also set a precedent (one that is perhaps already in motion) where TC39 is relegated to syntax-only enhancements and playing catch-up with platforms continually performing an end-run.

And Anne is certainly correct to point out that TC39 has not, as of yet, been able to provide the base-platform APIs that developer-facing platforms so badly need.

On the other hand, TC39 has done an amazing job with the ES6 language. The usability improvements are striking and the module system will be exceptional.

It appears to me that what we are missing is a group sitting somewhere between TC39 and W3C, perhaps incorporating members of both. This group would be responsible for designing the ECMAScript base platform API upon which developer-facing implementations can rely. It would iterate more quickly than TC39, but unlike W3C its scope would include all ECMAScript-hosting platforms. It would also share TC39's charge of maintaining the conceptual integrity of the language.

I nominate myself ; )

Ultimately, our goals are the same: a well-designed, conceptually consistent language and development platform. We just need the right structure to make that happen.

Regarding futures specifically, for now I think any standardization discussions should be moved to es-discuss (or at least dual-homed there), as it is currently the only accepted public forum for platform-agnostic ES standards work.

This group is public-script-coord, which we're already having the discussion on, so... success!

# Ron Buckton (12 years ago)

As someone who has been interested in Promises/Futures in JavaScript for a number of years, I'd like to throw in my $0.02 regarding a proposed API for Promises/Futures for thoughts:

gist.github.com/rbuckton/5406451

My apologies in advance as the API definitions are written using TypeScript and not Web IDL.

Ron

# Jorge (12 years ago)

On 17/04/2013, at 17:46, Anne van Kesteren wrote:

If it was up to me JavaScript would just be part of the W3C and we would not have to deal with that layer of distraction.

On 17/04/2013, at 19:48, Tab Atkins Jr. wrote:

I strongly support any efforts to move JS standardization into the umbrella of the W3C.

The very thought of it sends chills down my spine.

The w3c has demonstrated blindness and incompetence. Remember how and why the whatwg came to be? Stop pretending. You guys ought to be deeply embarrassed because HTML5 is not your child.

Who wants a JS infested with inconvenient APIs, w3c-style?

# Anne van Kesteren (12 years ago)

On Thu, Apr 18, 2013 at 2:00 PM, Jorge <jorge at jorgechamorro.com> wrote:

You guys ought to be deeply embarrassed because HTML5 is not your child.

I don't even

-- annevankesteren.nl

# Alex Russell (12 years ago)

Comments inline.

On Wed, Apr 17, 2013 at 7:35 PM, Ron Buckton <Ron.Buckton at microsoft.com>wrote:

As someone who has been interested in Promises/Futures in JavaScript for a number of years, I'd like to throw in my $0.02 regarding a proposed API for Promises/Futures for thoughts:

gist.github.com/rbuckton/5406451

My apologies in advance as the API definitions are written using TypeScript and not Web IDL.

There's a lot of API in here. If you give the DOMFutures github repo a look, you'll see that we considered some of them: slightlyoff/DOMFuture

In particular, progress was explicitly written out of the contract of the base class so that subclasses that need it can mix it back in without burdening everyone else with perhaps nonsensical methods. See: slightlyoff/DOMFuture/blob/master/ProgressFuture.idl

As for the state variables, we've removed them in the most recent version to avoid the potential for "cheating" (as Luke Hoban described it). I'm also not entirely sure I understand why the capability to cancel is being vended to all consumers of a Future. We explicitly disallow that in the current design to prevent multiple users of a Promise/Future stepping on each other. The thought with the current design is that if you have an interface that needs to vend cancellation, you should add it in a subclass of Future.

Your constructor signature looks similar (but not identical) to the one we ended up with, and there appear to be many convenience static methods on Promise. How strongly do you feel about them?

From a design standpoint, the only thing that jumps out at me as being very strange is the "synchronous" option for .then() and .done(). What is it meant to do?

# Ron Buckton (12 years ago)

I’ll preface this with a disclaimer that I’m not directly involved with any of the standards discussions, TypeScript, or IE at Microsoft, but rather am expressing my interest as a producer and consumer of Promises/Futures while building cloud applications. I’ve published a few variations of Promise/Future libraries for JavaScript at blogs.msdn.com/b/rbuckton and a lot of my experience is biased towards previously having leveraged Task-based asynchrony in C#.

Despite my API proposal leaning towards the term Promise, I’ll use the term Future instead to align with the proposal for DOM when speaking about Promises or Futures in the general sense.

My version of the PromiseResolver provides resolve/reject methods and does not include an ‘accept’ method. My understanding of this is that FutureResolver#accept resolves the value explicitly, while FutureResolver#resolve hooks the Future#done method of the value if it is a future so that Future(A) for Future(B) is eventually resolved as B, not Future(B). I’m not sure I understand all the use cases for accept over resolve at this point, but have always preferred the resolve approach in all of my actual uses so far.
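For concreteness, here is a minimal sketch of that distinction using today's standard Promise purely as an illustration; the plain wrapper object is my own device, not part of either proposal. resolve adopts a future value, while accept-like behavior keeps it as-is.

var inner = Promise.resolve(42); // stands in for Future(B)

// "resolve" semantics: a future value is adopted, so the outer future
// settles with 42 rather than with a future-for-42.
var resolved = new Promise(function (resolve) { resolve(inner); });
resolved.then(function (v) { console.log(v); }); // 42

// "accept"-like semantics (no adoption) can be approximated by hiding the
// inner future inside a plain, non-thenable wrapper object.
var accepted = new Promise(function (resolve) { resolve({ future: inner }); });
accepted.then(function (v) { console.log(v.future === inner); }); // true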

Progress notifications are a bit of a mixed bag for me. My first implementations of Future didn’t have them, but I had a number of instances in the C# world where I needed a way to update the Future consumer for the benefit of UI notifications. The recent discussion on EventStreams has me wondering if progress notifications could serve a similar purpose for Futures, where a progress notification could be triggered for each instance of an event in the stream, where resolve is triggered for single use events (like DOMContentLoaded), or when the event producer is signaling that it has concluded processing.

With Progress notifications relegated to a subclass, would chained Futures also be ProgressFuture instances? The benefit of having progress as an optional member of the Future class is that a chained Future could also enlist in progress notifications, but that is less of a concern if the Future created by a .then from a ProgressFuture is itself a ProgressFuture.

Not having progress can be mitigated somewhat by passing in a progress notification callback to the function that creates the Future, but chained descendants would not be aware of the progress and would have a more complicated task to write code that properly accepts progress handlers, and then we’re getting back to the callback/errback continuation passing that Futures are partially designed to replace.

Cancellation was an attempt to support something akin to Cooperative Cancellation as it exists in .NET’s implementation, as well as a means to ‘unlisten’ or forget a Future if you no longer need its value. In the API proposal, by default cancel would only essentially remove a Future (and therefore its chained descendants) from receiving resolve/reject/progress signals from its antecedent. Cancellation also would allow the ability to prevent the resolve/reject/progress callbacks from executing in a later turn of the dispatcher to prevent the execution of now unneeded code. It can also be used to abort an XHR request or shut down a WebWorker.

The second callback in the Promise constructor would be a means to provide user-supplied cancellation logic, such as updating the UI in response to a cancelled pending operation. I debated whether it should be possible to also cancel the antecedent tasks from a chained descendant, and it is a very tentative part of the API.

In the .NET world, I would use a CancellationTokenSource and CancellationToken to provide cancellation, which serves several purposes. One is the ability to prevent the execution of the background Task before it starts (which is provided by adding Promise#cancel()). Second is the ability to perform some kind of user-defined cleanup logic in the event of cancellation (e.g. detach event handlers, abort an XHR, notify the UI, etc.). The third is the ability to track cancellation when in a background thread that might be running in a loop, however with the possible exception of Web Workers, this is unlikely to be required in traditional JavaScript programs that are single threaded and rely on a dispatcher/event-loop and don’t have the traditional concept of a Thread. CTS also allows the ability to aggregate multiple cancellation tokens when waiting on multiple parallel tasks, which is even less likely in JavaScript/ES.

Promise#cancel() in this respect can have an effect similar to EventStream#unlisten in that proposal.

That being said, a “CancellationToken” could be implemented by passing in another Future to the function that generates the Future you care about. Resolving the “cancellation” future could be used to abort an XHR, but not to cancel a task that is still waiting to be executed on the dispatcher/event-loop, as the .then() would likely execute in a different turn, unless it could be explicitly marked as synchronous.
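A rough sketch of that shape, using today's Promise and XMLHttpRequest; the fetchWithCancel name and its wiring are assumptions for illustration, not part of either proposal.

function fetchWithCancel(url, cancelFuture) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () { resolve(xhr.responseText); };
    xhr.onerror = function () { reject(new Error(xhr.statusText)); };
    xhr.open("GET", url, true);
    xhr.send();
    if (cancelFuture) {
      // When the caller settles the cancellation future, abort the request and
      // reject the result. Note Ron's caveat: this .then runs in a later turn.
      cancelFuture.then(function () {
        xhr.abort();
        reject(new Error("cancelled"));
      });
    }
  });
}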

The options argument provides additional optional named parameters for the then/done/catch/progress continuations that in effect make the “synchronous” flag in the DOMFutures something that the user can control. This is similar to the TaskContinuationOptions.ExecuteSynchronously enum value in .NET which can be used to optimize some continuations to execute synchronously when its antecedent is resolved or rejected to reduce the need to wait for another turn of the dispatcher/event-loop. This optimization is primarily defined for small function bodies to reduce overhead, and could be used to make cancellation-by-future more effective.

The reason options is expected to be an object/object literal is that this can be extended to add additional control over the resulting continuation. This could include the ability to prevent cancellation (in the event .cancel is supported with the antecedents argument), or the ability to only signal chained descendants if a future is rejected and not to forward resolve to those descendants. This also allows for future additions to the options in later versions without breaking consumers. In this vein, it could be useful to have an options argument for the Future constructor as well, although I haven’t yet had occasion to need one.

Finally, the additional API definitions are convenience APIs for certain scenarios. By default, I expect both Promise.resolve and PromiseResolver#resolve to only hook the resolve/reject of a Promise from the same library. Calling Promise as a function (or adding a Promise.of static method) might be the only Promise ‘interop’ to userland Future libraries, though I would almost prefer that no ‘interop’ between libraries exist for a DOM or ES version, but rather would require explicitly creating a new Future and using its resolver to interoperate with the userland promise.

The Promise.any, Promise.every, and Promise.some methods are very similar to what is in DOMFutures, except that the current version of the DOMFutures spec leaves a few things unspecified that could be problematic for end users. According to the spec for Future.every, order of the resolved values is arbitrary, based on the order that the provided Futures are resolved. As a consumer of Future.every, the Array of resolved values should be in the same order as the futures that were provided to the method, to be able to distinguish which value belongs to which future. This may or may not be in the polyfill, but it is not explicitly (or at least clearly) specified in the DOMFutures spec. The same can be said for the Array of errors in the Future.some API definition.
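To make the ordering concern concrete, here is how a consumer relies on positional results, shown with the now-standard Promise.all as an analogue of Future.every; loadUser and loadSettings are hypothetical helpers.

function loadUser() { return Promise.resolve({ name: "pat" }); }
function loadSettings() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve({ theme: "dark" }); }, 10);
  });
}

// loadSettings() settles later, but the results stay in input order, so the
// consumer can tell which value belongs to which future.
Promise.all([loadUser(), loadSettings()]).then(function (results) {
  console.log(results[0].name, results[1].theme); // "pat dark"
});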

I added AggregateError as a tentative Error object as a means to provide a single Error object to use as the value for the reject handler, and have considered wrapping all non-Error values passed to the reject method on the resolver into an Error object to set expectations for the consumer. That way, the argument to the reject callback is always recognizable as an Error, and it can be easier to test the argument to provide appropriate handling. For instance, without Error wrapping or AggregateError, I would have to resort to duck typing or Array.isArray to determine whether the errors provided are the result of a single error or multiple errors from a call to Future.some. This is, again, inspired by the .NET AggregateException, though I would likely send the single underlying Error if the AggregateError would only contain a single error.
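A minimal sketch of the wrapping Ron describes; this AggregateError is hand-rolled for illustration (not a standard constructor here), and the single-error pass-through follows his note.

function AggregateError(errors) {
  this.name = "AggregateError";
  this.message = errors.length + " error(s)";
  this.errors = errors;
}
AggregateError.prototype = Object.create(Error.prototype);

// Ensure reject handlers always receive something recognizable as an Error.
function toError(reason) {
  return reason instanceof Error ? reason : new Error(String(reason));
}

function combineErrors(reasons) {
  var errors = reasons.map(toError);
  // A single underlying error is passed through unwrapped.
  return errors.length === 1 ? errors[0] : new AggregateError(errors);
}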

The remaining APIs are designed to help support await-style asynchronous development as possibly afforded by generators or any future addition of something like “await” into the language. To that end, static methods like Promise.yield() and Promise.sleep() can help to let the dispatcher/event-loop do other work in the middle of a long-running async function, or to pause for a period of time before continuing such as with animation. Promise.delay() is similar to sleep, but resolves with a value.

Promise.run() is close to setImmediate, where the result is the future value of the callback. In this case, Promise#cancel() is then effectively a call to clearImmediate. In a similar fashion, Promise.start() is roughly equivalent to setTimeout with its Promise#cancel() then synonymous with clearTimeout.
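Rough sketches of those conveniences, built on today's Promise and timers; the names mirror the proposal but the bodies are assumptions, not taken from Ron's gist.

function sleep(ms) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

function delay(ms, value) {
  // Like sleep, but resolves with a value.
  return sleep(ms).then(function () { return value; });
}

function run(callback) {
  // Roughly setImmediate-like: run callback in a later turn and expose its
  // return value (or thrown error) as a future.
  return sleep(0).then(callback);
}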

I’m not strongly tied to having progress, cancel, or the synchronous option, but do find that they provide a level of flexibility. Subclassing Future to provide this could make sense, but again I am concerned about ensuring the subclass prototype is somehow reused for chained dependents so that you don’t lose your .progress or .cancel if you do a .then before you return. The .yield/.sleep/.delay convenience methods are much more useful with yield or await.

I can understand Luke’s concern around the state properties; the only one I might push back on is PromiseResolver#wasCanceled, if Promise#cancel were to be supported, to be able to test for cancellation if the future might be resolved in a later turn than the one in which it was created (such as in the onload event listener for an XHR). The properties on Promise itself are much less necessary and I’m not strongly tied to them.

One last thing not mentioned in my proposal, nor the DOMFutures spec, is dealing with Error#stack with respect to Futures or async methods. Since ES has no rethrow concept, the only way for a reject handler to pass the error to chained descendants is to throw the exception. This can then possibly negatively affect the content of Error#stack and can complicate debugging futures. I am still reading through the issues list for DOMFutures, so I apologize in advance if this is a topic that has already been covered.

I sincerely look forward to a standard implementation of Futures, and truly hope this can become part of ES. Some of the proposals for ES6 and later could likely benefit from Futures. The current proposal for module Loaders is already leaning towards both Node/CPS-like callback/errback arguments for Loader#load as well as something very Future-like in its argument to Loader#fetch (at least as far as I have been able to find online). It seems to me that both methods would be better served by Futures. Object.observe could be served by something like the EventStreams proposal as well.

# Tab Atkins Jr. (12 years ago)

On Fri, Apr 19, 2013 at 2:24 PM, Ron Buckton <rbuckton at chronicles.org> wrote:

My version of the PromiseResolver provides resolve/reject methods and does not include an ‘accept’ method. My understanding of this is that FutureResolver#accept resolves the value explicitly, while FutureResolver#resolve hooks the Future#done method of the value if it is a future so that Future(A) for Future(B) is eventually resolved as B, not Future(B). I’m not sure I understand all the use cases for accept over resolve at this point, but have always preferred the resolve approach in all of my actual uses so far.

"accept" is just syntax sugar. If you only provide resolve/reject, you can get the same behavior as "accept" by just wrapping the value in a dummy Future before returning it.

The fact that chained futures only have resolve/reject semantics makes this pretty clear.

Progress notifications are a bit of a mixed bag for me. My first implementations of Future didn’t have them, but I had a number of instances in the C# world where I needed a way to update the Future consumer for the benefit of UI notifications. The recent discussion on EventStreams has me wondering if progress notifications could serve a similar purpose for Futures, where a progress notification could be triggered for each instance of an event in the stream, where resolve is triggered for single use events (like DOMContentLoaded), or when the event producer is signaling that it has concluded processing.

I've also given thought to this, but, even though they're structurally similar at first glance, the use-cases for ProgressFuture and EventStream are actually quite different. A ProgressFuture is still fundamentally focused on the fulfilled/rejected state, while an EventStream is fundamentally focused on updates.

With Progress notifications relegated to a subclass, would chained Futures also be ProgressFuture instances? The benefit of having progress as an optional member of the Future class is that a chained Future could also enlist in progress notifications, but that is less of a concern if the Future created by a .then from a ProgressFuture is itself a ProgressFuture.

I'm currently of the opinion that progress updates should probably automatically bubble through chains. (I'm currently doing something similar with EventStreams, making completion bubble through.)

Cancellation was an attempt to support something akin to Cooperative Cancellation as it exists in .NET’s implementation, as well as a means to ‘unlisten’ or forget a Future if you no longer need its value. In the API proposal, by default cancel would only essentially remove a Future (and therefore its chained descendants) from receiving resolve/reject/progress signals from its antecedent. Cancellation also would allow the ability to prevent the resolve/reject/progress callbacks from executing in a later turn of the dispatcher to prevent the execution of now unneeded code. It can also be used to abort an XHR request or shut down a WebWorker.

The problem with cancellation, as stated, is that it allows one consumer to affect the state that another consumer sees. Right now, that's not a possibility, which lets you reason about futures much more easily. (The fact that you can do this in jQuery's promises, for example, makes them extremely hard to work with generically.)

As Alex says, creating a Future subclass that's single-listener would avoid this issue, so cancelling would probably work.

The second callback in the Promise constructor would be a means to provide user-supplied cancellation logic, such as updating the UI in response to a cancelled pending operation. I debated whether it should be possible to also cancel the antecedent tasks from a chained descendant, and it is a very tentative part of the API.

Most consumers of Futures won't be using the constructor - they'll just be handed an already-constructed future for them to listen to. So, using the constructor as the channel to pass in cancellation info won't really help. :/

In the .NET world, I would use a CancellationTokenSource and CancellationToken to provide cancellation, which serves several purposes. One is the ability to prevent the execution of the background Task before it starts (which is provided by adding Promise#cancel()). Second is the ability to perform some kind of user-defined cleanup logic in the event of cancellation (e.g. detach event handlers, abort an XHR, notify the UI, etc.). The third is the ability to track cancellation when in a background thread that might be running in a loop, however with the possible exception of Web Workers, this is unlikely to be required in traditional JavaScript programs that are single threaded and rely on a dispatcher/event-loop and don’t have the traditional concept of a Thread. CTS also allows the ability to aggregate multiple cancellation tokens when waiting on multiple parallel tasks, which is even less likely in JavaScript/ES.

Promise#cancel() in this respect can have an effect similar to EventStream#unlisten in that proposal.

I've wiped out that function for now, because I had the semantics of listen() wrong. I need to figure out a better way to unlisten.

That being said, a “CancellationToken” could be implemented by passing in another Future to the function that generates the Future you care about. Resolving the “cancellation” future could be used to abort an XHR, but not to cancel a task that is still waiting to be executed on the dispatcher/event-loop, as the .then() would likely execute in a different turn, unless it could be explicitly marked as synchronous.

The options argument provides additional optional named parameters for the then/done/catch/progress continuations that in effect make the “synchronous” flag in the DOMFutures something that the user can control. This is similar to the TaskContinuationOptions.ExecuteSynchronously enum value in .NET which can be used to optimize some continuations to execute synchronously when its antecedent is resolved or rejected to reduce the need to wait for another turn of the dispatcher/event-loop. This optimization is primarily defined for small function bodies to reduce overhead, and could be used to make cancellation-by-future more effective.

The reason options is expected to be an object/object literal is that this can be extended to add additional control over the resulting continuation. This could include the ability to prevent cancellation (in the event .cancel is supported with the antecedents argument), or the ability to only signal chained descendants if a future is rejected and not to forward resolve to those descendants. This also allows for future additions to the options in later versions without breaking consumers. In this vein, it could be useful to have an options argument for the Future constructor as well, although I haven’t yet had occasion to need one.

Interesting ideas!

Finally, the additional API definitions are convenience APIs for certain scenarios. By default, I expect both Promise.resolve and PromiseResolver#resolve to only hook the resolve/reject of a Promise from the same library. Calling Promise as a function (or adding a Promise.of static method) might be the only Promise ‘interop’ to userland Future libraries, though I would almost prefer that no ‘interop’ between libraries exist for a DOM or ES version, but rather would require explicitly creating a new Future and using its resolver to interoperate with the userland promise.

Correct, and I expect the same. (That said, we can probably at least adopt Promises/A+ adoption semantics, where thenables with behavior that is anywhere near sane can be automatically converted into Futures.)
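A minimal sketch of that assimilation, assuming only that a "thenable" is anything with a callable then; the full Promises/A+ procedure handles more edge cases (multiple calls, throwing accessors), so treat this as illustrative only.

function assimilate(value) {
  if (value && typeof value.then === "function") {
    return new Promise(function (resolve, reject) {
      value.then(resolve, reject);
    });
  }
  return Promise.resolve(value);
}

// Usage: a userland thenable becomes a native future.
var thenable = { then: function (onFulfill) { onFulfill(42); } };
assimilate(thenable).then(function (v) { console.log(v); }); // 42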

The Promise.any, Promise.every, and Promise.some methods are very similar to what is in DOMFutures, except that the current version of the DOMFutures spec leaves a few things unspecified that could be problematic for end users. According to the spec for Future.every, order of the resolved values is arbitrary, based on the order that the provided Futures are resolved. As a consumer of Future.every, the Array of resolved values should be in the same order as the futures that were provided to the method, to be able to distinguish which value belongs to which future. This may or may not be in the polyfill, but it is not explicitly (or at least clearly) specified in the DOMFutures spec. The same can be said for the Array of errors in the Future.some API definition.

Yes, the order of the result array does need to be in the same order as the input futures. Good catch. I'll file this in a new top-level thread.

I added AggregateError as a tentative Error object as a means to provide a single Error object to use as the value for the reject handler, and have considered wrapping all non-Error values passed to the reject method on the resolver into an Error object to set expectations for the consumer. That way, the argument to the reject callback is always recognizable as an Error, and it can be easier to test the argument to provide appropriate handling. For instance, without Error wrapping or AggregateError, I would have to resort to duck typing or Array.isArray to determine whether the errors provided are the result of a single error or multiple errors from a call to Future.some. This is, again, inspired by the .NET AggregateException, though I would likely send the single underlying Error if the AggregateError would only contain a single error.

No, Future.some always passes an array into the reject handler. No need to duck-type, unless you're passing the same reject handler to multiple futures. If you are, Array.isArray() is reliable.

The remaining APIs are designed to help support await-style asynchronous development as possibly afforded by generators or any future addition of something like “await” into the language. To that end, static methods like Promise.yield() and Promise.sleep() can help to let the dispatcher/event-loop do other work in the middle of a long-running async function, or to pause for a period of time before continuing such as with animation. Promise.delay() is similar to sleep, but resolves with a value.

Promise.run() is close to setImmediate, where the result is the future value of the callback. In this case, Promise#cancel() is then effectively a call to clearImmediate. In a similar fashion, Promise.start() is roughly equivalent to setTimeout with its Promise#cancel() then synonymous with clearTimeout.

I expect these kind of conveniences to show up eventually, but likely in separate specs. For example, Future.sleep() or Future.delay() would be defined alongside setTimeout().

I’m not strongly tied to having progress, cancel, or the synchronous option, but do find that they provide a level of flexibility. Subclassing Future to provide this could make sense, but again I am concerned about ensuring the subclass prototype is somehow reused for chained dependents so that you don’t lose your .progress or .cancel if you do a .then before you return. The .yield/.sleep/.delay convenience methods are much more useful with yield or await.

I can understand Luke’s concern around the state properties; the only one I might push back on is PromiseResolver#wasCanceled, if Promise#cancel were to be supported, to be able to test for cancellation if the future might be resolved in a later turn than the one in which it was created (such as in the onload event listener for an XHR). The properties on Promise itself are much less necessary and I’m not strongly tied to them.

One last thing not mentioned in my proposal, nor the DOMFutures spec, is dealing with Error#stack with respect to Futures or async methods. Since ES has no rethrow concept, the only way for a reject handler to pass the error to chained descendants is to throw the exception. This can then possibly negatively affect the content of Error#stack and can complicate debugging futures. I am still reading through the issues list for DOMFutures, so I apologize in advance if this is a topic that has already been covered.

Yes, Q has some basic support for reconstructing a stack from errors. This should be explored more fully, because otherwise it's very hard to use errors for debugging.

# Kevin Gadd (12 years ago)

My solution for cancellation has been to allow cancellation notifications to be bidirectional - that is, when you subscribe to completion notifications on a Future, you can also subscribe to cancellation notifications. Then it's possible to cancel a given future without breaking any other listeners (as long as they subscribed to cancellation notifications if they care about cancellation). Has that been considered? I can see how it might be too finicky for the average developer; losing out on cancellation really sucks though.
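A sketch of the shape Kevin describes, with names and details that are my own assumptions: both the producer and interested consumers can register cancellation callbacks, so cancel() notifies everyone instead of silently dropping the work.

function CancellableFuture(start) {
  var cancelListeners = [];
  this._cancelListeners = cancelListeners;
  this._cancelled = false;
  this._promise = new Promise(function (resolve, reject) {
    // The producer gets resolve/reject plus a hook to register cleanup work.
    start(resolve, reject, function onCancel(cleanup) {
      cancelListeners.push(cleanup);
    });
  });
}

CancellableFuture.prototype.register = function (onComplete, onError, onCancelled) {
  if (onCancelled) this._cancelListeners.push(onCancelled);
  return this._promise.then(onComplete, onError);
};

CancellableFuture.prototype.cancel = function () {
  if (this._cancelled) return; // cancelling twice is a no-op
  this._cancelled = true;
  this._cancelListeners.forEach(function (cb) { cb(); });
};

// Usage sketch: the producer clears its timer on cancel; a consumer is notified.
var f = new CancellableFuture(function (resolve, reject, onCancel) {
  var t = setTimeout(function () { resolve("done"); }, 1000);
  onCancel(function () { clearTimeout(t); });
});
f.register(console.log, console.error, function () { console.log("cancelled"); });
f.cancel(); // logs "cancelled"; "done" never fires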

In particular it feels more important to have explicit cancellation built into the object representing work if you can in JS, since there's no way to lean on the garbage collector to cancel work - in environments like Python you can make cancellation implicit by doing it when the Future representing the work is collected, but in JS that's impossible, so having an explicit way to dispose of a future is valuable, even if in many cases the cancellation doesn't do anything. It's also particularly good in terms of encapsulation - if there's a general cancellation mechanism that is well-factored, you can just universally make a habit of cancelling unneeded futures, and any backend implementations that support cancellation will automatically get told to cancel and save cycles/bandwidth. It means that you don't have to go add cancellation in 'after the fact' when the source of a Future changes from a local buffer to a network operation, or remove cancellation when you replace a network operation with a cache.

Any kind of task scheduler like dherman's task.js can easily leverage this to automatically cancel any task represented by a cancelled Future, and in particular, task schedulers can propagate cancellation, by cancelling any of the Futures a task is waiting on when the task is cancelled. This has a very desirable property of allowing you to cancel a huge, amorphous blob of pending work when it becomes unnecessary by simply cancelling the root - for example in one application I worked on, we kicked off a task to represent each avatar in a 3D scene that was responsible for loading the avatar's textures, meshes, etc. If the user left the scene before the avatar was fully loaded, all we had to do was cancel the task and any pending texture loads or network requests automatically stopped. Getting that right by hand would have been much more difficult, and we wouldn't have necessarily known to build cancellation explicitly into that API when we started.

# Tab Atkins Jr. (12 years ago)

On Fri, Apr 19, 2013 at 3:35 PM, Kevin Gadd <kevin.gadd at gmail.com> wrote:

My solution for cancellation has been to allow cancellation notifications to be bidirectional - that is, when you subscribe to completion notifications on a Future, you can also subscribe to cancellation notifications. Then it's possible to cancel a given future without breaking any other listeners (as long as they subscribed to cancellation notifications if they care about cancellation). Has that been considered? I can see how it might be too finicky for the average developer; losing out on cancellation really sucks though.

In particular it feels more important to have explicit cancellation built into the object representing work if you can in JS, since there's no way to lean on the garbage collector to cancel work - in environments like Python you can make cancellation implicit by doing it when the Future representing the work is collected, but in JS that's impossible, so having an explicit way to dispose of a future is valuable, even if in many cases the cancellation doesn't do anything. It's also particularly good in terms of encapsulation - if there's a general cancellation mechanism that is well-factored, you can just universally make a habit of cancelling unneeded futures, and any backend implementations that support cancellation will automatically get told to cancel and save cycles/bandwidth. It means that you don't have to go add cancellation in 'after the fact' when the source of a Future changes from a local buffer to a network operation, or remove cancellation when you replace a network operation with a cache.

Any kind of task scheduler like dherman's task.js can easily leverage this to automatically cancel any task represented by a cancelled Future, and in particular, task schedulers can propagate cancellation, by cancelling any of the Futures a task is waiting on when the task is cancelled. This has a very desirable property of allowing you to cancel a huge, amorphous blob of pending work when it becomes unnecessary by simply cancelling the root - for example in one application I worked on, we kicked off a task to represent each avatar in a 3D scene that was responsible for loading the avatar's textures, meshes, etc. If the user left the scene before the avatar was fully loaded, all we had to do was cancel the task and any pending texture loads or network requests automatically stopped. Getting that right by hand would have been much more difficult, and we wouldn't have necessarily known to build cancellation explicitly into that API when we started.

I'm curious about what sort of API you use for this. Right now, Futures are pretty easy to use, because there are only two useful signals that a callback has to give - successful completion or error - and it can do this by either returning or throwing. It seems like we've run out of basic syntax to use for this kind of message-passing, though, and going any further would require a dedicated object.

Maybe this can just be done by a second argument that is given to the callbacks, with a messaging object similar to the resolver object sent to the resolver callback?

We could use this object to hold accept/reject/resolve functions too, in case it's convenient to be more explicit about signaling these. Then, though, we'd have to be careful to separate the semantics of "functions that affect the output future" and "functions that talk to the input future".

# Kevin Gadd (12 years ago)

I'm not sure there's a perfect solution, yeah. Cancellation is definitely not something you want every listener to be responsible for in a multiple-listener scenario - most scenarios I've dealt with are ones where a single task is responsible for the lifetime of a future - deciding whether to cancel it, etc - usually the task that started it, but other tasks may be monitoring its progress. For example, a simple 'memoization' primitive might subscribe to a future in order to store its result when it is completed, in order to return a cached result the next time. The memoization primitive would never have a reason to cancel the future - that would be up to the task that actually requested the work. So it's tricky.

.NET's standard library uses the 'cancellation token' primitive that Ron described, and I feel that's a pretty low-risk way to encapsulate cancellation, but it loses the benefits of having cancellation baked into the future itself - when I cancel a task via a CancellationToken, for any subscribers to know about cancellation, I'll have to complete the Future (with some sort of special TaskCancelledError instead of a result?) or drop it on the floor and never complete it. So it creates a need for side-channel communication in all cancellation scenarios, and it requires all consumers to know whether or not a given Future can be cancelled. Maybe this is unavoidable.

My particular API approach was simple, albeit not ideal: Since .NET has a 'Disposable' concept, my Future class simply became Disposable. So this meant that in all use cases, the simplest way to get cancellation 'right' was to use the language built-in:

var taskFuture = StartSomeTask(); // returns Future

using (taskFuture) { // when this block is left, taskFuture is disposed
  // ... do some async work using taskFuture ...
  yield return taskFuture; // wait on taskFuture
}

In practice what this meant is that when the task was suspended to wait on taskFuture, it currently 'owned' the lifetime of that future. As a result, if the task itself were cancelled, the task scheduler would dispose the task, and because the task currently owned the lifetime of taskFuture, disposing the task disposed taskFuture.

Cancelling an already-complete future in this manner is totally safe, so the 'using' pattern ends up not having any downsides there - my Future implementation is basically first-come-first-serve, where if someone stores a result into a Future before you, they win (and you get an exception for trying to complete it twice), and if you cancel after a result has been stored into it the cancel is a no-op.

If you wanted to go further with the design of a task scheduler you could automatically cancel any futures a task is waiting on, but I decided not to do that since I didn't have an opportunity to think through all the consequences.

Essentially in my model, the v1.0 equivalent had tri-state Futures: Incomplete, CompletedWithResult, and CompletedWithError. Cancellation was introduced in a later rev of the API and added a fourth 'Disposed' state.
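A sketch of that four-state machine; the names and the exact error behavior are assumptions for illustration.

function SettableFuture() {
  this.state = "incomplete"; // -> "result" | "error" | "disposed"
  this.value = undefined;
}

SettableFuture.prototype._settle = function (state, value) {
  if (this.state === "disposed") return;         // result is no longer wanted
  if (this.state !== "incomplete") {
    throw new Error("Future already completed"); // first writer wins
  }
  this.state = state;
  this.value = value;
};

SettableFuture.prototype.complete = function (result) { this._settle("result", result); };
SettableFuture.prototype.fail = function (error) { this._settle("error", error); };

SettableFuture.prototype.dispose = function () {
  // Disposing an already-completed future is deliberately a no-op.
  if (this.state === "incomplete") this.state = "disposed";
};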

From a callback perspective I ended up with two callbacks, one for 'completion' (either with result or error - the premise being that if you handle one you always want to handle the other) and another for cancellation.

The split between functions that affect a Future and functions that consume it is definitely an interesting one. To be honest, my API never made the distinction - a Future is always read/write, and the state change model generally ensures that if the Future is mishandled, an exception will be thrown somewhere to notify you that you screwed up. But I think that capability split is probably important, and I don't know how cancellation fits into that model - in particular since ES6/ES7 seem very focused on using object capability as a security model, you don't want passing a Future across a boundary to give some third party the ability to fake the result of a network request or something like that.

# Tab Atkins Jr. (12 years ago)

On Fri, Apr 19, 2013 at 4:02 PM, Kevin Gadd <kevin.gadd at gmail.com> wrote:

I'm not sure there's a perfect solution, yeah. Cancellation is definitely not something you want every listener to be responsible for in a multiple-listener scenario - most scenarios I've dealt with are ones where a single task is responsible for the lifetime of a future - deciding whether to cancel it, etc - usually the task that started it, but other tasks may be monitoring its progress. For example, a simple 'memoization' primitive might subscribe to a future in order to store its result when it is completed, in order to return a cached result the next time. The memoization primitive would never have a reason to cancel the future - that would be up to the task that actually requested the work. So it's tricky.

.NET's standard library uses the 'cancellation token' primitive that Ron described, and I feel that's a pretty low-risk way to encapsulate cancellation, but it loses the benefits of having cancellation baked into the future itself - when I cancel a task via a CancellationToken, for any subscribers to know about cancellation, I'll have to complete the Future (with some sort of special TaskCancelledError instead of a result?) or drop it on the floor and never complete it. So it creates a need for side-channel communication in all cancellation scenarios, and it requires all consumers to know whether or not a given Future can be cancelled. Maybe this is unavoidable.

Right, I think there are only two basic approaches that work:

  1. Have a Future subclass that allows downstream consumers to affect its value.
  2. Have a Future subclass that can only have one consumer, so the consumer/producer distinction is safe to blur.

There's nothing intrinsically wrong with #1 - it's already the case that XHR can do so, for example. The problem is figuring out the best way for other consumers to respond.

One simple possibility would be to just expose accept/resolve/reject on the returned Future itself. Calling any of these cancels the Future (if the Future has a notion of cancellation), and forces it to adopt the passed state as appropriate. The constructor would take two callbacks, one for normal operation (called immediately) and one to handle cancellation (called when needed). This has the nice benefit that a consumer can provide a default value for other consumers to use, and it doesn't require any new codeflow channels.

Another possibility is to add a cancel method on the returned Future, and also expose a cancel listener, akin to the progress listener. This exposes a new codeflow, which has to be handled (unsure whether it should be a real codeflow channel, automatically passing down the chain if unhandled, or just considered a subset of the rejection channel, auto-rejecting the output promise if it's unhandled).

While #2 is probably appropriate for some cases, I think it's less general, and providing both might be confusing. (Not to mention the confusion of having a brand new behavior for observing.)

The split between functions that affect a Future and functions that consume it is definitely an interesting one. To be honest, my API never made the distinction - a Future is always read/write, and the state change model generally ensures that if the Future is mishandled, an exception will be thrown somewhere to notify you that you screwed up. But I think that capability split is probably important, and I don't know how cancellation fits into that model - in particular since ES6/ES7 seem very focused on using object capability as a security model, you don't want passing a Future across a boundary to give some third party the ability to fake the result of a network request or something like that.

Yes, the capability split is very important to allow reasoning about it sanely, for the reason you give. Maintaining this principle suggests the proper way forward pretty clearly, I think.

If you pass something across a security boundary, and you're afraid of them being able to fake it, you're afraid of them doing anything to it. This suggests that you actually want something like Q's Deferred, which is basically a naked resolver object that you can pull a promise off of.

It would be so nice if JS had multiple return values, so we could let cancellable future-returning APIs just return a naked resolver as their second value, and only clueful call sites would need to care about it. ^_^ Instead, we'll probably need to have API variants that instead return something like a Deferred, or that return a pair of a future and a resolver.
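For reference, a minimal Deferred in roughly the Q sense, using today's Promise: the resolver can be handed to trusted code while everyone else only sees the read-only future.

function defer() {
  var resolver = {};
  var promise = new Promise(function (resolve, reject) {
    resolver.resolve = resolve;
    resolver.reject = reject;
  });
  return { promise: promise, resolver: resolver };
}

// Usage sketch: only the holder of d.resolver can settle the value;
// consumers handed d.promise can merely observe it.
var d = defer();
d.promise.then(function (v) { console.log("got", v); });
d.resolver.resolve(42);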

# Ron Buckton (12 years ago)

From: Tab Atkins Jr. [mailto:jackalmage at gmail.com] Sent: Friday, April 19, 2013 3:14 PM

On Fri, Apr 19, 2013 at 2:24 PM, Ron Buckton <rbuckton at chronicles.org> wrote:

Progress notifications are a bit of a mixed bag for me. My first implementations of Future didn’t have them, but I had a number of instances in the C# world where I needed a way to update the Future consumer for the benefit of UI notifications. The recent discussion on EventStreams has me wondering if progress notifications could serve a similar purpose for Futures, where a progress notification could be triggered for each instance of an event in the stream, where resolve is triggered for single use events (like DOMContentLoaded), or when the event producer is signaling that it has concluded processing.

I've also given thought to this, but, even though they're structurally similar at first glance, the use-cases for ProgressFuture and EventStream are actually quite different. A ProgressFuture is still fundamentally focused on the fulfilled/rejected state, while an EventStream is fundamentally focused on updates.

I can agree; though there is some overlap in capabilities, they are definitely focused on solving different problems.

With Progress notifications relegated to a subclass, would chained Futures also be ProgressFuture instances? The benefit of having progress as an optional member of the Future class is that a chained Future could also enlist in progress notifications, but that is less of a concern if the Future created by a .then from a ProgressFuture is itself a ProgressFuture.

I'm currently of the opinion that progress updates should probably automatically bubble through chains. (I'm currently doing something similar with EventStreams, making completion bubble through.)

I agree. Whether using a ProgressFuture or not, these kinds of notifications need to bubble down the chain of descendants until they reach an interested caller. The problem would be how to allow interested callers to intercept that message without themselves being a ProgressFuture instance. This leads me back to the point that progress notifications make some sense on Future. Alternatively, the result of calling then or catch on a ProgressFuture should then return a new ProgressFuture.

Granted, I imagine most APIs won't have need of the progress API. XHR has some limited use cases, though transmitting files using the File API, or a Future that sits atop a Worker or WebSocket, could make use of progress in a meaningful fashion.

Cancellation was an attempt to support something akin to Cooperative Cancellation as it exists in .NET’s implementation, as well as a means to ‘unlisten’ or forget a Future if you no longer need its value. In the API proposal, by default cancel would only essentially remove a Future (and therefore its chained descendants) from receiving resolve/reject/progress signals from its antecedent. Cancellation also would allow the ability to prevent the resolve/reject/progress callbacks from executing in a later turn of the dispatcher to prevent the execution of now unneeded code. It can also be used to abort an XHR request or shut down a WebWorker.

The problem with cancellation, as stated, is that it allows one consumer to affect the state that another consumer sees. Right now, that's not a possibility, which lets you reason about futures much more easily. (The fact that you can do this in jQuery's promises, for example, makes them extremely hard to work with generically.)

As Alex says, creating a Future subclass that's single-listener would avoid this issue, so cancelling would probably work.

If you take out the optional "antecedent" argument in my gist, cancel here would be designed to remove the Future and only its chained descendants from receiving a resolve or reject signal from that future's antecedent. There is the question of how to handle cancellation notifications as they bubble down the chain. One option is to have cancellation merely reject the future with something like a "CancelledError". Descendants in the chain with a reject handler could inspect the argument to their handler to make a determination about what to do when it is cancelled. The downside is that this adds additional operations to the dispatcher to asynchronously handle the cancellation state. Ideally I would want to prevent all of these possible operations from completing and only have to deal with handlers for cancellation cleanup.

The cancelCallback argument proposed for Promise is designed to provide the means of interpreting that cancellation signal. You might, for example, perform an XHR GET in the following fashion:

function fetchAsync(url) {
  var xhr = new XMLHttpRequest();
  return new Promise(function (resolver) {
    xhr.onload = function () { resolver.resolve(xhr.responseText); };
    xhr.onerror = function () { resolver.reject(xhr.statusText); };
    xhr.open("GET", url, true);
    xhr.send();
  }, function () {
    // cancelCallback: invoked if the promise is cancelled
    xhr.abort();
  });
}

Unfortunately, it does make the function a bit odd with respect to cancellation, as I'm forced to lift the xhr reference out of the initCallback. In an earlier rev, I would have the user create the PromiseResolver (called PromiseSource at the time) first, and it had a .promise property that returned the Promise. The constructor to PromiseSource took in a cancellation callback. That looked more like the following:

function fetchAsync(url) {
  var xhr = new XMLHttpRequest();
  // The cancellation callback is supplied to the resolver side up front.
  var source = new PromiseSource(function () { xhr.abort(); });
  xhr.onload = function () { source.resolve(xhr.responseText); };
  xhr.onerror = function () { source.reject(xhr.statusText); };
  xhr.open("GET", url, true);
  xhr.send();
  return source.promise;
}

The second callback in the Promise constructor would be a means to provide user-supplied cancellation logic, such as updating the UI in response to a cancelled pending operation. I debated whether it should be possible to also cancel the antecedent tasks from a chained descendant, and it is a very tentative part of the API.

Most consumers of Futures won't be using the constructor - they'll just be handed an already-constructed future for them to listen to. So, using the constructor as the channel to pass in cancellation info won't really help. :/

In the .NET world, if you were passing along a CancellationToken you could use the Register method to queue up a callback to execute if cancellation was requested. If we considered the cancellation-by-future approach, it might look something like this: gist.github.com/rbuckton/5424214

In that way, we can simply use a Future for cancellation, and attach custom cleanup steps using Future#done. There are a few caveats with this approach. Since there are no properties on a Future to know whether it has been cancelled, it's harder to cooperate with cancellation logic. Also, cancellation-by-future doesn't prevent chained descendants from possibly queuing tasks on the dispatcher to handle a possible reject signal, and doesn't remove pending tasks from the dispatcher that have not yet been processed and are no longer needed.

In building SPAs, I've found that it's often necessary to cancel an async operation. A user might make a request for a page of data that requires a server-side fetch, but then decide to switch to the next page or perform an operation that might require a server-side sort, etc. In those cases, the previously requested Futures that may have not yet completed are merely costing us additional turns or wasting network bandwidth.

The reason options is expected to be an object/object literal is that this can be extended to add additional control over the resulting continuation. This could include the ability to prevent cancellation (in the event .cancel is supported with the antecedents argument), or the ability to signal chained descendants only if a future is rejected and not to forward resolve to those descendants. This also allows additions to the options in later versions without breaking consumers. In this vein, it could be useful to have an options argument for the Future constructor as well, although I haven't yet had an occasion to need one.

Interesting ideas!

The options argument might also be a valid place to trap additional signals such as cancel, progress, etc. without polluting then/done.
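A sketch of how that might look from the consumer's side; the option names used here ("synchronous", "cancel") and the helpers are invented for illustration and are not taken from the proposal or the DOM Futures draft:

future.then(function (value) {
  return process(value);          // a very short-running continuation
}, function (err) {
  return recover(err);
}, {
  synchronous: true,              // run in the resolving turn, not a later one
  cancel: function () {
    hideSpinner();                // react to a cancellation signal without
  }                               // widening then/done's main signature
});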

Finally, the additional API definitions are convenience APIs for certain scenarios. By default, I expect both Promise.resolve and PromiseResolver#resolve to only hook the resolve/reject of a Promise from the same library. Calling Promise as a function (or adding a Promise.of static method) might be the only Promise "interop" with userland Future libraries, though I would almost prefer that no "interop" between libraries exist for a DOM or ES version, but rather would require explicitly creating a new Future and using its resolver to interoperate with the userland promise.

Correct, and I expect the same. (That said, we can probably at least adopt Promises/A+ adoption semantics, where thenables with behavior that is anywhere near sane can be automatically converted into Futures.)

As I understand it, that usually involves checking for a callable "then" data property on the result. The drawback to that is that most Promise/A-like (or "thenable") implementations allocate a new Promise object for their "then" result, which we effectively ignore. Trapping a callable "done" would incur less overhead, but either way we fall into the trap of knowing the value is really a Promise/A-like through duck-typing.
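For concreteness, the duck-typed assimilation being described looks roughly like this; adoptIfThenable is a made-up helper name, and this is a sketch rather than the Promises/A+ resolution algorithm:

function adoptIfThenable(value, resolver) {
  if (value && typeof value.then === "function") {
    // thenable: let it settle the resolver (most Promise/A-like libraries
    // allocate a fresh promise for this then() call, as noted above)
    value.then(function (v) { resolver.resolve(v); },
               function (e) { resolver.reject(e); });
    return true;
  }
  return false; // plain value: the caller accepts it directly
}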

The Promise.any, Promise.every, and Promise.some methods are very similar to what is in DOMFutures, except that the current version of the DOMFutures spec leaves a few things unspecified that could be problematic for end users. According to the spec for Future.every, the order of the resolved values is arbitrary, based on the order in which the provided Futures are resolved. As a consumer of Future.every, I would want the Array of resolved values to be in the same order as the futures that were provided to the method, so that I can tell which value belongs to which future. This may or may not be how the polyfill behaves, but it is not explicitly (or at least clearly) specified in the DOMFutures spec. The same can be said for the Array of errors in the Future.some API definition.

Yes, the order of the result array does need to be in the same order as the input futures. Good catch. I'll file this in a new top-level thread.
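A sketch of the intent (not the spec algorithm): each resolved value lands at the index of the future that produced it, regardless of completion order. The array-taking signature is just for brevity; the DOM Futures methods take the futures as arguments.

function every(futures) {
  return new Future(function (resolver) {
    var results = new Array(futures.length);
    var remaining = futures.length;
    if (remaining === 0) return resolver.resolve(results);
    futures.forEach(function (future, i) {
      future.done(function (value) {
        results[i] = value;                      // index matches the input position
        if (--remaining === 0) resolver.resolve(results);
      }, function (err) {
        resolver.reject(err);                    // first rejection wins
      });
    });
  });
}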

I'm working on pushing up my working implementation to codeplex.com in the near future following these semantics.

I added AggregateError as a tentative Error object as a means to provide a single Error object to use as the value for the reject handler, and have considered wrapping all non-Error values passed to the reject method on the resolver into an Error object to set expectations for the consumer. That way, the argument to the reject callback is always recognizable as an Error, and it is easier to test the argument to provide appropriate handling. For instance, without Error wrapping or AggregateError, I would have to resort to duck typing or Array.isArray to determine whether the errors provided are the result of a single error or multiple errors from a call to Future.some. This is, again, inspired by the .NET AggregateException, though I would likely send the single underlying Error if the AggregateError would only contain a single error.
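A sketch of that wrapping idea, with AggregateError modeled as a plain Error carrying a hypothetical .errors property:

function wrapRejection(errors) {
  var wrapped = errors.map(function (e) {
    return e instanceof Error ? e : new Error(String(e));
  });
  if (wrapped.length === 1) {
    return wrapped[0];                 // single failure: send the underlying Error
  }
  var aggregate = new Error("One or more operations failed.");
  aggregate.name = "AggregateError";
  aggregate.errors = wrapped;          // hypothetical property holding the parts
  return aggregate;
}

A Future.some-style combinator could then reject with wrapRejection(errors) so reject handlers always receive an Error.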

No, Future.some always passes an array into the reject handler. No need to duck-type, unless you're passing the same reject handler to multiple futures. If you are, Array.isArray() is reliable.

My main interest in having an Error (or AggregateError) is for the .stack property, assuming we can have a .stack that is meaningful when debugging Futures.

The remaining APIs are designed to help support await-style asynchronous development, as possibly afforded by generators or any future addition of something like "await" to the language. To that end, static methods like Promise.yield() and Promise.sleep() can help let the dispatcher/event-loop do other work in the middle of a long-running async function, or pause for a period of time before continuing, such as with animation. Promise.delay() is similar to sleep, but resolves with a value.

Promise.run() is close to setImmediate, where the result is the future value of the callback. In this case, Promise#cancel() is then effectively a call to clearImmediate. In a similar fashion, Promise.start() is roughly equivalent to setTimeout with its Promise#cancel() then synonymous with clearTimeout.
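Rough sketches of those conveniences, built on the timer APIs; the naming follows the Future polyfill discussed later in this thread, and the cancel wiring via clearTimeout/clearImmediate is omitted for brevity:

Future.sleep = function (ms) {
  return new Future(function (resolver) {
    setTimeout(function () { resolver.resolve(undefined); }, ms);
  });
};

Future.delay = function (ms, value) {
  return new Future(function (resolver) {
    setTimeout(function () { resolver.resolve(value); }, ms);
  });
};

Future.run = function (callback) {
  return new Future(function (resolver) {
    setImmediate(function () {
      try { resolver.resolve(callback()); }
      catch (e) { resolver.reject(e); }
    });
  });
};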

I expect these kind of conveniences to show up eventually, but likely in separate specs. For example, Future.sleep() or Future.delay() would be defined alongside setTimeout().

Considering that Futures already specify, in part, how to handle queuing of tasks for the dispatcher/event-loop, it seems to make sense to have them as part of the Future spec. They're functionally similar to the Timers API, though their implementation might differ depending on the platform (e.g., using process.nextTick in Node.js). I am, however, basing this on the convenience methods of a similar nature that exist on Task in .NET.

# Ron Buckton (12 years ago)

On Fri, Apr 19, 2013 at 5:18 PM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

On Fri, Apr 19, 2013 at 4:02 PM, Kevin Gadd <kevin.gadd at gmail.com> wrote:

One simple possibility would be to just expose accept/resolve/reject on the returned Future itself. Calling any of these cancels the Future (if the Future has a notion of cancellation), and forces it to adopt the passed state as appropriate. The constructor would take two callbacks, one for normal operation (called immediately) and one to handle cancellation (called when needed). This has the nice benefit that a consumer can provide a default value for other consumers to use, and it doesn't require any new codeflow channels.

I'd be more interested in having a creatable FutureResolver with a .future accessor property for those cases. Given the current API, it's possible (but not pretty) to do something like:

function someCancelable() {
  var cancel;
  var future = new Future(function(resolver) {
    cancel = function() { resolver.reject("cancelled"); };
    // other async work
  });
  return { cancel: cancel, future: future };
}

var { cancel, future } = someCancelable();
future.then(...).done(...);
elt.onclick = cancel;

Though this still wouldn't really prevent unnecessary tasks from being queued on the dispatcher.

It would be so nice if JS had multiple return values, so we could let cancellable future-returning APIs just return a naked resolver as their second value, and only clueful call sites would need to care about it. ^_^ Instead, we'll probably need to have API variants that instead return something like a Deferred, or that return a pair of a future and a resolver.

That sounds like what I just mentioned in gist.github.com/rbuckton/5424214.

# Tab Atkins Jr. (12 years ago)

On Fri, Apr 19, 2013 at 6:37 PM, Ron Buckton <rbuckton at chronicles.org> wrote:

From: Tab Atkins Jr. [mailto:jackalmage at gmail.com]

On Fri, Apr 19, 2013 at 4:02 PM, Kevin Gadd <kevin.gadd at gmail.com> wrote:

One simple possibility would be to just expose accept/resolve/reject on the returned Future itself. Calling any of these cancels the Future (if the Future has a notion of cancellation), and forces it to adopt the passed state as appropriate. The constructor would take two callbacks, one for normal operation (called immediately) and one to handle cancellation (called when needed). This has the nice benefit that a consumer can provide a default value for other consumers to use, and it doesn't require any new codeflow channels.

I'd be more interested in having a creatable FutureResolver with a .future accessor property for those cases. Given the current API, its possible (but not pretty) to do something like:

That doesn't help our main use-cases, which is allowing you to get a cancelable future out of platform APIs, where the platform constructs the future for you.

It would be so nice if JS had multiple return values, so we could let cancellable future-returning APIs just return a naked resolver as their second value, and only clueful call sites would need to care about it. ^_^ Instead, we'll probably need to have API variants that instead return something like a Deferred, or that return a pair of a future and a resolver.

That sounds like what I just mentioned in gist.github.com/rbuckton/5424214.

It's inverted, actually, but it works out similarly. That might be the way to go - it lets you keep a single calling function, but still optionally send in cancellation notices.

# Ron Buckton (12 years ago)

I put up a rough DOM Future polyfill (with a few additions for experimentation) at: rbuckton/promisejs

It has:

  • Future
  • Future#then (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#done (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#catch (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future.of (extension. coerces thenables)
  • Future.isFuture (extension. Checks branding using a pseudo-symbol, cross-realm only for ES5+)
  • Future.resolve
  • Future.reject
  • Future.any
  • Future.some (with properly ordered reject array value)
  • Future.every (with properly ordered resolve array value)
  • Future.yield (extension. Helpful for "await"-style asynchrony or yielding to another waiting task in the dispatcher)
  • Future.sleep (extension. Helpful for "await"-style asynchrony and timeouts)
  • Future.sleepUntil (extension. Something like a SpinWait primitive for "await"-style asynchrony)
  • Future.run (extension. Runs a callback in a later turn or after a delay)
  • FutureResolver (not creatable)
  • FutureResolver#accept
  • FutureResolver#resolve (only chains branded Future instances)
  • FutureResolver#reject
  • Deferred (extension. Extracts and encapsulates the resolver for use cases like "cancellation-by-future")

I'm considering adding a Future#finally as well, as I've found some use cases in a Future-based AMD module loader I'm tinkering with.
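A possible shape for such a finally, sketched on top of then (assuming, as in the DOM Futures draft, that throwing from a reject handler rejects the derived future):

Future.prototype.finally = function (callback) {
  return this.then(function (value) {
    callback();
    return value;                      // preserve the resolved value
  }, function (err) {
    callback();
    throw err;                         // re-signal the rejection
  });
};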

Ron

# Mark S. Miller (12 years ago)

On Tue, Apr 23, 2013 at 5:16 PM, Ron Buckton <rbuckton at chronicles.org> wrote:

[...] a Future-based AMD module loader I'm tinkering with.

In that case, you might want to look at strawman:concurrency#amd_loader_lite and code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/ses/makeSimpleAMDLoader.js

# Ron Buckton (12 years ago)

Other than the Future polyfill all it does is roughly this:

        // extract the indices for built-in dependencies
        if ((exportsIndex = dependencies.indexOf("exports")) > -1) dependencies.splice(exportsIndex, 1);
        if ((moduleIndex = dependencies.indexOf("module")) > -1) dependencies.splice(moduleIndex, 1);
        if ((requireIndex = dependencies.indexOf("require")) > -1) dependencies.splice(requireIndex, 1);

        // ... other stuff

       // load and wait for all dependencies
        Future.every
            .apply(null, dependencies.map(load))
            .then(function (imports) {
                var exports = {};

                // reapply the built-in dependencies if requested.
                if (requireIndex > -1) imports.splice(requireIndex, 0, require);
                if (moduleIndex > -1) imports.splice(moduleIndex, 0, config);
                if (exportsIndex > -1) imports.splice(exportsIndex, 0, exports);

                if (typeof factory === "function") {
                    var result = factory.apply(null, imports);
                    if (result) {
                        exports = result;
                    }
                    else if (config.exports) {
                        exports = config.exports;
                    }
                }
                else if (factory) {
                    exports = factory;
                }
                resolver.resolve(exports);
            })
            .catch(resolver.reject)
            .done();

Though there's more to it to handle things like module concatenation, etc.

Ron


# Anne van Kesteren (12 years ago)

On Wed, Apr 24, 2013 at 1:16 AM, Ron Buckton <rbuckton at chronicles.org> wrote:

  • Future#then (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#done (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#catch (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)

Why is that not up to the resolver? I think I'm missing something here.

  • Future.of (extension. coerces thenables)
  • Future.resolve

What would be the difference between these?

  • Future.some (with properly ordered reject array value)
  • Future.every (with properly ordered resolve array value)

These are now fixed in the specification.

I added the simple static completed future constructors (accept/resolve/reject). I'm holding off a bit on the library until there are more implementations and some feedback.

-- annevankesteren.nl

# Anne van Kesteren (12 years ago)

On Wed, Apr 17, 2013 at 4:46 PM, Anne van Kesteren <annevk at annevk.nl> wrote:

I don't find the whole who owns what discussions very interesting to be honest. If it was up to me JavaScript would just be part of the W3C and we would not have to deal with that layer of distraction.

I got some private feedback that this might have come across as some kind of power grab. What I meant, as in my mind TC39 and JavaScript are kinda intertwined, is that TC39 would become part of the W3C rather than ECMA, so coordination between those working on browser APIs and those working on the language would be even more of a given. Taking JavaScript away from TC39 never crossed my mind, but reportedly it did for others, so sorry about the confusion.

-- annevankesteren.nl

# Ron Buckton (12 years ago)

On Wed, Apr 24, 2013 at 7:08 AM, Anne van Kesteren <annevk at annevk.nl> wrote:

On Wed, Apr 24, 2013 at 1:16 AM, Ron Buckton <rbuckton at chronicles.org> wrote:

  • Future#then (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#done (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)
  • Future#catch (tentatively adds the options argument in my proposal for forcing synchronous execution of continuation)

Why is that not up to the resolver? I think I'm missing something here.

In this case I am emulating a feature of .NET Tasks, similar to this: msdn.microsoft.com/en-us/library/system.threading.tasks.taskcontinuationoptions.aspx. This allows the consumer of the future to make the determination as to whether their continuation should execute synchronously or asynchronously. From the MSDN documentation:

"Specifies that the continuation task should be executed synchronously. With this option specified, the continuation will be run on the same thread that causes the antecedent task to transition into its final state. If the antecedent is already complete when the continuation is created, the continuation will run on the thread creating the continuation. Only very short-running continuations should be executed synchronously."

This is often used to reduce the overhead of scheduling the continuation task when the continuation is very lightweight. I say "tentatively" as I am not sure it's worth keeping, but I wanted to experiment with it a bit. It does allow for a bit of "cheating", though, as you can schedule a then/done synchronously to get the value synchronously if it's available, so I might drop it if it has an adverse impact. I have found that it is helpful in the "cancellation-by-future" scenario: if all scheduling is asynchronous, it's difficult to properly time cancellation. For example:

function someAsync(canceler) {
  return new Future(function (resolver) {
    /* b */ var handle = setImmediate(function () {
      /* e */ resolver.resolve();
    });
    /* c */ if (canceler) canceler.done(function () {
      /* f */ clearImmediate(handle);
    });
  });
}

/* a */ var cancelSource = new Deferred();
var future = someAsync(cancelSource.future);

/* d */ cancelSource.resolve();

The order of events is:

Turn 0 (T0):
  a. execution starts in the current turn (T0)
  b. setImmediate schedules the callback on the event loop to be executed in turn T1
  c. the continuation is added to the canceler during T0
  d. the canceler is resolved during T0; its continuations will be scheduled in T2

Turn 1 (T1):
  e. the future from someAsync is resolved

Turn 2 (T2):
  f. the continuation for the canceler is executed, which is too late to cancel.

There are at least two solutions to this:

  1. Provide the ability to schedule the continuation at (c) synchronously (similar to .NET Tasks and the proposed options argument; a sketch follows this list).
  2. Provide the ability to resolve the canceler at (d) synchronously (possibly by exposing the "synchronous" flag in the DOM Futures spec to the developer).
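To make option (1) concrete, the registration at (c) in the example above could pass the proposed options argument; this is a hedged sketch, and both the flag name and the third-parameter position on done are illustrative only:

/* c */ if (canceler) {
  canceler.done(function () {
    /* f */ clearImmediate(handle);
  }, null, { synchronous: true });   // runs in the same turn as (d)
}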

Please note that "cancellation-by-future" is an alternative to providing either Future#cancel or some kind of Future.cancellable(init) -> { future, cancel }. I would still much rather have a formal Future#cancel, both as a way to explicitly cancel a future and its chained descendants (since I may not care about the value of the Future anymore) and as a way to provide cooperative cancellation.

  • Future.of (extension. coerces thenables)
  • Future.resolve

What would be the difference between these?

Future.resolve either accepts the value or resolves a Future, but not a "thenable" (i.e., it must be a branded Future, including cross-realm). Future.of is similar to Future.resolve but will coerce a "thenable" (though it ideally looks for a callable "done" if one is also present, to prevent the possible and unnecessary overhead of allocating a new "thenable").
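Illustrating the distinction just described (Future.of is a polyfill extension, not part of the DOM Futures draft, and the exact coercion rules here are illustrative):

var branded  = Future.accept(42);
var thenable = { then: function (onAccept) { onAccept(42); } };

Future.resolve(branded);    // ok: chains on a branded Future (cross-realm included)
Future.resolve(thenable);   // not coerced: the thenable is treated as a plain value
Future.of(thenable);        // coerced into a real Future that resolves with 42
Future.of(42);              // plain values are simply accepted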

  • Future.some (with properly ordered reject array value)
  • Future.every (with properly ordered resolve array value)

These are now fixed in the specification.

I added the simple static completed future constructors (accept/resolve/reject). I'm waiting a bit with the library until there's more implementations and some feedback.

I'm not yet sold on having both accept and resolve on the resolver. In the .NET world, a Task for a Task (e.g. Task<Task<T>>) is just that, and you have to unwrap the Task with something like a TaskCompletionSource<T>, which is explicitly like FutureResolver#accept. Libraries like Q.js automatically assume a Future of a Future is just a Future, implicitly unwrapping like FutureResolver#resolve. When I started digging into Promise/Future for JavaScript a few years back I was primarily invested in the first camp due to my experience with Futures in a type-safe language. Actively using Futures in a dynamic language has pushed me more towards simplifying a nested Future into a single Future due to its simplicity. I can see how providing both mechanisms could possibly satisfy developers in either camp, but could be confusing to API consumers if one library generally uses FutureResolver#accept and another library generally uses FutureResolver#resolve.
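For concreteness, the two behaviors described above look roughly like this under the resolver API in this thread (illustrative only, not normative):

var inner = Future.accept("value");

var nested = new Future(function (resolver) {
  resolver.accept(inner);    // .NET-style: a Future<Future<string>>
});

var flattened = new Future(function (resolver) {
  resolver.resolve(inner);   // Q-style: chains on `inner`
});

nested.done(function (v) { console.log(v === inner); });   // true: handlers see the inner Future
flattened.done(function (v) { console.log(v); });          // "value": one level unwrapped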

Is there a lot of interest to support both, or any previous discussion on the topic that I could peruse to understand the arguments for having either "accept", "resolve", or both?

Thanks, Ron

# Tab Atkins Jr. (12 years ago)

On Wed, Apr 24, 2013 at 10:10 AM, Ron Buckton <rbuckton at chronicles.org> wrote:

I'm not yet sold on having both accept and resolve on the resolver. In the .NET world, a Task for a Task (e.g. Task<Task<T>>) is just that, and you have to unwrap the Task with something like a TaskCompletionSource<T>, which is explicitly like FutureResolver#accept. Libraries like Q.js automatically assume a Future of a Future is just a Future, implicitly unwrapping like FutureResolver#resolve. When I started digging into Promise/Future for JavaScript a few years back I was primarily invested in the first camp due to my experience with Futures in a type-safe language. Actively using Futures in a dynamic language has pushed me more towards simplifying a nested Future into a single Future due to its simplicity. I can see how providing both mechanisms could possibly satisfy developers in either camp, but could be confusing to API consumers if one library generally uses FutureResolver#accept and another library generally uses FutureResolver#resolve.

Is there a lot of interest to support both, or any previous discussion on the topic that I could peruse to understand the arguments for having either "accept", "resolve", or both?

Q and similar libraries don't actually assume that a Future<Future<x>> is a Future<x>. (Well, not all of them.) Properly, they're treating Future as a monad, and .then() as the monadic operation, so you can "chain" future-returning functions easily (this is the core value of monads).

(Some libraries do indeed fully flatten the types, but that's bad behavior imo, and as far as I can tell it's not what DOM Futures do.)

# Mark S. Miller (12 years ago)

On Wed, Apr 24, 2013 at 10:14 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

On Wed, Apr 24, 2013 at 10:10 AM, Ron Buckton <rbuckton at chronicles.org> wrote:

I'm not yet sold on having both accept and resolve on the resolver. In the .NET world, a Task for a Task (e.g. Task<Task<T>>) is just that, and you have to unwrap the Task with something like a TaskCompletionSource<T>, which is explicitly like FutureResolver#accept. Libraries like Q.js automatically assume a Future of a Future is just a Future, implicitly unwrapping like FutureResolver#resolve. When I started digging into Promise/Future for JavaScript a few years back I was primarily invested in the first camp due to my experience with Futures in a type-safe language. Actively using Futures in a dynamic language has pushed me more towards simplifying a nested Future into a single Future due to its simplicity. I can see how providing both mechanisms could possibly satisfy developers in either camp, but could be confusing to API consumers if one library generally uses FutureResolver#accept and another library generally uses FutureResolver#resolve.

Is there a lot of interest to support both, or any previous discussion on the topic that I could peruse to understand the arguments for having either "accept", "resolve", or both?

Q and similar libraries don't actually assume that a Future<Future<x>> is a Future<x>.

Yes it does. Except of course that we call these "promises". Please see the extensive discussions on the Promises/A+ site about why this flattening behavior is important.

# Andreas Rossberg (12 years ago)

On 24 April 2013 19:20, Mark S. Miller <erights at google.com> wrote:

On Wed, Apr 24, 2013 at 10:14 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

Q and similar libraries don't actually assume that a Future<Future<x>> is a Future<x>.

Yes it does. Except of course that we call these "promises". Please see the extensive discussions on the Promises/A+ site about why this flattening behavior is important.

That strikes me as a very odd design decision, since it would seem to violate all sorts of structural and equational invariants. Mark, could you summarize the rationale for this, or provide a more specific link to the appropriate bit of the discussion you are referring to?

# Domenic Denicola (12 years ago)

From: Andreas Rossberg [rossberg at google.com]

Mark, could you summarize the rationale for this, or provide a more specific link to the appropriate bit of the discussion you are referring to?

I'm not Mark, and he might have something more specific in mind, but this summary was pretty helpful:

gist.github.com/ForbesLindesay/5392612

# Tab Atkins Jr. (12 years ago)

On Wed, Apr 24, 2013 at 10:51 AM, Domenic Denicola <domenic at domenicdenicola.com> wrote:

From: Andreas Rossberg [rossberg at google.com]

Mark, could you summarize the rationale for this, or provide a more specific link to the appropriate bit of the discussion you are referring to?

I'm not Mark, and he might have something more specific in mind, but this summary was pretty helpful:

gist.github.com/ForbesLindesay/5392612

These aren't very good reasons, unfortunately. :/

The JQP... problem can be solved by a single "flatten" operation added to the API. This is a totally reasonable operation, same as it would be for Arrays.

The identity function for monads is the monadic lift function - Future.accept() in the case of Futures. The only reason passing the identity function in at all works is because we have special magic that lets authors return non-Futures from the callbacks (which I think is a good idea, mind you). It's not actually an identity function, though.

The synchronous analog objections don't make sense. The analogue is returning the Error object itself, which could certainly be reasonable at times. Recursive unwrapping is like making "return new Error()" identical to "throw new Error()". I don't even understand the attempted analogy with f() and g() - chaining takes care of unwrapping in all reasonable cases like that. Similarly, I don't think the Stack Frame analogy has been well-thought-out - chaining Futures is the analogy to stack frames, and that works just fine.

I'm not sure what the Parametricity Argument is.

However, all of this falls before the simple fact that recursive unwrapping means that no one can ever create any object with a .then() method on it ever again. If .then() is treated as the normal monadic operation over Futures (that is, it unwraps one layer of its return value), then you can safely return objects with a .then() method by wrapping them in a Future with a method like Future.accept(). (You can safely do this to all values unless you're explicitly asking for chaining to happen.)
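A small example of that last point, assuming single-level (monadic) unwrapping in .then() and a lifting method like Future.accept():

var record = {
  then: function () { /* unrelated application method, not a promise */ }
};

Future.accept(null).then(function () {
  return Future.accept(record);        // lift the "thenable-looking" object
}).done(function (value) {
  console.log(value === record);       // true: .then() unwrapped exactly one
});                                    // level, so record arrived intact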

# Ron Buckton (12 years ago)

Resending due to a mail error.

On Wed, Apr 24, 2013 at 11:18 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

On Wed, Apr 24, 2013 at 10:51 AM, Domenic Denicola <domenic at domenicdenicola.com> wrote:

From: Andreas Rossberg [rossberg at google.com]

Mark, could you summarize the rationale for this, or provide a more specific link to the appropriate bit of the discussion you are referring to?

I'm not Mark, and he might have something more specific in mind, but this summary was pretty helpful:

gist.github.com/ForbesLindesay/5392612

These aren't very good reasons, unfortunately. :/

The JQP... problem can be solved by a single "flatten" operation added to the API. This is a totally reasonable operation, same as it would be for Arrays.

The identity function for monads is the monadic lift function - Future.accept() in the case of Futures. The only reason passing the identity function in at all works is because we have special magic that lets authors return non-Futures from the callbacks (which I think is a good idea, mind you). It's not actually an identity function, though.

The synchronous analog objections don't make sense. The analogue is returning the Error object itself, which could certainly be reasonable at times. Recursive unwrapping is like making "return new Error()" identical to "throw new Error()". I don't even understand the attempted analogy with f() and g() - chaining takes care of unwrapping in all reasonable cases like that. Similarly, I don't think the Stack Frame analogy has been well-thought-out - chaining Futures is the analogy to stack frames, and that works just fine.

I'm not sure what the Parametricity Argument is.

However, all of this falls before the simple fact that recursive unwrapping means that no one can ever create any object with a .then() method on it ever again. If .then() is treated as the normal monadic operation over Futures (that is, it unwraps one layer of its return value), then you can safely return objects with a .then() method by wrapping them in a Future with a method like Future.accept(). (You can safely do this to all values unless you're explicitly asking for chaining to happen.)

Given the collision with possible existing uses of "then", it might be better to either use a symbol to brand an object as a thenable, or use a symbol to define a compatible then- (or done-) like method, similar to @iterator. It would only be necessary to import and apply the symbol to interoperate with a native Future, and subclasses of Future would implicitly have support. Alternatively, only "recursively unwrap" native Futures or Future subclasses, and require an explicit coercion using something like Future.of.
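A purely hypothetical sketch of the symbol idea; neither the symbol nor the coerce helper below exists anywhere, they only illustrate the shape:

var asyncThen = typeof Symbol === "function" ? Symbol("thenable") : "@@thenable";

function coerce(value, resolver) {
  var method = value != null ? value[asyncThen] : undefined;
  if (typeof method === "function") {
    // only objects opting in via the symbol are adopted
    method.call(value,
      function (v) { resolver.resolve(v); },
      function (e) { resolver.reject(e); });
  } else {
    // everything else, including objects with a plain .then(), is a value
    resolver.accept(value);
  }
}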

Ron

# Dean Landolt (12 years ago)

On Wed, Apr 24, 2013 at 2:18 PM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:

On Wed, Apr 24, 2013 at 10:51 AM, Domenic Denicola <domenic at domenicdenicola.com> wrote:

From: Andreas Rossberg [rossberg at google.com]

Mark, could you summarize the rationale for this, or provide a more specific link to the appropriate bit of the discussion you are referring to?

I'm not Mark, and he might have something more specific in mind, but this summary was pretty helpful:

gist.github.com/ForbesLindesay/5392612

These aren't very good reasons, unfortunately. :/

The JQP... problem can be solved by a single "flatten" operation added to the API. This is a totally reasonable operation, same as it would be for Arrays.

I'll do you one better and suggest the JQP... problem can go away completely the day TC39 decides on a built-in -- let's call it Promise for the sake of argument. A new spec, call it Promises/A++, could then be defined which states that this class is to be included in the prototype chain of compatible promises. For the sake of interoperable shimming, libraries should create this global if it doesn't exist (this part's a little sketchy but I can't think of a good alternative that doesn't involve abusing __proto__).

Now, instead of a ducktest for a then method, the promise check would instead be specified as instanceof Promise. For the sake of backward compatibility, libraries can choose to add a Promise.prototype.then so that these new promises work with old promise libs too. If it comes to it, old promises can be made to work in the new regime with a little __proto__ hacking.

The only reason thenables won is because library authors didn't have a formal namespace to hang these things. This is what ultimately made assimilation necessary, and it's a non-issue as soon as TC39 specifies a Promise base class.
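A rough sketch of that arrangement, with everything hypothetical, including the assumption that a global Promise base class exists to chain against:

function LibPromise(init) {
  // library-specific wiring would live here
}

if (typeof Promise === "function") {
  LibPromise.prototype = Object.create(Promise.prototype);
  LibPromise.prototype.constructor = LibPromise;
}

// consumers test the brand instead of ducktesting for "then"
function isPromise(value) {
  return typeof Promise === "function" && value instanceof Promise;
}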

[snipped the rest, but FWIW I totally agree w/ Tab]

# Ron Buckton (12 years ago)

Be it Promise or Future, instanceof won't work across frames. It would likely still require a Future.isFuture/Promise.isPromise just as we need to have Array.isArray now. That is, of course, unless we can use symbols for branding in a fashion that library authors could use without forking their library for pre- and post- ES6 (or later) versions.


# Claus Reinke (12 years ago)

Now, instead of a ducktest for a then method the promise check would instead be specified as instanceof Promise.

Picking a message at random for an interjection, there is something that seems to be missing in this discussion:

Promises are only one kind of thenables (the asynchronous thenables).

Ducktesting for 'then' will match things that aren't thenables (in the JS monadic sense), and identifying thenables will match things that aren't Promises.

The type separation between thenables and Promises makes sense because there are library routines generically based on thenables that will work with Promises and with other thenables. At least, that is the experience in other languages.

Also, much of the discussion seems not to be specific to Promises, asking for a standard answer to the question of reliable dynamic typing in JS.

Claus

# Allen Wirfs-Brock (12 years ago)

On Apr 24, 2013, at 9:17 PM, Ron Buckton wrote:

Be it Promise or Future, instanceof won’t work across frames. It would likely still require a Future.isFuture/Promise.isPromise just as we need to have Array.isArray now. That is, of course, unless we can use symbols for branding in a fashion that library authors could use without forking their library for pre- and post- ES6 (or later) versions.

Note that ES6 will fully support subclassing of built-in constructors. Instances of built-ins are branded, and subclass instances are expected to be branded the same as instances of the original built-in superclass. Inheritable built-in prototype methods, when appropriate, are specified as doing a brand check. The actual branding mechanism used for built-ins is not observable and hence is left as an implementation detail. The mechanism is required to work across frames. The new operator protocol in ES6 is enhanced to ensure that subclass instances are branded appropriately. However, there is a technique that can be used to accomplish essentially the same thing in any current JS implementation that supports __proto__.

For an overview of ES6 support for subclassing built-ins, see meetings:subclassing_builtins.pdf. For the backwards-compatibility hack that accomplishes the same thing, see slide 16.
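As a hedged sketch of what that support is intended to allow (class syntax per the ES6 proposal; the Future constructor shape from the DOM Futures draft stands in for any branded built-in):

class LoggingFuture extends Future {
  then(onAccept, onReject) {
    console.log("continuation attached");
    return super.then(onAccept, onReject);   // instances keep the built-in brand
  }
}

new LoggingFuture(function (resolver) { resolver.accept(1); })
  .then(function (v) { console.log(v); });   // logs, then prints 1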