Promise.cast and Promise.resolve

# Paolo Amadini (7 years ago)

I have a suggestion about the current draft of the Promises specification that derives from my experience on the Mozilla code base. I'm not familiar with how the specification process works, but discussion in this group seemed like a good start.

Those familiar with the subject may find that this topic has already been discussed (see 1 and 2), but I think I can provide some additional data that has not been previously examined (at least in the discussions I found online) and that may help in determining the final direction.

In our code base, historically we only defined the "Promise.resolve" function. I had a look at its 490 uses: mxr.mozilla.org/mozilla-central/search?string=Promise.resolve

It looks like 487 of them don't have a promise as an argument, in which case "cast" and "resolve" would behave in exactly the same way.

In the remaining cases (namely Task.jsm and AsyncShutdown.jsm), I suspect we would always use either "resolve" (to be safe) or "cast" (to be more efficient), and I'd lean toward "cast" since we'd like Task iterations to be faster. It also appears that "cast" has been selected as the implicit conversion for Promise.all([valueOrPromise]) and Promise.race([valueOrPromise]), maybe for the same efficiency reason.

Moreover, only after I read Domenic's comment 3 did I realize that, despite the name, the function that makes fewer guarantees about the returned object is "cast", while "resolve" actually makes more guarantees.

So, if you're really interested in casting an "unsafe" promise to the type of your promise library, it's theoretically better to use "resolve" than "cast". By "unsafe" I mean a promise created internally that may have been modified. In the case of "foreign" promises from other libraries, using "cast" and "resolve" is again exactly the same thing.
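For concreteness, here is a minimal sketch of the two draft functions' semantics as discussed in this thread. Promise.cast never shipped, so castLike and resolveLike are hypothetical names modeling the draft behavior on top of today's Promise:

```javascript
// castLike models the draft Promise.cast: pass genuine promises through
// by identity, wrap everything else. resolveLike models the draft
// Promise.resolve: always allocate a fresh promise, even for promise inputs.
function castLike(x) {
  return x instanceof Promise ? x : new Promise(r => r(x));
}

function resolveLike(x) {
  return new Promise(r => r(x));
}
```

For the 487 non-promise call sites, the two are indistinguishable; only when handed an existing promise do they differ (identity vs. a fresh wrapper).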

My take is that the difference between "cast" and "resolve" is so subtle that I don't think it captures developer intention. In other words, if I see some code calling Promise.cast(otherPromise), I can't be sure that the developer made an intentional choice over Promise.resolve(otherPromise).

In the case of non-promises, where the behavior is really identical, I expect that, with both functions available, by now the Mozilla code base would contain roughly 50% calls to "cast" and 50% calls to "resolve", unless we spent time defining a guideline favoring one function over the other, with the associated education costs.

Assuming we succeeded in establishing a guideline of always using "cast" for non-promises, we'd have 490 calls to "cast" and no calls to "resolve" at all. I also don't see a real-world use case for "resolve" in the future. If we ever needed to be really sure about the object we're handling (which I don't think will be necessary), we could always use "new Promise(r => r(otherPromise))" instead.

My conclusion is that the "resolve" function could be removed.

Optionally, "cast" could be renamed to "resolve". It has in fact been mentioned in 2 that some promise libraries already implement an optimized "resolve" that works mostly like "cast". Against this suggestion, there were concerns about the length of the name and about establishing a precedent for the name "cast".

I don't think the name is really important, but I'd rather not have two almost identical functions to choose between, which would require a style guideline and maybe another lint check in addition to the ones that JavaScript already requires.

What do you think?

# Kevin Smith (7 years ago)

My take is that the difference between "cast" and "resolve" is so subtle that I don't think it captures developer intention. In other words, if I see some code calling Promise.cast(otherPromise), I can't be sure that the developer made an intentional choice over Promise.resolve(otherPromise).

Note that Promise.resolve(p), where p is a promise, does not return an eventual for the eventual value of p. Rather, it returns an eventual for p, literally. Promises should be considered fully parametric (in other words, nestable).

If you want to convert a maybe-promise into a promise, then you want cast.

# Juan Ignacio Dopazo (7 years ago)

That's not true as per the last consensus. There was Promise.fulfill(p) which would've nested promises, but it was left for ES7 to investigate fulfill+flatMap.

# Kevin Smith (7 years ago)

I don't think that statement is correct. Domenic, Allen, Mark, can we get a ruling? : )

# Domenic Denicola (7 years ago)

I am hesitant to retread, but I suppose we should clear up the confusion of the last few messages.

Indeed, things evolved slightly since the last consensus, to the point where the solution that made the most sense was to store the promise's resolution value (whatever it may be) then do the unwrapping entirely inside then. So effectively, Promise.resolve is now what Promise.fulfill used to be; however, because then unwraps, in practice then users will not notice a difference (i.e. promises for promises are not observable). ES7-timeframe monad additions to the standard library could indeed make use of the stored resolution value.

To bring things down to earth more:

The following code will retain GC references to all of the variables, including p2, p3, and p4:

var p1 = Promise.cast(1);
var p2 = Promise.resolve(p1);
var p3 = Promise.resolve(p2);
gcRoot.p4 = Promise.resolve(p3);

gcRoot.p4 maintains a reference to p3, which maintains a reference to p2, which maintains a reference to p1.

Whereas, given

var p1 = Promise.cast(1);
var p2 = Promise.cast(p1);
var p3 = Promise.cast(p2);
gcRoot.p4 = Promise.cast(p3);

we have that all the variables are equal, i.e. p1 === p2, p2 === p3, and p3 === gcRoot.p4.


With all that said, I somewhat agree with the sentiment of the OP that Promise.resolve is pointless, and even (from a memory consumption POV) a footgun. However, we settled on keeping it, based on a few weak arguments:

  • Symmetry with Promise.reject(r), which is shorthand for new Promise((resolve, reject) => reject(r)); we should also have Promise.resolve(x) as shorthand for new Promise((resolve, reject) => resolve(x)).
  • Sometimes you want to guarantee object non-identity so a shorthand is useful (Tab Atkins had some use case involving CSS specs).

You can peruse the issue tracker for more discussion. We're certainly retreading old ground here.

# Mark S. Miller (7 years ago)

First, "q = Promise.fulfill(p)" never made any sense: if p is pending, then q cannot be fulfilled -- there's no non-promise value to fulfill it to. Remember, a promise is fulfilled when its (original) .then would invoke its first (onFulfilled) callback. "fulfilled", "rejected", "pending" are at the .then level of abstraction.

At the .flatMap level of abstraction, the fulfill-like concept is "accept". Promise q accepts value (whether promise or non-promise) v when q's (original) .flatMap would invoke its first (onAccept) callback. "accept" is fully parametric, at the .flatMap level of abstraction, where "fully parametric" is meaningful.

So, except for the name of the old operation we're no longer using, I agree with Domenic. It is not that we renamed "Promise.fulfill" to "Promise.resolve", we killed "Promise.fulfill" because it was incoherent, and we renamed "Promise.accept" to "Promise.resolve".

For people concerned only about the .then level of abstraction, I see little reason to ever use Promise.resolve rather than Promise.cast, and much reason to prefer Promise.cast. But fwiw we did agree to express the difference now, to be perceived by future and experimental uses of .flatMap.

# Paolo Amadini (7 years ago)

[Replying here since I'm not sure about how to move the discussion to the issue tracker without losing the context of Mark's observation.]

On 28/01/2014 20.28, Mark S. Miller wrote:

For people concerned only about the .then level of abstraction, I see little reason to ever use Promise.resolve rather than Promise.cast, and much reason to prefer Promise.cast. But fwiw we did agree to express the difference now, to be perceived by future and experimental uses of .flatMap.

I don't have a background on the .flatMap level of abstraction. In the following scenario, will users of getMyPromise() observe different behavior with .flatMap if the library code used "cast" rather than "resolve"? If so, this can definitely lead to confusion when .flatMap is introduced in the future, since the callers cannot be sure about what the library did internally, assuming the library authors didn't intentionally choose one or the other.

// ------ MyLibrary.js

function getMyPromise() {
  var a = condition ? getMyOtherPromise() : "value2";
  return Promise.resolve(a);
}

function getMyOtherPromise() {
  return Promise.resolve("value1");
}

# Quildreen Motta (7 years ago)

So, if I understand correctly, it basically boils down to:

Promise.resolve :: a → Promise(a)

Promise.cast :: a → Promise(a)
Promise.cast :: Promise(a) → Promise(a)

But right now we'll only get .then, so transformations with nested promises will flatMap automatically:

Promise.then :: (Promise(a)) => ((a → MaybePromise(b)), (a → MaybePromise(b)))

Promise.then :: (Promise(Promise(a))) => ((a → MaybePromise(b)), (a → MaybePromise(b)))

But later, once flatMap is introduced, you'll be able to deal with nested promises without breaking parametricity:

Promise.flatMap :: (Promise(a)) => (a → Promise(b))

If that's correct, I don't see any use cases for Promise.resolve right now, unless a library were to provide a corresponding, unspecified flatMap implementation.

# Forbes Lindesay (7 years ago)

This accidentally ended up not being forwarded to the list


Forbes Lindesay:

It seems that the use case of Promise.resolve is to allow you to create a Promise for a Promise. It seems to me that we should wait until it is observable before we define a method for creating it. I feel like we're going about things backwards at the moment. We can commit to discussing Promises for Promises later, but if we add this now it's as if we're committing to actually having Promises for Promises.

I'm definitely in favour of removing Promise.resolve.

Mark S. Miller:

That has always been my preference as well. But given the current stage things are in, is it worth fighting to remove it? If you feel it is, please repost your objection to es-discuss or promises-unwrapping.

# Brendan Eich (7 years ago)

Quildreen Motta wrote:

If that's correct, I don't see any use cases for Promise.resolve right now, unless a library were to provide a corresponding, unspecified flatMap implementation.

IIRC (and I may not; endless promises threads make me sleepy) this is the reason to have Promise.resolve: to enable some to blaze the flatMap trail ahead of ES6.

# Domenic Denicola (7 years ago)

Kind of. Since Promise.resolve(x) is just new Promise(resolve => resolve(x)), the only reasons for including that shorthand method are symmetry and the idea that this operation might be common enough to necessitate a shorthand. See my earlier reply.

On the other hand, that is part of the reason we moved all the unwrapping to the then side; as Mark puts it, using the "accept" operation instead of "resolve", albeit while still using the "resolve" name. (Which is pretty confusing when you try to explain it, but I swear, it makes sense...)

# Brendan Eich (7 years ago)

I buy it.

Paolo may not. Mozilla XUL JS does seem full of .resolve calls, some of which might want to be .cast calls instead.

# Andreas Rossberg (7 years ago)

On 28 January 2014 21:21, Quildreen Motta <quildreen at gmail.com> wrote:

Promise.then :: (Promise(a)) => ((a → MaybePromise(b)), (a → MaybePromise(b)))
Promise.then :: (Promise(Promise(a))) => ((a → MaybePromise(b)), (a → MaybePromise(b)))

This type is not really accurate. Due to recursive unwrapping, `then' has no type you could express in any reasonable type system -- the argument type can be an arbitrary tower of nested promises (and the maximum one for each use). As you note below, this makes the operation inherently non-parametric. (Which is why I still hold up that recursive unwrapping is a bad idea, especially as the core primitive.)

But later, once flatMap is introduced, you'll be able to deal with nested promises without breaking parametricity:

Promise.flatMap :: (Promise(a)) => (a → Promise(b))

If that's correct, I don't see any use cases for Promise.resolve right now, unless a library were to provide a corresponding, unspecified flatMap implementation.

The V8 implementation provides it under the name `chain', with the obvious semantics.

# Brendan Eich (7 years ago)

+1 on chain as name, not flatMap, but Haskell'ers should weigh in. Wait, no. :-P

# Andreas Rossberg (7 years ago)

Well, Haskell does not have flatMap. Scala has.

# Brendan Eich (7 years ago)
# Quildreen Motta (7 years ago)

The V8 implementation provides it under the name `chain', with the obvious semantics.

Considering that the fantasy-land specification uses the chain name as well, this is really neat.

# Forrest L Norvell (7 years ago)

In Haskell, 'flatMap' is the operation on applicative functors that corresponds to the 'bind' operation on monads. Monads were introduced to Haskell to provide a mechanism for doing effectful programming that restricted the areas of danger around I/O, and the reification of the underlying category theory in which monads are a specialization of applicative functors only came later. Rationalizing all of that would require breaking backwards compatibility of most Haskell code that depends on the standard library. Scala wasn't stuck with this legacy, so they kept the more general term for the operation.

'chain' is a fine name, especially because "monadic promises" work fine in a language with type systems, but the JS version, when / if it comes, is only going to be a distantly rhyming version of a monad, for reasons that we've gone over to death in the past. JS should give names to operations that will actually give people a clue to their purpose.

F

# Andreas Rossberg (7 years ago)

What's all this then?

www.google.com/search?q=haskell+flatmap

When the first page of results consists of StackOverflow questions and blog posts it can't be real. :)

Seriously, Haskell calls it >>=. (It also has concatMap, which is just the same function restricted to lists.)

# Brendan Eich (7 years ago)

Andreas Rossberg wrote:

Seriously, Haskell calls it >>=.

Right, that thing. Did it have the f-l-a-t-... name in Haskell, or is it pronounced b-i-n-d always, monads or not?

(It also has concatMap, which is just the same function restricted to lists.)

Rhyming.

# Kevin Smith (7 years ago)

The V8 implementation provides it under the name `chain', with the obvious semantics.

I like chain too, and the V8 implementation in general. However, note that the V8 implementation does not comply with the current spec. According to the current spec, the output side of then is recursively unwrapped. This is, of course, incompatible with the AP2 design principle that the consumer determines the unwrapping strategy, and not the provider.

# Sam Tobin-Hochstadt (7 years ago)

On Tue, Jan 28, 2014 at 7:47 PM, Brendan Eich <brendan at mozilla.com> wrote:

Andreas Rossberg wrote:

Seriously, Haskell calls it >>=.

Right, that thing. Did it have the f-l-a-t-... name in Haskell, or is it pronounced b-i-n-d always, monads or not?

No, it's always called "bind" in Haskell. The name flatMap was invented for Scala, to the best of my knowledge.

# Quildreen Motta (7 years ago)

I suppose using names that don't give you a hint of the meaning of the operation fits perfectly Haskell's (and Scalaz's) "Avoid Success At All Costs" tradition.

# Brendan Eich (7 years ago)

lulz.

And so we will go with 'chain'.

# Kris Kowal (7 years ago)

In this case, a half pursuit of type purity is a side quest at the expense of users. Having two ways to resolve and two ways to observe a promise is unnecessarily confusing. In my experience, one method like "then", that unwraps recursively, and one function, like "Promise.cast", that automatically lifts if necessary, and "then" handlers that return into the waiting hands of "Promise.cast" are coherent and ergonomic. Having a choice between "cast" and "resolve" and a choice between "then" and "chain", will leave developers unnecessarily confused and worried all the while they use or abandon Promises as too subtle.

For quite some time, grammarians have been losing a war to impose Latin purity on English, in the case of split infinitives. Not long ago, the famous phrase "to boldly go" was a racy departure from convention because, in Latin, the infinitive verb "to go" is a single word, and you simply would not break the word in half to put a poetic adverb between the syllables. English is not Latin, and JavaScript is not Haskell.

Auto-lifting/unwrapping promises are beautiful. Purely monadic promises are beautifully captured by type theory. But, I would pick one or the other over one with multiple personalities. I would pick "Promise.cast" and "then", let complexity melt off the spec, and stop worrying.
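The recursive unwrapping Kris describes is observable in the semantics that eventually shipped: a then handler never receives a promise, however deeply one is nested, and a promise returned from a handler is adopted rather than delivered as a value. A quick illustration:

```javascript
// A then handler never sees a promise: nesting collapses before delivery.
Promise.resolve(Promise.resolve(Promise.resolve(42)))
  .then(v => console.assert(v === 42, "handler receives 42, not a promise"));

// A promise returned from a handler is adopted, not passed along as-is.
Promise.resolve(1)
  .then(v => Promise.resolve(v + 1))
  .then(v => console.assert(v === 2, "adopted value, not a promise"));
```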

# Ryan Scheel (7 years ago)

This seems like something that can be deferred to ES7.

# Jake Verbaten (7 years ago)

But, I would pick one or the other over one with multiple personalities

This is a good mentality to have. Consider picking the one that allows the other one to be implemented in user land.

This would allow both parties to benefit from the integrated tooling & performance boosts of it being a native primitive. i.e. everybody wins.

# Brendan Eich (7 years ago)

Kris Kowal wrote:

I would pick "Promise.cast" and "then", let complexity melt off the spec, and stop worrying.

Seems not everyone agrees, especially since Promise.resolve is all over the place.

I wonder with task.js like solutions whether we'll actually have lots of hand-.then'ing of promises. I should hope not!

# Brendan Eich (7 years ago)

Ryan Scheel wrote:

This seems like something that can be deferred to ES7.

Which "this"? .chain is deferred to ES7, and in V8 experimentally.

# Kevin Smith (7 years ago)

Auto-lifting/unwrapping promises are beautiful. Purely monadic promises are beautifully captured by type theory. But, I would pick one or the other over one with multiple personalities. I would pick "Promise.cast" and "then", let complexity melt off the spec, and stop worrying.

AP2 might not be what everyone wants, but it is a coherent design, and (more importantly) it has consensus. We should use that as the starting point and focus on whether the current spec correctly implements AP2.

# Kevin Smith (7 years ago)

This seems like something that can be deferred to ES7.

Which "this"? .chain is deferred to ES7, and in V8 experimentally.

Deferring chain is, if not future-hostile to chain, then "future-irritant".

The way Promises are currently spec'd, the return value of the then callback is itself recursively unwrapped using then, rather than the more appropriate single-unwrapping of chain. Single unwrapping of the output side is consistent with the AP2 design principle that the provider does not determine the unwrapping policy.

You cannot consistently implement AP2 without chain from the get-go. It is the primitive unwrapping function which then should be calling.
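The layering Kevin argues for can be sketched with a toy, synchronous promise (purely illustrative; chain never shipped, and real promises are asynchronous): chain is the primitive that unwraps exactly one level, and then is derived from it by unwrapping recursively.

```javascript
// Toy synchronous "promise": always fulfilled, value stored literally.
class Toy {
  constructor(value) { this.value = value; }
  static accept(v) { return new Toy(v); }          // fully parametric
  chain(f) { return f(this.value); }               // unwraps exactly one level
  then(f) {                                        // derived from chain
    return this.chain(v => (v instanceof Toy ? v.then(f) : f(v)));
  }
}
```

With `const nested = Toy.accept(Toy.accept(7))`, a chain callback receives the inner Toy, while a then callback receives 7.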

# Brendan Eich (7 years ago)

On Jan 28, 2014, at 10:14 PM, Kevin Smith <zenparsing at gmail.com> wrote:

This seems like something that can be deferred to ES7.

Which "this"? .chain is deferred to ES7, and in V8 experimentally.

Deferring chain is, if not future-hostile to chain, then "future-irritant".

The way Promises are currently spec'd, the return value of the then callback is itself recursively unwrapped using then, rather than the more appropriate single-unwrapping of chain. Single unwrapping of the output side is consistent with the AP2 design principle that the provider does not determine the unwrapping policy.

You cannot consistently implement AP2 without chain from the get-go. It is the primitive unwrapping function which then should be calling.

Preaching to the choir. Andreas R. and I were talking about this after today's meeting. So why isn't chain in the spec?

# Sébastien Cevey (7 years ago)

On 29 Jan 2014 03:42, "Brendan Eich" <brendan at mozilla.com> wrote:

And so we will go with 'chain'.

I assume it has been pointed out before that Underscore/LoDash have popularised their own "chain" function/concept?

underscorejs.org/#chaining

Hopefully it's distant enough from Promise-world to alleviate most confusion, but I thought I'd point it out for completeness.

On the flatMap name, as a Scala developer, I find it quite conveniently describes the "flattening" nature of the operation, although it only really works thanks to the parallel with map:

M[A]#map(f: A => B): M[B]

M[A]#flatMap(f: A => M[B]): M[B] (not M[M[B]] as it would with map - hence the "flat")

But given that 'then' is a sort of a mix of map/flatMap (and flatThen is terrible), I don't think there is an obvious parallel currently.

Seeing as flatMap/chain is being discussed though, have there been plans to add a map method on Promises (that would map to a new Promise without unwrapping returned promises)?

And have there been any discussion to add other monadic types to ES, eg Option/Maybe, Try, etc, or to make existing types support more monadic operations, eg Array.prototype.flatMap (or "chain"?).

Thanks, and sorry if this has been discussed before (is there any way to "catch up" on such topics if it has, short of reading piles of minutes?).

# Paolo Amadini (7 years ago)

While the getMyPromise question hasn't been answered yet and still isn't clear to me, from the other posts I think it can be reworded this way:

var p1 = Promise.resolve(Promise.cast(1));
var p2 = Promise.cast(Promise.cast(1));

p1 and p2 will have a different internal state, but there is no way to tell the difference between the two (and no way to write a regression test to check that the internal state is correct) until a new method is defined in the future.

Is this correct? What about the case of:

var p3 = new Promise(resolve => resolve(Promise.cast(1)));

Will the state of p3 be more similar to that of p1 or that of p2?

I'll file a bug when I understand the situation better.

Paolo

# Paolo Amadini (7 years ago)

On 29/01/2014 5.12, Kris Kowal wrote:

In this case, a half pursuit of type purity is a side quest at the expense of users. Having two ways to resolve and two ways to observe a promise is unnecessarily confusing. In my experience, one method like "then", that unwraps recursively, and one function, like "Promise.cast", that automatically lifts if necessary, and "then" handlers that return into the waiting hands of "Promise.cast" are coherent and ergonomic. Having a choice between "cast" and "resolve" and a choice between "then" and "chain", will leave developers unnecessarily confused and worried all the while they use or abandon Promises as too subtle.

As an imperative programmer, I confirm I'm left worried and confused ;-)

But I understand that functional programming might need more complexity. I would be less confused if functionality that is only relevant to functional programming were separate and inaccessible by default, even if supported and optimized at the implementation level.

For example:

let p1 = FunctionalPromise.accept(...)
FunctionalPromise.chain(p1, ...)

And no "Promise.accept" or "p1.chain" methods to be chosen over "Promise.cast" and "p1.then".

Paolo

# Quildreen Motta (7 years ago)

On 29 January 2014 10:45, Paolo Amadini <paolo.02.prg at amadzone.org> wrote:

On 29/01/2014 5.12, Kris Kowal wrote:

In this case, a half pursuit of type purity is a side quest at the expense of users. Having two ways to resolve and two ways to observe a promise is unnecessarily confusing. In my experience, one method like "then", that unwraps recursively, and one function, like "Promise.cast", that automatically lifts if necessary, and "then" handlers that return into the waiting hands of "Promise.cast" are coherent and ergonomic. Having a choice between "cast" and "resolve" and a choice between "then" and "chain", will leave developers unnecessarily confused and worried all the while they use or abandon Promises as too subtle.

As an imperative programmer, I confirm I'm left worried and confused ;-)

But I understand that functional programming might need more complexity.

It is, actually, more simplicity. The concept of Promise.resolve and Promise.chain is simpler than Promise.cast and Promise.then (i.e.: they represent orthogonal concepts, not "complected"). Promise.cast and Promise.then may be, arguably, easier to work with, from a user POV, since you don't need to make as many choices. I would argue that it would make more sense to write cast and then in terms of resolve and chain, however. But this seems to have already been decided.

Answering your previous question:

var p1 = Promise.resolve(Promise.cast(1));
var p2 = Promise.cast(Promise.cast(1));

p1 will be a Promise(Promise(1)), whereas p2 will be Promise(1) — IOW, Promise.resolve will just put anything inside of a Promise, regardless of that being a promise or a regular value, whereas Promise.cast will put the innermost value that is not a promise inside of a promise.

var p3 = new Promise(resolve => resolve(Promise.cast(1)));
// is the same as:
var p3 = Promise.resolve(Promise.cast(1));

# Kevin Smith (7 years ago)

It is, actually, more simplicity. The concept of Promise.resolve and Promise.chain is simpler than Promise.cast and Promise.then (i.e.: they represent orthogonal concepts, not "complected"). Promise.cast and Promise.then may be, arguably, easier to work with, from a user POV, since you don't need to make as many choices. I would argue that it would make more sense to write cast and then in terms of resolve and chain, however. But this seems to have already been decided.

It does make more sense, and from my point of view it's not decided. What is decided is the AP2 design. That is:

  1. promises are fully parametric and,
  2. then recursively unwraps the input to its callbacks.

For the sake of coherency (aka conceptual integrity), then needs to be written in terms of chain. This issue needs to be resolved ASAP, however, since promises are making their way into engines now.

# Jonas Sicking (7 years ago)

On Wed, Jan 29, 2014 at 7:33 AM, Kevin Smith <zenparsing at gmail.com> wrote:

It is, actually, more simplicity. The concept of Promise.resolve and Promise.chain is simpler than Promise.cast and Promise.then (i.e.: they represent orthogonal concepts, not "complected"). Promise.cast and Promise.then may be, arguably, easier to work with, from a user POV, since you don't need to make as many choices. I would argue that it would make more sense to write cast and then in terms of resolve and chain, however. But this seems to have already been decided.

It does make more sense, and from my point of view it's not decided. What is decided is the AP2 design. That is:

  1. promises are fully parametric and,
  2. then recursively unwraps the input to its callbacks.

For the sake of coherency (aka conceptual integrity), then needs to be written in terms of chain. This issue needs to be resolved ASAP, however, since promises are making their way into engines now.

Resolving this would be much appreciated. Gecko recently turned on Promises in our nightly builds, and already branched to our aurora builds. I believe Chrome is in a similar situation.

/ Jonas

# Rick Waldron (7 years ago)

Per Resolution (rwaldron/tc39-notes/blob/master/es6/2014-01/jan-30.md#conclusionresolution-3)

  • Promise.cast is renamed to Promise.resolve (remove old Promise.resolve)
  • Keep then, reject chain (NOT DEFER, reject!)
  • Renaming .cast thus removes over-wrapping (always-wrap) deoptimization in old Promise.resolve
# Tab Atkins Jr. (7 years ago)

On Tue, Feb 4, 2014 at 10:55 AM, Rick Waldron <waldron.rick at gmail.com> wrote:

Per Resolution (rwaldron/tc39-notes/blob/master/es6/2014-01/jan-30.md#conclusionresolution-3)

  • Promise.cast is renamed to Promise.resolve (remove old Promise.resolve)
  • Keep then, reject chain (NOT DEFER, reject!)
  • Renaming .cast thus removes over-wrapping (always-wrap) deoptimization in old Promise.resolve

Okay, so this is discarding the previous consensus, right? Monadic promises are thrown out?

This breaks async maps forever, then. :/ With the previous consensus, the result promise from an async map could be observed directly with .chain(), and it worked great regardless of what was stored inside the map.

With only flat promises, if you store a promise in an async map and then try to retrieve it later, the error callback gets called if the key wasn't in the map, or if the key was in the map but the stored value was a rejected promise. You have to spread your program logic over both callbacks to catch that case (or more likely, people will just write programs that break occasionally because they assumed that the reject branch of the AM.get() promise is only for failed lookups, as the documentation probably says).

Similarly, if you store a forever-pending promise in the async map, it's impossible to ever retrieve it. Even just a long-pending promise will stall your program's logic while it waits, preventing you from updating UI when the map returns and then completing the operation when the stored promise finally resolves.

The only way to get a working async map is to defensively store your values in a single-element array (or some other non-promise wrapper object) and unwrap them on retrieval.

Pretending that all promises are identical identity-less wrappers that can be arbitrarily collapsed is such a mistake. >_<

Le sigh.

# Brendan Eich (7 years ago)

Tab Atkins Jr. wrote:

On Tue, Feb 4, 2014 at 10:55 AM, Rick Waldron<waldron.rick at gmail.com> wrote:

Per Resolution (rwaldron/tc39-notes/blob/master/es6/2014-01/jan-30.md#conclusionresolution-3)

  • Promise.cast is renamed to Promise.resolve (remove old Promise.resolve)
  • Keep then, reject chain (NOT DEFER, reject!)
  • Renaming .cast thus removes over-wrapping (always-wrap) deoptimization in old Promise.resolve

Okay, so this is discarding the previous consensus, right? Monadic promises are thrown out?

Fundamental conflict between .chain and .then, on three levels:

  1. Committee "do both" / union proposals smell and bloat -- this is a problem per se.

  2. All the standard combinators call .then not .chain.

  3. Including .chain requires always-wrapping resolve, observability trumps optimizability.

There's really a 0 implicit in this:

  0. Monadic vs. non-monadic is an exclusive or -- you can't "have it all".

This breaks async maps forever, then. :/ With the previous consensus, the result promise from an async map could be observed directly with .chain(), and it worked great regardless of what was stored inside the map.

With only flat promises, if you store a promise in an async map and then try to retrieve it later, the error callback gets called if the key wasn't in the map, or if the key was in the map but the stored value was a rejected promise. You have to spread your program logic over both callbacks to catch that case (or more likely, people will just write programs that break occasionally because they assumed that the reject branch of the AM.get() promise is only for failed lookups, as the documentation probably says).

This does seem like a problem in the async map API, but I bet it can be solved without taking the hit of 1-3 above. Overloading get's return value is the root of the evil. Hand-boxing disambiguates the r.v.

Similarly, if you store a forever-pending promise in the async map, it's impossible to ever retrieve it. Even just a long-pending promise will stall your program's logic while it waits, preventing you from updating UI when the map returns and then completing the operation when the stored promise finally resolves.

This, I don't buy. You have to hand-box using {value: } or whatever.

The only way to get a working async map is to defensively store your values in a single-element array (or some other non-promise wrapper object) and unwrap them on retrieval.

Right, hand-box. That's the price of the XOR between Monadic and anti-Monadic promises, and anti-monadic already won in the library evolution of Promises.

Pretending that all promises are identical identity-less wrappers that can be arbitrarily collapsed is such a mistake. >_<

There are no great paths forward. I would like to make promises value objects, E-like. But it's too late. Promises happened, the DOM and ES6 need them, worse is better. You knew that already!

# Gorgi Kosev (7 years ago)

Not sure if this is correct, but: This might actually be (perhaps accidentally) one step closer to allowing promises for thenables in a cleaner way if the need arises in the future.

Since it's impossible to create a promise for a promise now, it would be theoretically possible (at some point in the future) to just

  • introduce Promise.of (wraps thenables)
  • make .then unwrap "true" promises only once
  • make resolve cast thenables into promises that recursively unwrap thenables

and be done. You'd have

  • Promise.of - wraps thenables, returns promises, wraps other values
  • resolve - converts thenables to recursively unwrapping promises, returns promises, wraps other values
  • .then(f, r) - for values returned by the handlers: recursively unwraps thenables, unwraps promises once, wraps other values

After that, you could use .then in a monadic way, provided you remember to protect thenables with Promise.of (should not be a problem with typed compile-to-js languages), e.g. resolve(Promise.of(thenable)) or p.then(x => Promise.of(thenable)). You could also implement .map using .then and Promise.of.

Previously you'd have to make resolve and Promise.cast the same before making .then unwrap a single level. Alternatively, you'd have to introduce .chain and have a confusing dual API (chain/then, resolve/cast).

# Gorgi Kosev (7 years ago)

Whoops, maybe scratch the "monadic" part - it would still be impossible to create a promise for a promise. It would however be possible to protect thenables.

# mfowle at eldergods.com (7 years ago)
  1. Committee "do both" / union proposals smell and bloat -- this is a problem per se.

Irony: .then() does both things and there's no way to stop it from doing both. The "do both" camp actually doesn't want both; it wants a way to do only one thing. We insist, we demand, on having a way to NOT do both: we want .chain, and those who want .then can keep recursing through .chain while it's still a promise: that's your .then.

Irony aside, this is really a very handwavy thing. How much bloat is it? Is the smell going to affect the neighbors? Is this Febreze-level spec smell, or knock-over-an-elephant-level bad smell?

Anyways,

To use "both" as loaded ammunition to justify an option that a) cuts off the possibility of doing only one thing b) mandates doing two steps is hilarious. In a Kafkaesque, my god, I'm a giant beetle everything still sucks kind of way.

#wat -> @domenic

  2. All the standard combinators call .then not .chain.

c) This isn't a reason against including .chain. d) .chain has one reasonably well known standard combinator on top of it: .then. e) Other "less" standard combinators will compose .chain, perhaps in ES.next.

.chain'ists don't care about higher levels. They want someone to not wall off the damned ground floor entrance. Compose away, put whatever higher order stuff you want in, awesome, users will love it. But leave a non-composed primitive underneath, please, something that JUST does one thing: signals listeners with a value when it's ready, whatever that value happens to be.

Using yet-higher order combinators to debase no-composition is #2.

  3. Including .chain requires always-wrapping resolve, observability trumps optimizability.

What does this affect beyond .resolve()? How much .resolve is out in the world?

I'd love more on this. What I've got is from the meeting notes:

"A resolve of a resolve does not create multiple levels of wrapping. Without chain this is not observable." -MM

".chain() requires over-wrapping/-unwrapping, without chain this is unobservable and therefore optimizable" -- says to reject chain -be

What use cases are being debated here? I haven't seen STH or LH's proposals for chain/compromise; they are just short sentence fragments in the notes. It's hard to get a grasp on the magnitude/maturity of the optimization being discussed here, this final point that swayed BE and YK.

There's really a 0 implicit in this:

  0. Monadic vs. non-monadic is an exclusive or -- you can't "have it all".

#3 is the only point I concede as being even possibly concrete, and it is a question of optimizability, the extent of which hasn't been spoken to. The first two are mere stylistic/opinion-based assertions which cannot be contested.

The proposed compromises had it all. It was literally the closing minutes of ES6 when a new thing floated into sight and changed one person's mind. A second member attested that they'd rather not hold the show up, that they'd rather stamp this than hold up ES6, and that all seems so ignoble, and it seems like .resolve() is such a tiny, trivial, barely used thing to make a "you can't have it all" case out of. Please help me, help others understand what #3 entails, what made it seem so onerous. Yes, .resolve() was going to have to keep creating new wrappings: that was the point of chain! What else is there?

TC39, ES6: please make the simpler uncomposed base case available. Un-finalize. Give users access to the base material. Allow people to engineer flow control: .then() insists on one brand of flow control and returns only a final done thing; please give the ability for promise chains to be used in not-yet-determined computation.

MF


Postscript:

There seems to be some kind of notion floating about that "hey, you can just box your promises in an object and they won't resolve." This is only true for producers. If you use any existing promises-based system, you will have no way whatsoever to pull a chain out of it. .chain allows consumers to decide how to consume (it's an opt-in thing); boxing has to be done producer-side (up-front design).

Please please please: let a promise return something that hasn't happened yet, so that that promise can be reasoned about.

Post-post-script:

The async map (/monadic) examples really obfuscate what's being talked about, to me. I respect that people can have conversations about it, and I'm sure you can understand the score marks as you talk back and forth over them, but I think it leaves a lot of people unable to grasp the issue itself:

The ability to return a value "something from the future" out of a promise, versus promises never ever being able to return anything but "done, here's the final thing."

# Brendan Eich (7 years ago)

mfowle at eldergods.com wrote:

#3 is the only point I concede as being even possibly concrete

Then you get a plonk, or at least a demeric, for ignoring the very concrete problem #2: combinators such as .all do not use .chain, they use .then. Also for ignoring #1 -- it doesn't matter that .then is compound, complex, multifarious -- it's a de-facto standard. And it is what combinators (#2) in the standard use, so you can't ignore the committee bloat or unioning cost.

Imagine we simplified to do just .chain and .cast, and combinators called .chain. We'd be breaking compatibility with extant promises libraries, and de-jure future-banning .then/.resolve as an API alternative. The existing promises library users would be faced with a porting problem whose size is hard to estimate, since they are relying on assimilation and unwrapping happening behind the scenes.

Promises happened. They are real. That's why I tweeted about #realism.

Wanting something value-ish, featureless, monadic, is a good goal for a differently named abstraction built on value objects. Promises are not that abstraction. Do the work to show the win, under a new name (Future?). Until then this is selective arguing and cheap talk.

# Brendan Eich (7 years ago)

Brendan Eich <mailto:brendan at mozilla.com> February 4, 2014 at 7:46 PM

Then you get a plonk, or at least a demeric,

"demerit', of course.

But I'm talking to a fake address. That's pretty bad, it makes me want to ignore you (plonk in USENET terms). Do better quickly, or else that's the right outcome.

# Mark Roberts (7 years ago)

Wanting something value-ish, featureless, monadic, is a good goal for a differently named abstraction built on value objects. Promises are not that abstraction. Do the work to show the win, under a new name (Future?).

I believe the win has already been shown, though: "Futures" would of course give you the same thing Promises do while supporting those additional use cases where you don't want recursive unwrapping. My concern is settling on a shortsighted implementation that artificially limits use cases.

.#2: combinators such as .all do not use .chain, they use .then

Is this such a big deal? If you have .chain then you have .then, so you can certainly "have it all".

Imagine we simplified to do just .chain and .cast, and combinators called

.chain. We'd be breaking compatibility with extant promises libraries

IMO, leave .all and the rest of the combinators as they are using .then. I would not mind so long as .chain is intact.

# mfowle at eldergods.com (7 years ago)

I like your looking forward to the future Future, but I really want to understand the conflict underneath Promises. I'm not trying to selectively pick issues out; I'm trying to get a general sense for where the conflict lies that kicked .chain out from completing the spec.

The existing promises library users would be faced with a porting problem whose size is hard to estimate

Naive go here:

Promise.prototype.then = function (fn) {
  function recurse(val) {
    if (val.chain) val.chain(recurse);
    else fn(val);
  }
  return this.chain(recurse);
};

combinators such as .all do not use .chain, they use .then.

I've argued for nothing but giving everyone as many combinators as they want in the spec. .then is a combinator of .chain; .all and .some are combinators of .then: fine, great, shiny. You all get to join the spec, we won't try to change you into doing something other than what you do now. Embrace the de rigueur standards!

These are not any reason the existing combinators cannot specify an underpinning .chain that doesn't auto-unwrap. De jure has the chance to rule that we can have it all, and the naive silly cheap code above moves towards that promise, starts to show that the two are one & the same, & it's just a question of what is pinned underneath.

I tend to believe there is for sure something real, and interesting under #3: that there was a real issue in implementation that drove ES6 at the very last hour away from .chain serving as the basis for .then. If you want something other than cheap words an unreality, perhaps you'd color in some of those details.

MF

# Brendan Eich (7 years ago)

Mark Roberts wrote:

Wanting something value-ish, featureless, monadic, is a good goal
for a differently named abstraction built on value objects.
Promises are not that abstraction. Do the work to show the win,
under a new name (Future?).

I believe the win has already been shown, though: "Futures" would of course give you the same thing Promises do while supporting those additional use cases where you don't want recursive unwrapping.

No. Future as a featureless value object would not have any of the identity-based problems reference types including Promises have. It would become its result when fulfilled. You would not be able to do much with a Future other than "then" it. There would be no thenable assimilation via duck-typing of thenables.

#2: combinators such as .all do not use .chain, they use .then

Is this such a big deal? If you have .chain then you have .then, so you can certainly "have it all".

That .then would be based on .chain is irrelevant. Anyone using .all would not get .chain and would potentially depend on all the .then magic. Purists trying to use only .chain would need to write their own .chainAll, etc., which would not be in the standard.

Imagine we simplified to do just .chain and .cast, and combinators
called .chain. We'd be breaking compatibility with extant promises
libraries

IMO, leave .all and the rest of the combinators as they are using .then. I would not mind so long as .chain is intact.

See above. This does not wash unless you aren't using combinators at all, or you are rolling your own -- in which case we have a bloated "do both" forked standard.

# Brendan Eich (7 years ago)

mfowle at eldergods.com wrote:

The existing promises library users would be faced with a porting problem whose size is hard to estimate

Naive go here;

Promise.prototype.then = function (fn) {
  function recurse(val) {
    if (val.chain) val.chain(recurse);
    else fn(val);
  }
  return this.chain(recurse);
};

That's not how .then works in any existing library I know of. It's not how .then works in Promises/A+ (promises-aplus.github.io/promises-spec). That is not going to interop with anything thenable. It introduces a new chainable (duck-typed, no function value test even, .chain truthy property).

There is no point in showing a trivial but incompatible layering of .then on .chain. That's not the issue.

Here is V8's .chain-based, self-hosted-in-V8's-JS-for-builtins-dialect, Promise.prototype.then implementation:

function PromiseThen(onResolve, onReject) {
  onResolve = IS_UNDEFINED(onResolve) ? PromiseIdResolveHandler : onResolve;
  var that = this;
  var constructor = this.constructor;
  return this.chain(
    function (x) {
      x = PromiseCoerce(constructor, x);
      return x === that
        ? onReject(MakeTypeError('promise_cyclic', [x]))
        : IsPromise(x)
        ? x.then(onResolve, onReject)
        : onResolve(x);
    },
    onReject
  );
}

Notice the recursion is not via .chain, rather (and only if IsPromise(x), a nominal type test) via .then.

I tend to believe there is for sure something real, and interesting under #3: that there was a real issue in implementation that drove ES6 at the very last hour away from .chain serving as the basis for .then. If you want something other than cheap words an unreality,

(I have no idea what "cheap words an unreality" means here. Should I care?)

perhaps you'd color in some of those details.

It's pretty simple. Adding .chain in addition to library-compatible .then means Promise.resolve must always make a new promise, even for a promise argument. Otherwise you can't compose resolve and chain to wrap once always and unwrap once only, respectively.

Drop .chain, use only .then, and resolve can cast instead of wrap, which some promises libraries that lack chain indeed do today (an optimization, #3 in your cited text, my point (3)).

Having the two APIs together requires what V8's self-hosted, .chain-extended implementation does: always wrapping the argument x to Promise.resolve in a new Promise. That's the problem I numbered (3). Clear?

# Ryan Scheel (7 years ago)

I do believe it meant to say 'cheap words and unreality'. Nothing to care about.

# Brendan Eich (7 years ago)

Ryan Scheel wrote:

I do believe it meant to say 'cheap words and unreality'. Nothing to care about.

Thanks -- makes me not care about replying, you are right!

# Kevin Smith (7 years ago)
  • Promise.cast is renamed to Promise.resolve (remove old Promise.resolve)
  • Keep then, reject chain (NOT DEFER, reject!)
  • Renaming .cast thus removes over-wrapping (always-wrap) deoptimization in old Promise.resolve

So basically, since September, we've seen:

  • A design proposal which achieved "consensus"
  • A spec implementation which completely ignored that "consensus"
  • And now a third design, which also throws out "consensus" and has been implemented by precisely no-one

And all of this over 200 to 300 LOC.

These shenanigans are deeply disturbing.

# Brendan Eich (7 years ago)

Kevin Smith wrote:

So basically, since September, we've seen:

  • A design proposal which achieved "consensus"
  • A spec implementation which completely ignored that "consensus"
  • And now a third design, which also throws out "consensus" and has been implemented by precisely no-one

And all of this over 200 to 300 LOC.

These shenanigans are deeply disturbing.

I agree. I kept my nose out of it until it was front-and-center on the last day of the last TC39 meeting, and in context of threads here that were pushing (I was on side with you) to add .chain.

The design proposal in September did not have consensus, for real. That proposal was changed after the September meeting (Mark has details). The implementation in V8 then added .chain based on an understanding from the May 2013 meeting (multiple, "Rashomon" views of consensus).

There's still some concern about supporting Promise subclassing, too, but I think it can be handled without controversy (I could be wrong). It entails using the public-named methods (in case of overrides) consistently, and not using internal methods or other such shortcuts some of the time. Basically "always use .then".

Can we regain consensus on the September status quo ante, minus any "do both" half-hearted compromises that don't work? Mark, what do you think?

# Paolo Amadini (7 years ago)

On 05/02/2014 9.43, Brendan Eich wrote:

Can we regain consensus on the September status quo ante, minus any "do both" half-hearted compromises that don't work? Mark, what do you think?

For what it's worth, I had been planning to file a bug on GitHub before I saw the most recent conclusion, which dispelled my concerns.

My conclusion, deriving from the explanations of the design that I received on this list (very informative by the way, thanks!) was that "Promise objects for imperative and functional use should be distinct".

In fact, in the design I was concerned about, when calling a function that returns a Promise object generated by code that uses "imperative" promises (i.e. expects "then" to be used), a consumer that works with "functional" promises (i.e. uses "chain") cannot reasonably know which Promise type is returned, potentially leading to subtle bugs.

Consider this scenario, with a typical documentation of an opaque imperative library: "The getResponse() method will return a Promise resolved with the response string." Will getResponse return a Promise(String) or a Promise(Promise(String))?

It may actually return one or the other. To be sure about the type when used in a functional context, the promise will require unwrapping at the boundary between the two usage styles. Using the same object for both uses might lead to confusion and forgetting the unwrapping.

I think the conclusion of using two distinct objects for that (with the other named FunctionalPromise, Future, or anything else) resolves the issue. Imperative Promise implementations may use those alternative primitives internally.

# Mark S. Miller (7 years ago)

On Wed, Feb 5, 2014 at 12:43 AM, Brendan Eich <brendan at mozilla.com> wrote:

Can we regain consensus on the September status quo ante, minus any "do both" half-hearted compromises that don't work? Mark, what do you think?

I see no need to reopen this yet again. At this last meeting, we declared a new consensus, to keep the .cast and .then level, dispense with the .accept (previously renamed .resolve) and .chain level, and to rename .cast to .resolve.

This kills the September AP2 consensus, in that we are no longer constraining the .then level's mechanics to be compatible with introducing (whether now or in the future) a .chain level. The September consensus was a compromise, in the "do both" mode that standards committees are tempted by. I have a deeper appreciation of those temptations now, having promoted it at the time. Given that we were trying to do both, Todd's AP2 was a wonderful way to minimize the pain. But I am now proud to see our committee once again rise above the "do both" failure mode.

IMO, the most important argument against "do both" is that it would lead to perpetual confusion, as some libraries are written to the .then style and others are written to the .chain style. With AP2, these would live together as well as possible, but that's still not well.

Regarding the comments in this thread, I haven't engaged since I have not seen any new points raised. All the old arguments are publicly archived. If you're about to repost a previously posted argument, please instead just go read the previously posted response. No one convinced anyone of very much that time, and repetition is unlikely to do better.

The one remaining open issue for me is the means of flattening the output side of a .then operation. The end-of-september "consensus" flattened one level, but nominal typing and use of internal properties. The current spec flattens using .then. The change was made to make subclassing more flexible in ways I have yet to appreciate. But since we are no longer trying to co-exist with .chain, now or in the future, I no longer object to using .then for the output flattening of .then. I am fine with the .then spec as currently written, in the context of the rest of our decisions at the end of January meeting.

# Mark S. Miller (7 years ago)

On Wed, Feb 5, 2014 at 6:37 AM, Mark S. Miller <erights at google.com> wrote:

The end-of-september "consensus" flattened one level, but nominal typing and use of internal properties.

Should be

by nominal typing and use of internal properties.

# Quildreen Motta (7 years ago)

I can understand the "you can't have your cake and eat it too" sentiment. Indeed, I do think that having .then() and .chain() in the spec is a mistake.

The only thing that saddens me (and this also applies to some other parts of the spec, such as Object.create) is that .then() is not an orthogonal and composable primitive; it's a multi-method on top of simpler primitives, but as with other parts of JavaScript, the simpler, orthogonal primitives are not available for users to derive more complex functionality from easily. That is, these higher-level primitives provide support for a class of higher-level use cases, but not the necessary basis to support those use cases in general, and this is a bummer for modularity and re-usability.

More so, just following de-facto standards without acknowledging that there might be "bugs" in them you want to fix when they are cheap to fix (i.e., now, rather than after they become standard) really bothers me. Fixing libraries today, especially with a relatively new thing such as Promises/A+, is relatively "cheap."

At any rate, I'm just venting now, and I agree that the spec should only support either one or the other (even if .then() is just a combinator, the addition of it on the specs would lead to unnecessary headache, since it doesn't compose cleanly with other constructs). I am just sad that the specs are once again favouring the higher-level, limited use cases.

# Andreas Rossberg (7 years ago)

I really regret that I couldn't be there on the last day of the meeting, and share the frustration I hear in this thread. I honestly have a hard time viewing this as a "consensus", if for nothing else than that I do not consent. :)

For the record, let me repeat what I wrote in a private conversation with Brendan the other day:

"This destroys the fragile consensus we had built last May, on which my and others' consent to moving forward with the spec work had been based since then. I am deeply concerned how a strategy of (largely) ignoring that consensus and creating a different precedent instead has been successful here.

I don't buy the over-wrapping argument. I doubt it will be an issue in practice, for the same reason that recursive unwrapping is rarely needed in practice. The committee is prematurely optimising for the odd case out that can always be avoided by [careful] design. On the other hand, I'm far more concerned about the cost that the unwrapping and thenable-assimilation machinery imposes on the common case, and which can not be avoided (lacking .chain).

A [separate] library would be dead on arrival, especially since the DOM and other libraries won't use it. It wouldn't even allow you to hygienically wrap them, since you'd always have to go through the tainted operations.

Anyway, we've discussed this briefly in the V8 team and decided to keep the .chain method in V8. We've heard enough voices, from inside and outside Google, who dislike the spec'ed API and much rather use chain. Unfortunately, you cannot polyfill it... "

To add to that, I don't view the "double functionality" as a serious issue either. Chain is the (compositional) primitive, .then is a convenience abstraction on top. We have plenty of those in the lib.

# Brendan Eich (7 years ago)

Paolo Amadini wrote:

I think the conclusion of using two distinct objects for that (with the other named FunctionalPromise, Future, or anything else) resolves the issue. Imperative Promise implementations may use those alternative primitives internally.

+1

# Brendan Eich (7 years ago)

Quildreen Motta wrote:

but as with other parts of JavaScript, the simpler, orthogonal primitives are not available for users to derive more complex functionality from easily.

So true. JS is like a mini-toolkit: a three-tool Swiss Army Knife (functions), with constructor and closure tools as well as the big first-class-function edged tool; and super-caulk (objects), usable in a pinch as adhesive as well as sealant. Kind of what you want when you are traveling light, in a hurry running from zombies, no time to get a proper toolbox.

Part of TC39's work has been decomposing some of the multi-tools into new forms that do one thing well (arrow functions are my favorite in this regard). But it is both hard to justify the effort, and actually a lot of effort, to decompose fully in all cases.

Still I agree with Paolo. If we had functional (even value-like, featureless, then-able only via a special form, as in E) Futures, we could rebuild Promises on them and let Promises remain the library they've always been.

# Ron Buckton (7 years ago)

Perhaps the unwrapping behavior of .then could be specified as an optional argument in the Promise constructor and .resolve methods. The default behavior is the current standard (i.e. .then auto-unwraps), but a different behavior could be specified:

var unwrapPromise1 = Promise.resolve(1);
var unwrapPromise2 = Promise.resolve(unwrapPromise1);
unwrapPromise2.then(value => { 
  assert(value === 1); // unwraps
}) 

var flatPromise1 = Promise.resolve(1, "flat"); // or some other indicator of a monadic promise
var flatPromise2 = Promise.resolve(flatPromise1, "flat"); 
flatPromise2.then(value => { 
  assert(value === flatPromise1);
}) 

var mixedPromise = unwrapPromise2.then(value => flatPromise2);

mixedPromise.then(value => { 
  assert(value === flatPromise1); // mixedPromise's unwrapping stops at flatPromise1 as it starts to unwrap flatPromise2 but stops unwrapping due to the defined behavior. 
  return value;
}).then(value => {
  assert(value === flatPromise1); // Since the previous .then was unwrapped into a "flat" promise, chained .then calls remain "flat". 
});

Basically, unwrapping of .then stops once the monadic behavior is reached. This would work for Promise.all/Promise.race as well since they could respect this behavior. This has the added benefit of specifying Promises with a minimal API surface that auto-unwrap for ES6, but add optional arguments to the constructor and .resolve for ES7 or later to specify this behavior. As far as the Promise consumer is concerned, when they use .then, they will get the result the Promise producer expects them to get (the final underlying value in the common use case, or a possible promise in the monadic case). The API surface area does not change, and monadic promises become an opt-in for those that need the specialized case.

Best, Ron

# Domenic Denicola (7 years ago)

From: es-discuss [mailto:es-discuss-bounces at mozilla.org] On Behalf Of Brendan Eich

Kevin Smith wrote:

So basically, since September, we've seen:

  • A design proposal which achieved "consensus"
  • A spec implementation which completely ignored that "consensus"
  • And now a third design, which also throws out "consensus" and has been implemented by precisely no-one

And all of this over 200 to 300 LOC.

These shenanigans are deeply disturbing.

I agree. I kept my nose out of it until it was front-and-center on the last day of the last TC39 meeting, and in context of threads here that were pushing (I was on side with you) to add .chain.

The design proposal in September did not have consensus, for real. That proposal was changed after the September meeting (Mark has details). The implementation in V8 then added .chain based on an understanding from the May 2013 meeting (multiple, "Rashomon" views of consensus).

There's still some concern about supporting Promise subclassing, too, but I think it can be handled without controversy (I could be wrong). It entails using the public-named methods (in case of overrides) consistently, and not using internal methods or other such shortcuts some of the time. Basically "always use .then".

Like Mark, I've been mostly not engaging due to the lack of original points being raised. But I think it's important not to let Kevin's mischaracterization of the situation stand on the record, especially as it denigrates the hard work done by the many involved parties to come up with a workable promises design that meets all constraints, relegating it to such phrases as "shenanigans" and "200 to 300 LOC."

The evolution of DOM and ES promises, and the related fragile consensuses, has been an involved process.

Going back to the source documents, consider what the May AP2 proto-consensus really is: "Abashed Monadic Promises 2," from Mark's presentation, page 18. Using modern terminology, this consisted of "resolve," "then," "accept," and "flatMap" (monads+promises, for short). The outcome of that meeting was that the DOM would proceed with standardizing the subset of AP2 that it needs, namely "resolve" and "then" (promises, for short).

This resulted in the work at domenic/promises-unwrapping, which was targeted for the DOM. Coming to the September meeting, the document that was distributed beforehand and presented was this revision. Committee members reviewed this document---some beforehand as per the agenda, some seemingly at the meeting---and declared it was a good subset to proceed with, one that would not cause problems if ES7 later wanted to standardize monads+promises on top of the DOM's standard promises.

Dave Herman made the bold suggestion that promises, as they were, would be useful for the module loader API, and that the spec was in good enough shape that we could move it from the DOM into ES6. This was accepted, and as such domenic/promises-unwrapping at the time was declared as September consensus for ES6 promises. My understanding of that meeting was that the September consensus was for ES6 promises, and that everyone in the room understood that. But I have heard conflicting accounts from people who were also in the room, who believed that the September consensus was for ES6 monads+promises. At the same time, others have told me that they understood the September consensus to be for promises, and would not have declared consensus if they were being told to agree on monads+promises. So we can already see how the "consensus" did not have consensus, for real, as Brendan puts it.

That aside, let us consider Kevin's allegations about domenic/promises-unwrapping. At this point we have to say some words about the amount of work that goes into producing a fully functioning spec that respects all the constraints. Let those words be: it is nontrivial.

Thus, it may come as no surprise that the version presented at the September meeting was not perfect. In particular, it had two large related problems:

  • It had not even the most naïve support for subclassing (cf. Array and Date, pre-ES6: the Array.prototype.map problem, the Array.prototype.concat problem, and the (new (class extends Date)).getTime() problem, all solved in ES6)
  • It created a new class of magic-objects with hidden, inaccessible, un-proxyable, internal state (cf. Date, still not solved in ES6)

Both of these stemmed from, essentially, overuse of magic abstract operations and internal slots. This was yet another consensus-blocker, as discussions with several committee members revealed; another such object would never be allowed into ES6. So we had to do some work to reduce the reliance on magic, and move toward the usage of public APIs (viz. then), for accessing the state of other promises.

This resulted in a few design revisions, largely based on the work of Andreas Rossberg in his proto-implementation of monads+promises, by choosing the promises subset and porting it from V8-JS with nominal type-checking to ECMASpeak with structural type-checking. At one point in a long series of discussions---originally prompted by concerns about memory leaks---it was discovered that we could move all of the unwrapping logic into then, instead of putting half of it in resolve and half in then, which made resolve become an "accept" operation in reality.

This is where the spec stands today, with resolve creating unobservable, memory-leaky promises-for-promises so that all the non-monadic behavior can be confined to an over-complicated then algorithm. I welcome the simplification and performance benefits that will come with removing promises-for-promises and thereby simplifying then, and am excited to start producing mostly-red diffs on domenic/promises-unwrapping per the plan.

In practice, none of the revisions made after the September meeting, including the latest ones decided on at the January meeting, have any practical consequence for the day-to-day usage of new Promise() plus promise.then(...). They were observable changes, in the sense that you could send a specifically-crafted proxy through the algorithms, or override and instrument Promise.prototype.then appropriately, in order to discover them. And of course, they had an immensely positive impact on anyone who wanted to subclass Promise, or use AOP around promise methods, or use proxies in conjunction with promises, or use promises without skyrocketing memory usage, or even just wanted a predictable behavioral model for how promises interacted with themselves. But they did not impact "normal" usage.
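Concretely, that observability works because internal algorithms like Promise.all go through the public then. A minimal sketch with standard ES promises (variable names illustrative):

```javascript
// Instrument Promise.prototype.then and watch Promise.all use it internally.
const calls = [];
const originalThen = Promise.prototype.then;
Promise.prototype.then = function (...args) {
  calls.push('then');                      // record every invocation
  return originalThen.apply(this, args);
};

const done = Promise.all([Promise.resolve(1)]).then(([v]) => {
  Promise.prototype.then = originalThen;   // restore the original
  // calls.length is at least 2: one call made internally by Promise.all
  // on its element, plus our own .then above
  return { thenCalls: calls.length, value: v };
});
```

Only this kind of specifically-crafted instrumentation (or a proxy sent through the algorithms) could detect the post-September revisions; ordinary new Promise() plus .then(...) code could not.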

The amount of heat Kevin has piled on to promises-unwrapping, repeatedly, has been puzzling and unwarranted, but I have chosen mostly to set it aside, as the words of someone who did not care to understand the whole history and design constraints involved in writing this spec. The dismissive tone of the above confirms to me that this was the right approach. But I think it's important to outline, for the record, what has really been going on here, so that observers understand that no decisions were made lightly; no consensus was ignored; and the evolution of promises-unwrapping has been a careful, deliberate, and collaborative process.

# Brendan Eich (7 years ago)

Domenic Denicola wrote:

The amount of heat Kevin has piled on to promises-unwrapping, repeatedly, has been puzzling and unwarranted, but I have chosen mostly to set it aside, as the words of someone who did not care to understand the whole history and design constraints involved in writing this spec.

Apologies for seeming to side with Kevin's "shenanigans"; that's the wrong word. I shouldn't have cited it without noting how it usually implies bad intent. What I was reacting to, as perhaps Kevin is as well, is the difficulty in getting something seemingly simple (as library code) standardized.

And you're right, there was a lot of hard work to get to where we are. Thanks for all you've done. I am in no position to complain, having stayed away from the long promises threads over the last many months, mostly.

But (and I fell for this too, no excuses) combining two conflicting APIs into one proposal was always a mistake.

Andreas R. argues we can live with the over-wrapping in resolve, and seems to argue that my higher priority points (1, committee "do both" error; 2, .all etc. use .then not .chain) are either not problems or minor enough to live with.

Ron Buckton just proposed a "do both" solution that is at least properly parameterized in the constructor, so combinators (2) work. But it's still a stinking optional state flag that forks runtime semantics and API reasoning chains (1), creating bug habitat and probably pleasing no one much.

My "realism" argument seems strong, and your summary backs it: DOM was only using .then and .resolve, that's all they needed. That's all library Promise users use, in general (please correct me if I'm wrong). That is what Promises "are" in a real sense that constrains our standards efforts for ES6 and the DOM.

HTH, and thanks again for all your work.

# Andreas Rossberg (7 years ago)

On 5 February 2014 18:35, Domenic Denicola <domenic at domenicdenicola.com> wrote:

The evolution of DOM and ES promises, and the related fragile consensuses, has been an involved process.

Going back to the source documents, consider what the May AP2 proto-consensus really is: "Abashed Monadic Promises 2," from [Mark's presentation][1], page 18. Using modern terminology, this consisted of "resolve," "then," "accept," and "flatMap" (monads+promises, for short). The outcome of that meeting was that the DOM would proceed with standardizing the subset of AP2 that it needs, namely "resolve" and "then" (promises, for short).

Just to be clear, that is not my recollection at all.

I'm still puzzled where this interpretation of the consensus came in. I was present in both respective meetings and don't remember this ever being said. Nor does anybody else I asked so far. It's not in the notes either. OTOH, I know that several participants have been under the assumption that the consensus was to include all of AP2 in ES6. That includes Mark, btw.

Sorry to dwell on this, but this misunderstanding is what I find particularly irritating.

# Sam Tobin-Hochstadt (7 years ago)

On Wed, Feb 5, 2014 at 1:34 PM, Andreas Rossberg <rossberg at google.com> wrote:

Just to be clear, that is not my recollection at all.

For whatever the history is worth, this is also not my recollection of the consensus in May. In particular, in May my memory is that we decided on a single constructor (ie, not both of "resolve" and "accept"), and that there wasn't any discussion or consensus for DOM to do something else.

# Brendan Eich (7 years ago)

Sam Tobin-Hochstadt wrote:

Just to be clear, that is not my recollection at all.

For whatever the history is worth, this is also not my recollection of the consensus in May.

Rashomon.

Who is the Woodcutter?

# Mark S. Miller (7 years ago)

On Wed, Feb 5, 2014 at 10:34 AM, Andreas Rossberg <rossberg at google.com>wrote:

Sorry to dwell on this, but this misunderstanding is what I find particularly irritating.

I don't want to dwell on this either. Different people do indeed have different memories of both of the crucial meetings. And AFAIK, the notes do not adequately resolve the memory disputes. But just for the record, my memory regarding the memory-conflict issues is:

At the end of May mtg, the consensus compromise was actually AP3. This rapidly fell apart due to discussion of its consequences on es-discuss, where AP2 was (correctly IMO) seen as strictly superior to AP3.

At the end of Sept mtg, my memory of the state on entry to the meeting agrees completely with Domenic's. On exit, my memory is

a) We had agreed to promote both the .then level and the .chain level to ES6. (This is probably the biggest disagreement among the memories of the attendees.)

b) At the .then level, we agreed essentially to promises-unwrapping as it was at the time, which did one-level unwrapping of the output side of .then by use of internal properties. (Or what Domenic now characterizes as "by magic".)

c) Domenic and Allen had talked about subclassing, and Domenic came up with a nice subclassing proposal that kept this "by magic" unwrapping.

After the September mtg

Yehuda did not attend the end-of-Sept mtg, and afterwards rejected the "by magic" unwrapping as hostile to subclassing goals I have yet to understand. Domenic responded by changing the output unwrapping of .then to use .then, which, as Kevin correctly points out, broke the AP2 consensus.

However, at my urging, Domenic initiated a private thread with the main .chain level advocates, including IIRC Andreas, Sam, and Tab, to see if any objected to the switch to .then doing unwrapping by use of .then. I was privately shocked that none did. At this point, perhaps I did the community a disservice by staying silent, and not pointing out more forcefully to the .chain advocates why they should object to this switch. But since

  • they were not objecting,
  • the switch only harmed properties that they care about and none that I care about,
  • we knew of no way to restore the AP2 consensus and also achieve Yehuda's subclassing goals

I did stay silent until this problem was independently noticed by Kevin and brought to my attention and (at my urging) Domenic's attention as well.

Kevin suggested a good way to meet all goals simultaneously. If we were still trying to do AP2, I think it would be worth considering:

  • On the output side of .then, still use some kind of nominalish type test to recognize whether the callback has returned a promise.
  • If it has, rather than use so-called "magic" to unwrap it one level, use that Promise's .chain.

By calling the returned promise's .chain rather than .then, we preserve the one-level unwrapping of AP2. By calling its .chain rather than accessing its internal properties, we preserve the subclassing flexibility Yehuda wants. If we were still going to do the standards committee "do both" failure mode, Kevin's suggestion is the best way I've seen to reconcile all the conflicting demands.

At Thursday of the end-of-January meeting, everyone there including the .chain advocates agreed to avoid the do-both compromise, and just adopt the .then level. This was an excellent decision, and one much better than I thought could be achieved.

THE BIGGER ISSUE

Unfortunately, just as Yehuda was not there in September and did not agree to the consensus then, Andreas was not there on Thursday of January and (above in this thread) does not agree to that consensus. This indicates a different failure mode we should be concerned about. Not everyone is at every meeting. If anyone absent from any meeting can veto the hard-won consensus achieved by those who did attend, it is hard to see how to make progress. OTOH, without this dynamic, we might proceed despite strongly held objections, which really is not consensus. Only our insistence on true consensus has saved us from prior disasters like ES4. I really don't know what to do about this.

# Tab Atkins Jr. (7 years ago)

On Wed, Feb 5, 2014 at 11:17 AM, Mark S. Miller <erights at google.com> wrote:

Unfortunately, just as Yehuda was not there in September and did not agree to the consensus then, Andreas was not there on Thursday of January and (above in this thread) does not agree to that consensus. This indicates a different failure mode we should be concerned about. Not everyone is at every meeting. If anyone absent from any meeting can veto the hard-won consensus achieved by those who did attend, it is hard to see how to make progress. OTOH, without this dynamic, we might proceed despite strongly held objections, which really is not consensus. Only our insistence on true consensus has saved us from prior disasters like ES4. I really don't know what to do about this.

The W3C treats in-person consensus as only a weak consensus (at least, in most working groups, most of the time). Async consensus is the only one that's worthwhile, because it avoids problems like this, plus allows people more time to think through issues than they might get when put on the spot during a meeting.

Note, though, that you can still have consensus and strong objections. Design-by-committee is still a failure mode to be avoided.

# Brendan Eich (7 years ago)

Tab Atkins Jr. wrote:

Note, though, that you can still have consensus and strong objections. Design-by-committee is still a failure mode to be avoided.

Excellent point. Argues against "do both". Can't ditch .then/resolve given library code interop constraint. That forces the conclusion from last week's meeting.

What do you think at this point? My Futures cereal next year will be delicious and nutritious!

# Tab Atkins Jr. (7 years ago)

On Wed, Feb 5, 2014 at 12:17 PM, Brendan Eich <brendan at mozilla.com> wrote:

Tab Atkins Jr. wrote:

Note, though, that you can still have consensus and strong objections. Design-by-committee is still a failure mode to be avoided.

Excellent point. Argues against "do both". Can't ditch .then/resolve given library code interop constraint. That forces the conclusion from last week's meeting.

What do you think at this point? My Futures cereal next year will be delicious and nutritious!

I still think it's a mistake, and though most APIs won't care as they won't produce nested promises, some types of APIs will be negatively affected in hard-to-workaround ways.

But I also respect champion-based design, and I've talked with Alex Russell who believes it's not as huge an impediment to some of our planned APIs as I believed. (Fixing it does involve magic, or subclassing, but those are things that can be done internally, rather than exposed to authors as terrible API.)

# Brendan Eich (7 years ago)

Tab Atkins Jr. wrote:

(Fixing it does involve magic, or subclassing, but those are things that can be done internally, rather than exposed to authors as terrible API.)

Could you say a bit more? Do you mean APIs that defensively "hand-box" so the user doesn't have to, with async map and its overloaded get return value?

# Tab Atkins Jr. (7 years ago)

On Wed, Feb 5, 2014 at 12:34 PM, Brendan Eich <brendan at mozilla.com> wrote:

Tab Atkins Jr. wrote:

(Fixing it does involve magic, or subclassing, but those are things that can be done internally, rather than exposed to authors as terrible API.)

Could you say a bit more? Do you mean APIs that defensively "hand-box" so the user doesn't have to, with async map and its overloaded get return value?

So, to get a bit more concrete, the ServiceWorker spec introduces Cache objects, which are asynchronous maps. When you get/set/etc anything, you get back a promise for the result.

ServiceWorker also has Response objects, which are what you're supposed to be putting into the Caches. Some parts of the API return Response objects directly, some return promises for Responses. So sometimes you'll be storing promises in the Cache.

The APIs for Caches and dealing with responses are designed so that they work nicely together, acting the same way whether you pass a Response or a Promise<Response>. So you can often respond to requests with a simple "e.respondWith(cache.get('foo'))" and it Just Works®.

So, that part's not affected by the promise change, because the whole point is that this part of the API unwraps things until it hits a Response.

But say you wanted to do something a bit more complicated. For example, if the cache hits, but the result is a Promise, you might want to show a throbber until the Promise resolves. Under the old consensus, you could do this pretty easily - just do "cache.get().chain(...)" and then operate on the result. Under the new, you can't, because the cache promise won't resolve until after the response promise resolves. We can't say "just hand-box", because it breaks the easy pattern above when you don't want to do anything complicated - you have to .then() it and unbox there.

There are various ways to work around this. One way is to auto-handbox: we subclass Promise to CachePromise and hang a .chain() off of it which receives the actual value you stored in the map, while .then() silently unboxes and continues flattening as normal. Another way is to just guarantee that Cache promises never hand promises to their callbacks, and ensure that there are alternate ways to check if a Response is ready yet (maybe just forcing people to more explicitly track the Response promises directly, and make sure they're synced with the stuff in the Cache).

# Brendan Eich (7 years ago)

Tab Atkins Jr. wrote:

The APIs for Caches and dealing with responses are designed so that they work nicely together, acting the same way whether you pass a Response or a Promise<Response>. So you can often respond to requests with a simple "e.respondWith(cache.get('foo'))" and it Just Works®.

Nice.

But say you wanted to do something a bit more complicated. For example, if the cache hits, but the result is a Promise, you might want to show a throbber until the Promise resolves. Under the old consensus, you could do this pretty easily - just do "cache.get().chain(...)" and then operate on the result. Under the new, you can't, because the cache promise won't resolve until after the response promise resolves. We can't say "just hand-box", because it breaks the easy pattern above when you don't want to do anything complicated - you have to .then() it and unbox there.

Yeah, that would suck for the pretty one-liner. But is it really that much more? Write it out and let's see A vs. B.

There are various ways to work around this. One way is to auto-handbox:

Heh, an oxymoron, like "Military Intelligence".

But I know what you mean!

we subclass Promise to CachePromise and hang a .chain() off of it which receives the actual value you stored in the map, while .then() silently unboxes and continues flattening as normal. Another way is to just guarantee that Cache promises never hand promises to their callbacks, and ensure that there are alternate ways to check if a Response is ready yet (maybe just forcing people to more explicitly track the Response promises directly, and make sure they're synced with the stuff in the Cache).

Sure, there are longer ways to go, but they seem to require if-else's or similar. I still would like to see the A-B test.

# Tab Atkins Jr. (7 years ago)

On Wed, Feb 5, 2014 at 3:51 PM, Brendan Eich <brendan at mozilla.com> wrote:

Yeah, that would suck for the pretty one-liner. But is it really that much more? Write it out and let's see A vs. B.

e.respondWith(cache.get('foo').then(x=>x[0]))

...which doesn't look like much until you realize you have to repeat the invocation on every single cache retrieval. (And remember to box with [] on every cache set, or else the retrieval will error out.)
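Spelled out, the hand-boxing trade-off might look like this sketch (BoxedCache and fetchResponse are hypothetical stand-ins for illustration, not the ServiceWorker API; a plain Map simulates the cache):

```javascript
// Box every stored value in a one-element array so then's auto-flattening
// cannot collapse a stored promise.
class BoxedCache {
  #map = new Map();
  set(key, value) { this.#map.set(key, [value]); }          // box on every set
  get(key) { return Promise.resolve(this.#map.get(key)); }  // Promise<[value]>
}

const fetchResponse = () => Promise.resolve('response-body'); // stand-in
const cache = new BoxedCache();
cache.set('foo', fetchResponse()); // may store a still-pending promise

// A: the simple case now pays an unboxing tax on every retrieval:
//    e.respondWith(cache.get('foo').then(box => box[0]))

// B: the complicated case can observe the stored promise unflattened:
cache.get('foo').then(box => {
  if (box[0] instanceof Promise) {
    // e.g. show a throbber until box[0] settles
  }
});
```

Note that in case A, returning box[0] from the then callback flattens it as usual, so respondWith still receives the final Response; only the boxing/unboxing ceremony is new.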

# Mark S. Miller (7 years ago)

Just noting that with the infix ! syntax proposed for ES7, this would be

e.respondWith(cache.get('foo') ! [0])

# Kevin Smith (7 years ago)

I really don't know what to do about this.

A few facts:

  • Promises have a long history of controversy (I was there on the CommonJS list!)
  • The AP2 design was found acceptable (if not ideal) to all parties, providing us with a path through the controversy
  • Promises were a very late addition to ES6

With these facts at hand, the way forward seems clear:

  • Stick to AP2 like glue.
  • Defer any non-fundamental part of the API which might re-ignite controversy.

The committee failure mode of "do-both" should be stated more precisely as: "do-both at the expense of design coherency". AP2 is, in my opinion, a coherent design and therefore does not represent such a failure mode.

# Yutaka Hirano (7 years ago)

Resolving this would be much appreciated. Gecko recently turned on Promises in our nightly builds, and already branched to our aurora builds. I believe Chrome is in a similar situation.

Chrome has "shipped" Promises, i.e. they are enabled by default on the current stable Chrome. So the situation is more serious for us.

  • Promise.cast is renamed to Promise.resolve (remove old Promise.resolve)
  • Keep then, reject chain (NOT DEFER, reject!)
  • Renaming .cast thus removes the over-wrapping (always-wrap) deoptimization of the old Promise.resolve
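The practical upshot of the rename, sketched (this is the cast semantics the list describes, observable in any implementation that adopts them):

```javascript
// Under cast semantics, resolving an existing genuine promise is the
// identity: the argument passes through instead of being wrapped again.
const p = Promise.resolve(1);
console.log(Promise.resolve(p) === p); // true: no over-wrapping
```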

Sorry for knowing little about ES "consensus", is this the final decision? Will you change it again?

Thanks,

# Anne van Kesteren (7 years ago)

On Fri, Feb 7, 2014 at 12:45 PM, Yutaka Hirano <yhirano at chromium.org> wrote:

Sorry for knowing little about ES "consensus", is this the final decision? Will you change it again?

Yeah, some clarity would be nice.

I filed bugzilla.mozilla.org/show_bug.cgi?id=966348 on Gecko when the news broke. Mozilla can probably still get things in order before promises hit stable.

# Andreas Rossberg (7 years ago)

On 5 February 2014 11:17, Mark S. Miller <erights at google.com> wrote:

At the end of Sept mtg, my memory of the state on entry to the meeting agrees completely with Domenic's. On exit, my memory is

a) We had agreed to promote both the .then level and the .chain level to ES6. (This is probably the biggest disagreement among the memory of the attendees.)

FWIW, I agree, but it's clearly at odds with Domenic's interpretation of that meeting.

b) At the .then level, we agreed essentially to promises-unwrapping as it was at the time, which did one-level unwrapping of the output side of .then by use of internal properties. (Or what Domenic now characterizes as "by magic".)

c) Domenic and Allen had talked about subclassing, and Domenic came up with a nice subclassing proposal that kept this "by magic" unwrapping.

After the September mtg

Yehuda did not attend the end-of-Sept mtg, and afterwards rejected the "by magic" unwrapping as hostile to subclassing goals I have yet to understand. Domenic responded by changing the output unwrapping of .then to use .then, which, as Kevin correctly points out, broke the AP2 consensus.

However, at my urging, Domenic initiated a private thread with the main .chain level advocates, including IIRC Andreas, Sam, and Tab, to see if any objected to the switch to .then doing unwrapping by use of .then. I was privately shocked that none did. At this point, perhaps I did the community a disservice by staying silent, and not pointing out more forcefully to the .chain advocates why they should object to this switch. But since

  • they were not objecting,
  • the switch only harmed properties that they care about and none that I care about,
  • we knew of no way to restore the AP2 consensus and also achieve Yehuda's subclassing goals

I did stay silent until this problem was independently noticed by Kevin and brought to my attention and (at my urging) Domenic's attention as well.

To clarify, I may not have reacted directly (probably a mistake, but I considered .then to be too big a bag of magic anyway to really care about it per se, as long as flatMap/.chain was available as a clean and well-understood primitive, which I assumed would be the case -- as did Sam and Tab, I suppose).

However, this question actually triggered me to have a closer look at the state of the spec (again), the outcome of which was my proposal for a refactoring. That refactoring based .then on .chain and decoupled it from the core logic, thus avoiding the problems you seem to be alluding to, while still allowing subclassing. Domenic adopted much of that refactoring for the spec, but left out the central part, namely .chain and its use, because he assumed it wouldn't be relevant for ES6.

THE BIGGER ISSUE

Unfortunately, just as Yehuda was not there in September and did not agree to the consensus then, Andreas was not there on Thursday of January and (above in this thread) does not agree to that consensus. This indicates a different failure mode we should be concerned about. Not everyone is at every meeting. If anyone absent from any meeting can veto the hard-won consensus achieved by those who did attend, it is hard to see how to make progress. OTOH, without this dynamic, we might proceed despite strongly held objections, which really is not consensus. Only our insistence on true consensus has saved us from prior disasters like ES4. I really don't know what to do about this.

I agree that this is a general issue moving forward. We actually had several occasions where consensus flipped forth and back between meetings, which is a problem, especially when implementers want to move ahead with features.

# Kevin Smith (7 years ago)

However, this question actually triggered me to have a closer look at the state of the spec (again), the outcome of which was my proposal for a refactoring. That refactoring based .then on .chain and pulled it from any of the core logic, thus avoiding the problems you seem to be alluding to, while still allowing subclassing.

For reference (since it was mentioned), my counter-proposal is here:

goo.gl/3XzX9R

and is basically just a minimization of Andreas's work. The only substantial difference is that Promise.defer is used as the subclass "create a default tuple" contract. (And less substantially, nominal type-testing is performed using a built-in symbol, rather than using internal slots.)

I agree that this is a general issue moving forward. We actually had several occasions where consensus flipped forth and back between meetings, which is a problem, especially when implementers want to move ahead with features.

If I may interject some opinions on process...

I'm not sure how this fits in with the new post-ES6 process, but from this one outsider's point of view:

  • A formal set of design goals should be completed and agreed upon before design work is begun. The design goals form the basis of measurement and the champion(s) must be bound to them at all times. Any changes to the design goal document must be carefully reviewed in order to prevent churn. A detailed change-log is an obvious requirement.

  • A working implementation should be created and solutions to real-world use cases should be programmed using the design before any spec language is authored. Spec-language is a poor medium for communicating both design intent and programming intent.

On a side note: it seems to me that the existence of the design champion, who by definition is deeply invested in the design process, implies the existence of its dual: the anti-champion, who is detached from the details of the design work and provides a vital holistic perspective. In a hierarchical organization this role would be taken on by the chief architect, I suppose, but in a committee you don't have one of those. : ) I feel that I have stepped into this role on occasion, much to the chagrin of various champions (apologies to Sam, Dave, and Domenic for any and all heat - nothing personal of course).

# Jonathan Bond-Caron (7 years ago)

On Fri Feb 7 06:50 AM, Anne van Kesteren wrote:

On Fri, Feb 7, 2014 at 12:45 PM, Yutaka Hirano <yhirano at chromium.org> wrote:

Sorry for knowing little about ES "consensus", is this the final decision? Will you change it again?

Yeah, some clarity would be nice.

From a user perspective, can someone explain what chain() does?

Recently hit an issue with Q.all() vs serial execution of promises: jbondc/cordova-plugman/commit/ad2c74b3344c8899e8ede12827f7ca4637a01b6f#diff-9f06c51911da08ebd6400ff77d629c1eR309

chain() isn't mentioned here: strawman:concurrency

Promise.all() is included in the spec, but that's not the case for serial execution: people.mozilla.org/~jorendorff/es6-draft.html#sec-promise.all-resolve-element-functions

What seems like a preferred api / pattern from an ecma PoV to serially execute promises?

e.g. Promise.serial(), Promise.ordered(), Promise.follow()

Is it a common enough use case to be included in the spec?

# Domenic Denicola (7 years ago)

From: es-discuss <es-discuss-bounces at mozilla.org> on behalf of Jonathan Bond-Caron <jbondc at gdesolutions.com>

What seems like a preferred api / pattern from an ecma PoV to serially execute promises?

Promise.prototype.then

kriskowal/q#sequences
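The idiom behind that answer, sketched with illustrative names: each task is a function returning a promise, and reduce chains them through then so each task starts only after the previous one settles.

```javascript
// Run promise-returning tasks serially, collecting results in order.
function runSerially(tasks) {
  return tasks.reduce(
    (prev, task) => prev.then(results =>
      task().then(result => [...results, result])),
    Promise.resolve([]));
}

runSerially([
  () => Promise.resolve('a'),
  () => Promise.resolve('b'),
]).then(results => console.log(results)); // logs [ 'a', 'b' ]
```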

# Tab Atkins Jr. (7 years ago)

On Fri, Feb 7, 2014 at 8:20 AM, Jonathan Bond-Caron <jbondc at gdesolutions.com> wrote:

From a user perspective, can someone explain what chain() does?

.chain returns what is directly inside the promise, without doing any additional magic. This is different from .then, which "flattens" promises.

For a more concrete example, assume that "x" is a Promise<Promise<integer>>. That is, it's a promise, holding a promise, holding an integer. With x.chain(arg => ...), arg is a Promise<integer>; with x.then(arg => ...), arg is an integer.
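Since .chain never shipped, only the .then half of this contrast is runnable today. A sketch (note that standard resolve/constructor resolution adopts thenables, so the nesting below is collapsed before the callback ever sees it):

```javascript
// then always flattens: even an explicitly nested construction hands the
// callback the innermost value.
const inner = Promise.resolve(7);
const x = new Promise(resolve => resolve(inner)); // attempted Promise<Promise<int>>

x.then(v => {
  console.log(v); // 7 -- flattened; a hypothetical x.chain(cb) would have
                  // called cb with `inner` itself, one level only
});
```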

# Brendan Eich (7 years ago)

Kevin Smith wrote:

If I may interject some opinions on process...

Sure, it's es-discuss. Plus, you've earned your spurs.

I'm not sure how this fits in with the new post-ES6 process, but from this one outsider's point of view:

  • A formal set of design goals should be completed and agreed upon before design work is begun. The design goals form the basis of measurement and the champion(s) must be bound to them at all times. Any changes to the design goal document must be carefully reviewed in order to prevent churn. A detailed change-log is an obvious requirement.

Sounds both good in principle, and bad like waterfall. Waterfall is dead.

Would such goals have helped promises? I bet we would have fought over the goals, at best. At worst, we'd agree to vague goals and end up where we ended up.

  • A working implementation should be created and solutions to real-world use cases should be programmed using the design before any spec language is authored. Spec-language is a poor medium for communicating both design intent and programming intent.

Yes, this.

On a side note: it seems to me that the existence of the design champion, who by definition is deeply invested in the design process, implies the existence of its dual: the anti-champion, who is detached from the details of the design work and provides a vital holistic perspective.

Yes, the advocatus diaboli. We have plenty of those, though. Too many, at this point.

# David Bruant (7 years ago)

Le 07/02/2014 22:05, Brendan Eich a écrit :

Kevin Smith wrote:

  • A working implementation should be created and solutions to real-world use cases should be programmed using the design before any spec language is authored. Spec-language is a poor medium for communicating both design intent and programming intent.

Yes, this.

A working implementation is a lot of work, even a polyfill. But tests. Very recent case in point: www.w3.org/Bugs/Public/show_bug.cgi?id=20701 It was a lot of words in English, lots of HTML5 spec vocabulary with very special and detailed meaning, and I had lost track at some point, even with the spec-y summary by Bobby [1]. But then he created tests, and it was suddenly fairly easy to review [2]. It was fairly easy to point out places that might be under-spec'ed and needed more tests. Tests are an excellent medium to discuss feature design. The current test suite leaves room for interpretation on a corner case? Throw in a new test to disambiguate!

On a side note: it seems to me that the existence of the design champion, who by definition is deeply invested in the design process, implies the existence of its dual: the anti-champion, who is detached from the details of the design work and provides a vital holistic perspective.

Yes, the advocatus diaboli. We have plenty of those, though. Too many, at this point.

woopsy...

David

[1] etherpad.mozilla.org/html5-cross-origin-objects [2] www.w3.org/Bugs/Public/show_bug.cgi?id=20701#c133

# Brendan Eich (7 years ago)

David Bruant wrote:

A working implementation is a lot of work, even a polyfill. But tests. Very recent case in point: www.w3.org/Bugs/Public/show_bug.cgi?id=20701 It was a lot of words in English, lots of HTML5 spec vocabulary with very special and detailed meaning, I had lost track at some point, even with the spec-y summary by Bobby [1]. But then, he created tests and that was suddenly fairly easy to review [2]. It was fairly easy to point places that might be under-spec'ed and needed more tests.

Yeah, a warning to auto-didactic prose-heavy spec authors.

Tests are an excellent medium to discuss feature design. The current test suite leaves room for interpretation on a corner case? Throw in a new test to disambiguate!

Tests++. Until they overspecify, then --. No silver bullets. More tests when in doubt.

# Benjamin (Inglor) Gruenbaum (7 years ago)

A working implementation should be created and solutions to real-world use cases should be programmed using the design before any spec language is authored. Spec-language is a poor medium for communicating both design intent and programming intent.

Yes, this.

I just want to add my two cents on this part here.

A lot of us are following this thread from the outside, unable to keep track of all the meetings, ambiguities, and agreements. Tracking it is hard with all the 'big words' people sometimes throw around and the unclear use cases.

The way 'we' see it is:

  • Promises solve an interesting problem
  • There are a number of promise libraries that are working implementations that solve real world use cases. They have evolved for several years for those use cases.
  • I believe that was a big consideration for adding them to the specification to begin with.
  • Promise libraries could have added some of the methods and alternatives discussed here but didn't.
  • Q is not the only library, nor is it the 'de facto standard'. Kris is very knowledgeable but was not the only one building APIs. There are at least 4 more libraries that do promises (RSVP, When, Bluebird, kew, vow...). .chain, while very interesting, is not very common, to say the least.
  • Tens of thousands (probably more) of people have been using these libraries for years now.

I mean no disrespect to the hard work of the people here. I know everyone who is participating in the discussion here is putting a lot of time and effort to better the language and the community. However, from my "complete" outside it sounds like:

  • There is a working tested solution that evolved through usage and solving real problems over several years.
  • The committee is dictating a lot of changes for it.
  • Changes are based on new design goals and theoretical considerations like "purity". I'm not saying there are no very good arguments for non-recursive assimilation and .chain (there are), but they're still being "dictated" over existing library solutions.

I was under the impression the goal is to adopt what works in practice as much as possible and not dictate solutions from the sky. If someone wants a different API there is nothing preventing them from creating a library and doing that - then coming back with "Hey, our solution works, we have 1000 dependencies on NPM" or "on second thought, that sounded very interesting but people are complaining about usage".

Sorry for the 'rant', will keep following closely.

# Quildreen Motta (7 years ago)

On 8 February 2014 07:37, Benjamin (Inglor) Gruenbaum <inglor at gmail.com>wrote:

  • Promises solve an interesting problem

They do. And that problem is representing the eventual value of asynchronous computations, allowing you to compose them as usual. (there are different approaches to this too)

  • There are a number of promise libraries that are working implementations that solve real world use cases. They have evolved for several years for those use cases.
  • I believe that was a big consideration for adding them to the specification to begin with.
  • Promise libraries could have added some of the methods and alternatives discussed here but didn't.
  • Q is not the only library, nor is it the 'de facto standard'. Kris is very knowledgeable but was not the only one building APIs. There are at least 4 more libraries that do promises (RSVP, When, Bluebird, kew, vow...). .chain, while very interesting, is not very common, to say the least.
  • Tens of thousands (probably more) of people have been using these libraries for years now.

Well, "promises" are a fairly old concept, with several different implementations throughout the history of CompSci, and even in JavaScript land itself. .chain itself is not common because it doesn't need to have that name, but it is a common operation (.then encodes that operation, for example, but you can't control when it gets applied). As for working implementations of .chain-able promises, you can see: folktale/data.future#example, and fantasyland/fantasy-promises, although they are not Promises/A+, it should be easy enough to implement .cast and .then on top of those primitives.

There are three things you want to do with a promise:

  • You want to sequence asynchronous computations, such that you can specify computations that depend on a particular value. This is what .chain does.
  • You want to transform the value of a computation, but the computation itself is synchronous. This is what .map does, and .chain can generalise .map.
  • You want to assimilate the result of a different promise. This is what .cast does.

Unfortunately, Promises/A+ only provide you with a .then method, which does all of those things for you, but you can never control which of those gets applied. Thus, .chain is a more general approach, and .then can be easily derived from .chain. A specification should favour the general use cases to make it easy for people to write more specific computations in userland, instead of constraining the expressive power to particular higher-level use cases. This is what the discussion on Promises right now is about.
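To make that distinction concrete, here is a deliberately tiny, synchronous "Future" sketch. It is illustrative only (the `Future` and `cast` names are mine; there is no asynchrony or error handling, so it is not Promises/A+): `.map` always wraps its result, `.then` assimilates it, and both derive from `.chain`.

```javascript
// Minimal synchronous stand-in for a monadic promise, for illustration only.
function Future(value) { this.value = value; }

// cast: assimilate -- reuse an existing Future, otherwise lift the value.
Future.cast = function (v) {
  return v instanceof Future ? v : new Future(v);
};

// chain: sequence a computation that itself returns a Future.
Future.prototype.chain = function (f) {
  return f(this.value); // f must return a Future
};

// map: transform with a plain function; always wraps, never flattens.
Future.prototype.map = function (f) {
  return this.chain(function (v) { return new Future(f(v)); });
};

// then: the Promises/A+-style operation -- flattens whatever f returns.
Future.prototype.then = function (f) {
  return this.chain(function (v) { return Future.cast(f(v)); });
};

var two = new Future(2);
two.map(function (v) { return new Future(v); });      // a Future of a Future
two.then(function (v) { return new Future(v * 3); }); // a Future of 6
```

Here `.then` and `.map` differ only in whether the callback's result is assimilated; that one-level difference is exactly what the `.chain` proposal makes controllable.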

I mean no disrespect to the hard work of the people here. I know everyone who is participating in the discussion here is putting a lot of time and effort to better the language and the community. However, from my "complete" outside it sounds like:

  • There is a working tested solution that evolved through usage and solving real problems over several years.
  • The committee is dictating a lot of changes for it.
  • Changes are based on new design goals and theoretical considerations like "purity". I'm not saying there are no very good arguments for non-recursive assimilation and .chain (there are), but they're still being "dictated" over existing library solutions.

I was under the impression the goal is to adopt what works in practice as much as possible and not dictate solutions from the sky. If someone wants a different API there is nothing preventing them from creating a library and doing that - then coming back with "Hey, our solution works, we have 1000 dependencies on NPM" or "on second thought, that sounded very interesting but people are complaining about usage".

One of the problems with just following existing solutions is that you'd also be copying the evolution mistakes and technical debts in those solutions. The committee is considering the existing solutions, and the use cases presented by people, which is a good thing. For example, for the "asynchronous map" use case, .then() is not usable short of adding plenty of special cases, and special cases don't compose cleanly. Standards are difficult because what you decide will shape the solutions that people write in the language for decades. Standards evolve slowly, which is why you have fewer chances to recover from mistakes: in a library you can make a mistake and promptly fix it in the next version; this is not true of standards.

# Benjamin (Inglor) Gruenbaum (7 years ago)

On Sat, Feb 8, 2014 at 2:57 PM, Quildreen Motta <quildreen at gmail.com> wrote:

Well, "promises" are a fairly old concept, with several different implementations throughout the history of CompSci, and even in JavaScript land itself. .chain itself is not common because it doesn't need to have that name, but it is a common operation (.then encodes that operation, for example, but you can't control when it gets applied). As for working implementations of .chain-able promises, you can see: folktale/data.future#example, and fantasyland/fantasy-promises, although they are not Promises/A+, it should be easy enough to implement .cast and .then on top of those primitives.

Phew! After that first sentence I thought I was getting a link to Liskov's paper(s) :)

Both those libraries seem to have a low number of users, very few mentions, and almost zero users on npm. I was not implying that it's "impossible" to implement chain, or monadic promises - I was implying that very few people use them in JavaScript, whereas people use the current model extensively in APIs. The debate, from what I can tell, is about standardizing something that exists but is not used nearly as much over something that exists and is used extensively.

There are three things you want to do with a promise:

  • You want to sequence asynchronous computations, such that you can specify computations that depend on a particular value. This is what .chain does.
  • You want to transform the value of a computation, but the computation itself is synchronous. This is what .map does, and .chain can generalise .map.
  • You want to assimilate the result of a different promise. This is what .cast does.

Unfortunately, Promises/A+ only provide you with a .then method, which does all of those things for you, but you can never control which of those gets applied. Thus, .chain is a more general approach, and .then can be easily derived from .chain. A specification should favour the general use cases to make it easy for people to write more specific computations in userland, instead of constraining the expressive power to particular higher-level use cases. This is what the discussion on Promises right now is about.

I'm not saying that .chain is a better or worse proposal here (or what I think at all). I'm talking about programmers in practice - those people in userland that are performing computations you speak of? They had a choice and they chose .then. Promise libraries such as Q (but not only) have been evolving for several years now. The way .then works was subject to change many times before but people in practice have found '.then' to be a simple and working solution.

(For the record, when I think of .map I don't think of a synchronous computation - in fact, I don't think the fact that the computation is synchronous is relevant in a promise chain - but again, this is not about my opinion)

One of the problems with just following existing solutions is that you'd also be copying the evolution mistakes and technical debts in those solutions. The committee is considering the existing solutions, and the use cases presented by people, which is a good thing. For example, for the "asynchronous map" use case, .then() is not usable short of adding plenty of special cases, and special cases don't compose cleanly. Standards are difficult because what you decide will shape the solutions that people write in the language for decades. Standards evolve slowly, which is why you have fewer chances to recover from mistakes: in a library you can make a mistake and promptly fix it in the next version; this is not true of standards.

My problem with that paragraph is that promise libraries have evolved for years. There are several libraries - Bluebird, Q, when, RSVP or any other promise library with a lot of dependent libraries on NPM and more than a thousand stars on GH. They're all open source, and at any point anyone could have:

  • Added 'chain' with a PR, explained why it's better, and convinced any one of those maintainers that it should be added. I've seen this work in promise libraries several times.
  • Opened an issue with convincing real-life use cases and explained them - advocated .chain and gotten people to use it. I've seen this work in promise libraries several times and have done so myself.
  • Forked any one of them (most, if not all, have permissive MIT licenses) and replaced .then with .chain (or added it). It's also no problem to interop with existing A* libraries.
  • Built a new library and gotten users - I personally recently converted over 100 KLOC to use a promise library that didn't exist a year ago.

I'd just like to add that I did not make (at least intentionally) a single technical argument here. It's merely the process I'm concerned about. Choosing APIs that have no extensive use in the ecosystem over ones that do, without first battle-testing them, concerns me.

# Quildreen Motta (7 years ago)

On 8 February 2014 12:26, Benjamin (Inglor) Gruenbaum <inglor at gmail.com>wrote:

My problem with that paragraph is that promise libraries have evolved for years. There are several libraries - Bluebird, Q, when, RSVP or any other promise library with a lot of dependent libraries on NPM and more than a thousand stars on GH. They're all open source, and at any point anyone could have:

  • Added 'chain' with a PR, explained why it's better, and convinced any one of those maintainers that it should be added. I've seen this work in promise libraries several times.
  • Opened an issue with convincing real-life use cases and explained them - advocated .chain and gotten people to use it. I've seen this work in promise libraries several times and have done so myself.
  • Forked any one of them (most, if not all, have permissive MIT licenses) and replaced .then with .chain (or added it). It's also no problem to interop with existing A* libraries.
  • Built a new library and gotten users - I personally recently converted over 100 KLOC to use a promise library that didn't exist a year ago.

I'd just like to add that I did not make (at least intentionally) a single technical argument here. It's merely the process I'm concerned about. Choosing APIs that have no extensive use in the ecosystem over ones that do, without first battle-testing them, concerns me.

It has been discussed at length before (promises-aplus/promises-spec#94), and continued in es-discuss, especially because there are use cases for the standard library that are not covered by .then (Andreas has discussed them before). And again, .then is just a special case of .chain, so the proposal being discussed was to give people both .chain and .then, where .then would be written in terms of .chain.

# Kevin Smith (7 years ago)

Sounds both good in principle, and bad like waterfall. Waterfall is dead.

Strike the waterfall part. Still, we end up with three artifacts: a design goals document, a prototype, and a spec. The design goals should be the first to stabilize, the spec should be the last.

Would such goals have helped promises? I bet we would have fought over the goals, at best. At worst, we'd agree to vague goals and end up where we ended up.

I think it would have helped. A formal set of goals is a measuring stick for the final design. If one of those goals is "conform to AP2", then we can look at the final design and determine whether it, in fact, does conform to AP2.

Furthermore, much of the "meat" of a design follows as a straight-line consequence of a well-stated set of goals. (See that doc I linked to for an example.)

# John Barton (7 years ago)

On Sat, Feb 8, 2014 at 8:11 AM, Kevin Smith <zenparsing at gmail.com> wrote:

Strike the waterfall part. Still, we end up with three artifacts: a design goals document, a prototype, and a spec. The design goals should be the first to stabilize, the spec should be the last.

There is no first or last. There is iteration, feedback, testing, iteration. Design should be swift, prototypes many, spec as a summary.

I think it would have helped. A formal set of goals is a measuring stick for the final design. If one of those goals is "conform to AP2", then we can look at the final design and determine whether it, in fact, does conform to AP2.

Furthermore, much of the "meat" of a design follows as a straight-line consequence of a well-stated set of goals. (See that doc I linked to for an example.)

All straight lines are illusions. Well-stated goals are goals rewritten to explain the results of iteration.

But "re-written", not "ignored". Create goals, build to them, try it, re-create goals.

No pure logic process can defeat reality: the promises discussion is built into the complexity of the problem and the currently available solutions. Hope for a technical breakthrough, plan for a compromise or a tough decision.

# Jonathan Bond-Caron (7 years ago)

Thoughts on adding options/flags to then()? e.g. x.chain equivalent to x.then(promise, {unwrap: false});

Promise.all() could be similarly augmented to support serial execution. The API signatures would become roughly:

Promise.prototype.then ( onFulfilled , onRejected|PromiseOptions , PromiseOptions )
Promise.all ( iterable , PromiseIterateOptions )

Defaults:

PromiseOptions = {
   unwrap: true
}
PromiseIterateOptions extends PromiseOptions {
   serial: false
}

I don't have a convincing use case, but one could pass options to then() during iteration:

Promise.all([a,b,c], {unwrap: false});
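As a sketch of what a serial flag could mean, here is one possible shape. The `allWithOptions` name and the convention that tasks are promise-returning functions are my assumptions, not part of the proposal or any spec (already-created promises are already running, so serial execution has to delay creating them):

```javascript
// Hypothetical sketch only. Tasks are functions returning promises,
// so the serial mode can control when each one starts.
function allWithOptions(tasks, options) {
  if (!(options && options.serial)) {
    // parallel: start everything at once, like Promise.all today
    return Promise.all(tasks.map(function (task) { return task(); }));
  }
  // serial: start each task only after the previous one settles
  return tasks.reduce(function (prev, task) {
    return prev.then(function (results) {
      return task().then(function (value) {
        return results.concat([value]);
      });
    });
  }, Promise.resolve([]));
}
```

The reduce builds a chain of .then calls, which is the standard way to sequence promise-producing functions without extra flags.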
# Brendan Eich (7 years ago)

Jonathan Bond-Caron wrote:

Thoughts on adding options/flags

Just say no.

ariya.ofilabs.com/2011/08/hall-of-api-shame-boolean-trap.html

This is a symptom. We are resisting the larger issue of "do both" or design-by-committee unioning or other such failure modes.

# Jonathan Bond-Caron (7 years ago)

Fair enough, would be nice to see an alternative to: apache/cordova-cli/blob/master/src/hooker.js#L155

If you multiply that pattern across many projects with different 'styles' of writing it, complexity really adds up.

Promises.join([a,b,c]).run().then(...)    // parallel
Promises.join([a,b,c]).setExecution('serial').run().then(...) // serial?

developer.android.com/reference/android/os/AsyncTask.html#SERIAL_EXECUTOR

As I was writing this, I saw the async proposal: doku.php?id=strawman:async_functions&s=async

await would be king for serial execution of animations or file system tasks.

It's all a bit confusing but going in an interesting direction,

# Anne van Kesteren (7 years ago)

On Fri, Feb 7, 2014 at 11:50 AM, Anne van Kesteren <annevk at annevk.nl> wrote:

I filed bugzilla.mozilla.org/show_bug.cgi?id=966348 on Gecko when the news broke. Mozilla can probably still make things in order before promises hit stable.

To be clear, we fixed this. And we will be going ahead and shipping promises in Firefox 29. Too many dependencies at this point to hold off longer.

# C. Scott Ananian (7 years ago)

On Fri, Feb 14, 2014 at 6:21 AM, Anne van Kesteren <annevk at annevk.nl> wrote:

To be clear, we fixed this. And we will be going ahead and shipping promises in Firefox 29. Too many dependencies at this point to hold off longer.

Since both Chrome and Firefox have plans to support Promises, feel free to suggest any changes to es6-shim which would improve compatibility. It looks like at the moment the es6-shim implementation is more spec-compliant than either of the shipping implementations? In particular, we support subclasses.

At the moment we don't do any feature tests beyond "does Promise exist in the global environment". For Map and Set we actually do some very specific bug-based tests to work around old shipping implementations that aren't spec-compliant. So I'd also love to hear specifically about any bugs or missing features in your shipping implementation so that we can feature-test and work around them.

Finally, I have started prfun (cscott/prfun) as AFAIK the first "es6-compatible" promise package. That is, it assumes the presence of basic ES6 spec-compliant Promises and concentrates on adding features on top of that, using only the published API. It includes an implementation of Promise.bind using subclasses, in ES5-compatible syntax. It has a rather extensive test suite, borrowed from bluebird, when, and q. Feedback on prfun (including suggestions for better names!) is invited.

# Anne van Kesteren (7 years ago)

It will take a long time before browsers support subclassing in general as far as I can tell.

# C. Scott Ananian (7 years ago)

I'm not talking about the class system in general. I'm talking about the ES5+ code:

  Promise.prototype.bind = function(newThis) {
    var SuperPromise = this._bindSuper || this.constructor || Promise;
    // create a new Promise subclass (this is less cumbersome in es6, sigh)
    var BoundPromise = function(exec) {
      return SuperPromise.call(this, exec);
    };
    Object.setPrototypeOf(BoundPromise, SuperPromise);
    BoundPromise.prototype = Object.create(SuperPromise.prototype);
    BoundPromise.prototype.constructor = BoundPromise;
    BoundPromise.prototype._bindSuper = SuperPromise;

    BoundPromise.prototype.then = (function(superThen) {
      return function(f, r) {
        var ff = f && f.bind(newThis);
        var rr = r && r.bind(newThis);
        return superThen.call(this, ff, rr);
      };
    })(BoundPromise.prototype.then);
    return newThis ? BoundPromise.resolve(this) : SuperPromise.resolve(this);
  };

which is already useful in practice. It would be nice if the browsers would support Promise subclassing of this sort, in the (long?) interim without ES6 class syntax.
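For comparison, the same "bound promise" idea can be expressed with ES6 class syntax. This is a sketch, not prfun's API (the `makeBoundPromise` name is mine); note that Scott's ES5 pattern depends on `SuperPromise.call(this, exec)` working, which shimmed promises allow but native promise constructors reject:

```javascript
// Sketch: a Promise subclass whose callbacks always run with a fixed `this`.
function makeBoundPromise(newThis) {
  return class BoundPromise extends Promise {
    then(onFulfilled, onRejected) {
      return super.then(
        onFulfilled && onFulfilled.bind(newThis),
        onRejected && onRejected.bind(newThis)
      );
    }
  };
}

var ctx = { greeting: 'hello' };
var Bound = makeBoundPromise(ctx);
Bound.resolve('world').then(function (name) {
  return this.greeting + ', ' + name; // `this` is ctx
}).then(function (msg) {
  console.log(msg); // "hello, world"
});
```

Because `super.then` constructs its result through the subclass, every derived promise in the chain keeps the bound `this`.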

# Yehuda Katz (7 years ago)

It will take a long time before browsers support subclassing in general as far as I can tell.

I don't know where you're getting this from. @@create is the way classes and subclassing work in ES6, and any browser that wants to implement class syntax will need to deal with it.

It may not be at the top of anyone's February 2014 hit-list, but I don't get any sense that classes are low priority for browser vendors either.

# Andreas Rossberg (7 years ago)

"Classes" consist of many parts. The @@create hook only is relevant to enable subclassing of built-ins, and classes are useful without it. I think what Anne is alluding to is that for implementations, @@create probably is a pain to implement, and the part with the lowest benefit/cost ratio, and thus not particularly high on anybody's priority list.

# Mark S. Miller (7 years ago)

On Tue, Feb 18, 2014 at 10:29 AM, C. Scott Ananian <cscott at cscott.net>wrote:

Yes, and I'm pointing out that you can subclass built-ins in a compatible manner even without the @@create hook,

How?

# Domenic Denicola (7 years ago)

You can do so in the manner you were always able to in ES5, and you were able to do in ES6 before the introduction of @@create. However, what this means is that you now allow parasitic inheritance again, which @@create was created to disallow.

That is, you could have a MapPromiseWeakMap just by doing

var obj = {};
Map.call(obj);
Promise.call(obj, function () {});
WeakMap.call(obj);

So such a "solution" isn't really maintaining the ES6 semantics, since it allows strictly more things than ES6 does.

# Allen Wirfs-Brock (7 years ago)

No, even if you removed the checks in Map and Promise and WeakMap that prevent them from trying to initialize an object that lacks the appropriate internal slots, it still wouldn't work: obj does not have the internal slots necessary to support the built-in operations upon those objects, and implementations commonly implement such objects in a manner that prevents the dynamic addition of such slots.

The whole purpose of @@create is to allow implementations to allocate the correctly shaped object for built-ins prior to invoking a constructor.

# Mark S. Miller (7 years ago)

Yes. For these same reasons, Date.call(obj) already doesn't work in ES5.
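This is easy to check: ES5 specifies that Date invoked as a function returns a string and ignores its receiver, so a borrowed [[Call]] can never initialize the receiver as a Date:

```javascript
// Date invoked as a function (ES5 15.9.2) returns a string and ignores
// the receiver, so Date.call(obj) cannot turn obj into a Date.
var obj = {};
var result = Date.call(obj);
console.log(typeof result);      // "string"
console.log(typeof obj.getTime); // "undefined": obj gained no Date slots
```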

# Domenic Denicola (7 years ago)

From: Allen Wirfs-Brock <allen at wirfs-brock.com>

No, even if you removed the checks in Map and Promise and WeakMap that prevent them from trying to initialize an object that lacks the appropriate internal slots it still wouldn't work because obj does not have the internal slots necessary to support the built-in operations upon those objects and implementations commonly implement such objects in a manner that prevents the dynamic addition of such slots.

The whole purpose of @@create is to allow implementations to allocate the correctly shaped object for built-ins prior to invoking a constructor.

Well, but you could just move the internal slot initialization into the constructor itself (as Scott has done in his es6-shim promises). Unlike e.g. Array, the objects returned by Promise[@@create] are not exotic objects; they are simply normal objects with some internal slots initialized. So the movement of code between @@create and the constructor does not impact very much.
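A sketch of the approach Domenic describes, with hypothetical names (`ShimPromise`, `_state`; this is not the es6-shim code): the "internal slots" are ordinary properties set in the constructor, which is why an ES5-style subclass that calls `ShimPromise.call(this, exec)` still ends up correctly shaped. Resolution here is synchronous and thenable assimilation is elided, unlike a real shim.

```javascript
// Illustrative only: a toy promise whose "internal slots" are plain
// properties initialized in the constructor rather than in @@create.
function ShimPromise(executor) {
  // what a native implementation would do in Promise[@@create]:
  this._state = 'pending';
  this._value = undefined;
  this._callbacks = [];
  var self = this;
  executor(function resolve(value) {
    if (self._state !== 'pending') return;
    self._state = 'fulfilled';
    self._value = value;
    self._callbacks.forEach(function (cb) { cb(value); });
  }, function reject() { /* elided for brevity */ });
}

ShimPromise.prototype.then = function (onFulfilled) {
  var self = this;
  return new ShimPromise(function (resolve) {
    function run(value) { resolve(onFulfilled ? onFulfilled(value) : value); }
    if (self._state === 'fulfilled') run(self._value);
    else self._callbacks.push(run);
  });
};

// ES5-style subclassing via .call works here precisely because
// initialization lives entirely in the constructor:
function SubPromise(executor) { ShimPromise.call(this, executor); }
SubPromise.prototype = Object.create(ShimPromise.prototype);
SubPromise.prototype.constructor = SubPromise;
```

This is the trade-off the thread is about: the shim's objects are "normal objects with some properties", so .call-based subclassing works, at the cost of also permitting the parasitic patterns @@create was designed to rule out.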

# Brendan Eich (7 years ago)

Domenic Denicola wrote:

Well, but you could just move the internal slot initialization into the constructor itself (as Scott has done in his es6-shim promises). Unlike e.g. Array, the objects returned by Promise[@@create] are not exotic objects; they are simply normal objects with some internal slots initialized. So the movement of code between @@create and the constructor does not impact very much.

This is precisely the contentious issue, though. Should implementations be constrained to make Promises be "normal objects with some internal slots initialized"? If you subclass, you need the super's @@create to allocate the correctly shaped object, never mind setting the internal slots to default (undefined or nominal) values.

Self-hosting a shim is nice, but it can overspecify. ES6 promises are not specified as self-hosted. Internal slots plus subclassability requires @@create.

# Mark S. Miller (7 years ago)

The same rationale would apply to Date, but making this change for Date would be a breaking change. For consistency, let's not pick and choose among builtins this way. We'll have @@create soon enough.

# Domenic Denicola (7 years ago)

Oh, for sure: I was not advocating a change in actual ES6. I was just explaining on Scott's behalf how it was possible to have subclassable polyfills before implementing @@create.

Implementations could, in theory, implement subclassable promises before implementing @@create, via similar semantics. That is, they can choose between being standards non-compliant in one of two ways:

  1. Non-subclassable, because no @@create.

  2. Over-subclassable, because they allow parasitic inheritance.

Eventually they will go for 3, just-plain-subclassable, once @@create is ready. But the question is what they want to do in the meantime.

# Allen Wirfs-Brock (7 years ago)

On Feb 18, 2014, at 11:09 AM, Domenic Denicola wrote:

Well, but you could just move the internal slot initialization into the constructor itself (as Scott has done in his es6-shim promises). Unlike e.g. Array, the objects returned by Promise[@@create] are not exotic objects; they are simply normal objects with some internal slots initialized. So the movement of code between @@create and the constructor does not impact very much.

No, the whole point of @@create was to avoid that. In fact, initially I tried to specify something like that and implementors objected. The problem is that some implementations want to allocate differently shaped object records for built-ins with specific internal slots, and they can't dynamically change the size of a normal object record that lacks those slots. It doesn't have anything to do with exotic objectness (which is about the behavior of the internal MOP methods) but instead about the memory layout of the objects.

This was discussed in detail 2+ years ago and there was consensus that @@create was the way to go. Now it's time to implement it.

It probably isn't appropriate for implementors to unilaterally decide whether or not this is a low priority feature. However, I can understand how they might reach that conclusion if application and framework authors aren't vocal when they consider it to be a high priority feature.

The reason I championed this capability was because I had heard years of complaining from JS developers about not being able to "subclass" Array, Date, etc.

# Mark S. Miller (7 years ago)

On Tue, Feb 18, 2014 at 11:23 AM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:

It probably isn't appropriate for implementors to unilaterally decide whether or not this is a low priority feature.

Hi Allen, I agree with the rest, but I'm puzzled by your statement above. ES6 is large. It has many pieces. Implementors cannot do everything first. TC39 does not state a priority order among ES6 features so implementors must. It has always been thus.

However, I can understand how they might reach that conclusion if application and framework authors aren't vocal when they consider it to be a high priority feature.

It is certainly valid and valuable for the community to provide feedback, to help implementors prioritize. But it is still the implementors that must make these priority order decisions.

# Allen Wirfs-Brock (7 years ago)

Sure, I'm just saying that these is a need to balance cost and utility when making implementation trade-offs and that while implementors should always have a very good understanding of the cost it is up to the eventual users to make it clear if they consider a feature to have high utility.

I'm also a bit concerned that there may be a meme starting that it will be a looooooong time (years, if ever) before @@create is actually implemented, so JS developers really shouldn't count on being able to use it in the foreseeable future. Such a meme could become a self-fulfilling prophecy.

I agree that there is a lot of work to make all of the existing built-ins subclassable. But @@create is a basic part of the ES6 class design and comes into play even when built-ins aren't involved.

My personal opinion is that classes should be a high priority feature and that @@create-based new behavior is an integral part of classes. Hopefully we will start seeing implementations of that soon. I actually consider fixing all the other subclassing issues of the existing built-ins to be a low priority. But it would be nice if new built-ins such as Promises were implemented from the start to support subclassing.

# Boris Zbarsky (7 years ago)

On 2/18/14 2:51 PM, Allen Wirfs-Brock wrote:

I'm also a bit concerned that there may be a meme starting that it will be a looooooong time (years, if ever) before @@create is actually implemented, so JS developers really shouldn't count on being able to use it in the foreseeable future. Such a meme could become a self-fulfilling prophecy.

The longest pole on @@create from my point of view as a DOM implementor is that the setup for it is not quite compatible with how DOM constructors work (and have worked for many a year now) in Trident- and Gecko-based browsers. Specifically, Gecko and Trident allow a DOM constructor without "new" to create a new object of the right type (in ES5 terms, they're implementing a special [[Call]], not special [[Construct]], for DOM objects).

Which means that before we can allow subclassing DOM objects we need to go through things like a deprecation period, use counters, etc, etc, to ensure that we're not breaking existing content that relies on the current behavior.

Maybe we should make DOM constructors behave sort of like the Array constructor, of course...

If we don't, then SpiderMonkey implementing @@create in the abstract will not automatically make the DOM subclassable in Gecko, and the latter cannot possibly take less than 5-6 months (to fix all the code we know depend on the current DOM constructor behavior, add telemetry and warnings, get reliable telemetry results back). That's if we discover that DOM constructors without "new" are not used in the wild. If they are used, add more time for figuring out who's using them and convincing them to stop.

Of course adding an extra internal slot to all DOM objects is not necessarily all that great either... Though maybe we have to do that no matter what to prevent double-initialization.

All this doesn't affect subclassing of things like Date, of course.

# Allen Wirfs-Brock (7 years ago)

On Feb 18, 2014, at 12:44 PM, C. Scott Ananian wrote:

ps. I'm talking about this pattern in particular:

var SubPromise = function(exec) { Promise.call(this, exec); ... };
Object.setPrototypeOf(SubPromise, Promise);
SubPromise.prototype = Object.create(Promise.prototype);
SubPromise.prototype.constructor = SubPromise;

Note that SubPromise inherits Promise[@@create] and so this code will work fine in an ES6 environment. (Blame @domenic, he showed me that this should work!)

Just to be clear, you're saying that the above would work in ES5 with a polyfill implementation of Promise. Right?

It wouldn't work in an implementation where Promise was natively implemented according to the ES6 Promise spec but where 'new SubPromise(exec)' did not include a call to SubPromise.@@create. It might work if the native Promise implementation diverged from the ES spec and allowed Promise(exec) to be equivalent to new Promise(exec) when the this value is not a built-in promise object. But that would essentially be a non-standard extension that probably couldn't subsequently be killed.
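The @@create step being discussed can be sketched in plain JavaScript. This is only an illustrative model of the ES6 draft's `new C(...args)` semantics, not engine code; since `Symbol.create` never actually shipped, a local symbol stands in for @@create, and `FakePromise` is a made-up Promise-like base.

```javascript
// Illustrative model of the ES6 draft's `new C(...args)` semantics.
// Symbol.create never shipped, so a local symbol stands in for @@create.
const atAtCreate = Symbol("@@create");

function construct(C, args) {
  const create = C[atAtCreate];        // looked up via the prototype chain
  const obj = create.call(C);          // allocate, with internal slots set up
  const result = C.apply(obj, args);   // then initialize by calling C
  return (result !== null && typeof result === "object") ? result : obj;
}

// A hypothetical Promise-like base whose @@create allocates internal slots:
function FakePromise(exec) { this.status = "pending"; }
FakePromise[atAtCreate] = function () {
  const obj = Object.create(this.prototype);
  obj.hasPromiseSlots = true;          // stands in for [[PromiseStatus]] etc.
  return obj;
};

// The ES5 subclass pattern from the thread; @@create is inherited through
// the Object.setPrototypeOf(SubPromise, FakePromise) link.
function SubPromise(exec) { FakePromise.call(this, exec); }
Object.setPrototypeOf(SubPromise, FakePromise);
SubPromise.prototype = Object.create(FakePromise.prototype);
SubPromise.prototype.constructor = SubPromise;
```

With this model, `construct(SubPromise, [exec])` yields an object that is an instance of SubPromise, carries the base's internal slots, and has been initialized by the constructor chain, which is exactly the behavior the inherited @@create is supposed to provide.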

The purpose of standards for new features is to make sure that we don't have divergence out of the gate so let's push our implementors to avoid creating such divergence.

# Allen Wirfs-Brock (7 years ago)

On Feb 18, 2014, at 1:12 PM, Boris Zbarsky wrote:

The longest pole on @@create from my point of view as a DOM implementor is that the setup for it is not quite compatible with how DOM constructors work (and have worked for many a year now) in Trident- and Gecko-based browsers. Specifically, Gecko and Trident allow a DOM constructor without "new" to create a new object of the right type (in ES5 terms, they're implementing a special [[Call]], not special [[Construct]], for DOM objects).

Which means that before we can allow subclassing DOM objects we need to go through things like a deprecation period, use counters, etc, etc, to ensure that we're not breaking existing content that relies on the current behavior.

In that case, I don't see why the DOM is a barrier to adding @@create to JS. The normal [[Construct]] behavior is specified to fall back to essentially ES5 behavior if an @@create isn't found. So it sounds to me that the above DOMs would just keep working as-is if @@create+new is implemented as specified by ES6.

The DOM classes wouldn't be subclassable out of the gate, but new things like Promises would be.

Maybe we should make DOM constructors behave sort of like the Array constructor, of course...

Do you really think you can deprecate the existing call-without-new instantiation behavior? In ES6 every legacy constructor that had that behavior is specified to still have that behavior, and always will.

Are you saying that it isn't de-facto standard behavior, and so there is hope of getting rid of it?

If we don't, then SpiderMonkey implementing @@create in the abstract will not automatically make the DOM subclassable in Gecko, and the latter cannot possibly take less than 5-6 months (to fix all the code we know depend on the current DOM constructor behavior, add telemetry and warnings, get reliable telemetry results back). That's if we discover that DOM constructors without "new" are not used in the wild. If they are used, add more time for figuring out who's using them and convincing them to stop.

SpiderMonkey implementing @@create is a prerequisite to making the DOM subclassable. But DOM use of @@create isn't a prerequisite for adding general JS @@create support.

Of course adding an extra internal slot to all DOM objects is not necessarily all that great either... Though maybe we have to do that no matter what to prevent double-initialization.

You don't necessarily need a new slot to use as an initialized flag. In most cases you can probably get away with using a "not yet initialized" token in some existing slot.

All this doesn't affect subclassing of things like Date, of course.

Right, we don't have to pre-boil the ocean in order to start the beach fire...

# Boris Zbarsky (7 years ago)

On 2/18/14 4:34 PM, Allen Wirfs-Brock wrote:

In that case, i don't see why the DOM is a barrier to adding @@create to JS.

It's not, and I'm not quite sure where you thought I said it was....

I was talking about actual uses of subclassing for DOM objects.

Maybe we should make DOM constructors behave sort of like the Array constructor, of course...

Do you really think you can deprecate the existing call-without-new instantiation behavior?

Maybe. Unclear. Right now, WebKit (and Blink) doesn't support calling DOM constructors without "new" and never has. Trident and Gecko do and always (at least as far back as I can test) have.

So we can deprecate it to the extent that web sites don't run different code in different browsers.... Some people are more worried about this than others, of course.

Are you saying that it isn't de-facto standard behavior and so there is hope of getting rid of it?

It's unclear; see above.

You don't necessarily need a new slot to use as an initialized flag. In most cases you can probably get away with using a "not yet initialized" token in some existing slot.

While true, doing that requires per-object-type whack-a-mole that we don't really want to force DOM spec authors into....

# Allen Wirfs-Brock (7 years ago)

I think that could be handled generically in the WebIDL spec. You can specify it as if there was a per object initialized flag. But how that state is actually represented can be left to actual implementations to deal with.
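One generic way to specify "as if there was a per-object initialized flag" without adding a visible slot to every object is side storage. A minimal sketch, assuming a WeakSet (the function name is illustrative, not taken from the WebIDL spec):

```javascript
// Sketch: a generic "already initialized?" check kept in side storage,
// so no extra slot appears on the objects themselves. The name
// markInitialized is illustrative, not from the WebIDL spec.
const initialized = new WeakSet();

function markInitialized(obj) {
  if (initialized.has(obj)) {
    // prevents potentially destructive double initialization
    throw new TypeError("object is already initialized");
  }
  initialized.add(obj);
}
```

Because a WeakSet doesn't keep its members alive, this costs nothing per object beyond the entry itself, and implementations remain free to represent the state however they like.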

# Jason Orendorff (7 years ago)

On Tue, Feb 18, 2014 at 3:34 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

You don't necessarily need a new slot to use as an initialized flag. In most cases you can probably get away with using a "not yet initialized" token in some existing slot.

Mmm. Well, sure, I think we can avoid bloating every Array to accommodate [[ArrayInitialisationState]], by optimizing for the case where it is true. But when you've got information, ultimately it has to be stored, somehow, somewhere.

More to the point, this adds an axis to the configuration space of Array objects. That’s inelegant. It makes new kinds of bugs possible. Now that we're talking about extending the pattern to the whole DOM, it sounds worse than it did a few months ago. :-P

Fight complexity creep!

# Allen Wirfs-Brock (7 years ago)

I believe that Array was the only ES6 built-in where I had to add such an initialization state internal slot. All others already had slots available that could do double duty for this flag. I suspect the same is true for most DOM objects.

Such initialization checks have three purposes:

  1. Preventing methods that depend upon the internal state of built-ins from being applied to uninitialized instances. Array instances and methods actually don't have any such dependencies, and none of its methods check for initialization.
  2. Preventing potentially destructive double initialization. Generally not an issue for Arrays, but see use case 3 below.
  3. Distinguishing calls to constructors for initialization (including super calls to the constructor) from "calling the constructor as a function".
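The first purpose can be illustrated with a hypothetical Counter class (hypothetical because, as noted above, Array itself performs no such check), where a WeakMap stands in for an internal slot:

```javascript
// Hypothetical illustration of purpose 1: a method that depends on
// internal state (a WeakMap stands in for an internal slot) and refuses
// to run on an instance that was never initialized.
const counts = new WeakMap();

function Counter() { counts.set(this, 0); }
Counter.prototype.increment = function () {
  if (!counts.has(this)) {
    throw new TypeError("increment called on an uninitialized Counter");
  }
  counts.set(this, counts.get(this) + 1);
  return counts.get(this);
};
```

Applying `increment` to an object that never went through the constructor fails the check, exactly as a spec-level "is this slot initialized?" test would.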

The last one is the odd case.

We need to be able to do:

class SubArray extends Array {
    constructor (...args) {
        this.foo = whatever;
        super(...args);
    }
}

or even more explicitly

class SubArray extends Array {
    constructor (...args) {
        this.foo = whatever;
        Array.apply(this, args);
    }
}

(for most purposes these do the same thing; in particular, the super call and the Array.apply both "call Array as a function" in exactly the same way)

But ES<6 also says that calling Array as a function allocates and initializes a new Array object. So the challenge was how to extend Array (and other existing built-in or host object constructors) to continue to have their legacy "called as a function" behavior and still work as super-callable initializers. There were a few specific use cases that had to be considered (all of these might exist in existing JS code):

  1. Array(1,2,3,4)  //should create a new Array and initialize it with 4 elements
    

This is easy: the this value is undefined. So the spec says that if the this value passed to the constructor is not an object, it must allocate a new instance and then initialize it.

  2. var myNamespace = {Array: Array};
    myNamespace.Array(1,2,3,4)  //should create a new Array  and initialize it with 4 elements
    

The this value passed to Array will be the myNamespace object. We don't want to initialize it as an Array; instead we need to allocate a new Array instance. So the spec says that if the this value passed to the constructor is an object that is not an Array, it must allocate a new instance and then initialize it. Array objects are identified (in the spec) by the presence of the [[ArrayInitializationState]] internal slot, but could alternatively be identified using some sort of "exotic array object" brand or any other implementation-specific branding of array instances.

  3. Array.prototype.somethingNew = Array;
    var arr = new Array(5,6,7,8);
    arr.somethingNew(1,2,3,4)  //should create a new Array  and initialize it with 4 elements
    

The this value passed to Array is the Array arr, so it is already branded as an Array, but we don't want to reinitialize it and overwrite its elements with the wrong values. So the spec says that if the this value passed to the constructor is branded as an Array and its [[ArrayInitializationState]] is true, the constructor must allocate a new instance and then initialize it. This works because [[ArrayInitializationState]] didn't exist in ES<6. In ES6 the only way to get a handle on an uninitialized array is by directly calling Array[Symbol.create] or by intercepting such an uninitialized instance in a subclass constructor prior to a super call to the Array constructor.
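The three dispatch cases above can be modeled with an illustrative ArrayLike function (not the real Array, and not the spec text), where a WeakSet of uninitialized instances stands in for [[ArrayInitializationState]] being false:

```javascript
// Illustrative model of the three dispatch cases (not the spec algorithm).
// Membership in this WeakSet stands in for [[ArrayInitializationState]]
// being false, i.e. "allocated but not yet initialized".
const uninitializedSet = new WeakSet();

function ArrayLike(...args) {
  let target = this;
  const isUninitialized =
    target instanceof ArrayLike && uninitializedSet.has(target);
  if (!isUninitialized) {
    // Cases 1-3: `this` is not an object we may initialize (undefined/global,
    // some other object like myNamespace, or an already-initialized array),
    // so allocate a fresh instance.
    target = Object.create(ArrayLike.prototype);
  } else {
    // Super-call style: initialize the uninitialized instance in place.
    uninitializedSet.delete(target);
  }
  target.elements = args;   // stands in for defining the array elements
  return target;
}

// Stand-in for Array[Symbol.create]: allocate an uninitialized instance.
ArrayLike.create = function () {
  const obj = Object.create(ArrayLike.prototype);
  uninitializedSet.add(obj);
  return obj;
};
```

With this model, `ArrayLike(1,2,3,4)`, `myNamespace.Array(1,2,3,4)`, and `arr.somethingNew(1,2,3,4)` all allocate fresh instances; only an object produced by `ArrayLike.create` is initialized in place.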

If Array and other constructors didn't have "new is optional" behavior, most of this would be unnecessary. That's why I generally recommend against people trying to define ES6-style class constructors that work without using new. It's too easy to screw up the above cases, and it's not worth the effort for new code.

Overall, I think this is actually a pretty elegant solution, given the legacy compatibility constraints we have to navigate in moving to classes and subclassable built-ins in ES6.

Regarding the DOM, it would clearly be simpler if Gecko (or Trident) DOM constructors required use of new. If you think optional new could be deprecated as non-standard over a reasonable period, that might be a good thing to do prior to making DOM constructors subclassable. But even if that doesn't seem possible, the above pattern is something that can be consistently applied to all the DOM constructors.

# Boris Zbarsky (7 years ago)

On 2/18/14 4:57 PM, Allen Wirfs-Brock wrote:

I think that could be handled generically in the WebIDL spec. You can specify it as if there was a per object initialized flag. But how that state is actually represented can be left to actual implementations to deal with.

Yeah, agreed. I came to the same conclusion earlier today while thinking about how to implement it.

# C. Scott Ananian (7 years ago)

On Tue, Feb 18, 2014 at 11:12 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Just to be clear, you're saying that the above would work in ES5 with a polyfill implementation of Promise. Right?

Yes, it works in the current git HEAD of es6-shim. There are some checks to ensure that SubPromise inherits from Promise, which would also ensure that @@create is inherited (in the future). (See paulmillr/es6-shim#218 for the grungy details.)
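The kind of inheritance check described can be sketched along these lines; the function name and exact tests are assumptions for illustration, not the shim's real code (see the linked issue for that):

```javascript
// Illustrative sketch of a Promise-subclass sanity check of the kind
// es6-shim performs; looksLikeSubclassOf and its exact tests are
// assumptions, not the shim's actual implementation.
function looksLikeSubclassOf(Sub, Base) {
  return typeof Sub === "function" &&
    // static side: statics (and, in the future, @@create) are inherited
    Object.getPrototypeOf(Sub) === Base &&
    // instance side: instances inherit Base.prototype methods
    Object.getPrototypeOf(Sub.prototype) === Base.prototype;
}
```

Checking both the static and the instance prototype chains is what ensures an inherited @@create would be found once engines implement it.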

It wouldn't work in an implementation where Promise was natively implemented according to the ES6 Promise spec but where 'new SubPromise(exec)' did not include a call to SubPromise.@@create.

According to the ES6 spec, new SubPromise(exec) should include a call to SubPromise.@@create, right? As I read the spec, new SubPromise(exec) invokes SubPromise.[[Constructor]](exec), which would then invoke Construct(SubPromise, [exec]). In turn, that would invoke CreateFromConstructor(SubPromise), which would fetch SubPromise.@@create, find the inherited Promise.@@create, and so would invoke AllocatePromise(SubPromise) to create the required [[PromiseStatus]] and other internal slots.

So the code above would work in a fully-compliant ES6 implementation. The question is: what should it do in the weird "not quite ES6, but more than ES5" implementations that are shipping today? This isn't an ES6 spec issue; it's a challenge to implementors to provide forward-compatibility.

# Allen Wirfs-Brock (7 years ago)

On Feb 18, 2014, at 11:38 PM, C. Scott Ananian wrote:

So the code above would work in a fully-compliant ES6 implementation. The question is: what should it do in the weird "not quite ES6, but more than ES5" implementations that are shipping today? This isn't an ES6 spec issue; it's a challenge to implementors to provide forward-compatibility.

Right, that's what I was trying to say. It all depends upon the new operator invoking @@create. It wouldn't work in a partial ES6 implementation where Promise.@@create was missing or where 'new' did not invoke @@create.