Alternative to Promise

# 韩冬 (9 years ago)

ES6 Promise is great; I just want to share my thoughts on dealing with the callback hell issue with a different approach here. I’ve been trying to port the ConT monad from Haskell to JavaScript these days, and after some work I believe we can have a much simpler alternative to Promise. Please read this introduction to my approach:

winterland1989.github.io/Action.js

I’m not saying it’s better than Promise, but it’s much simpler and easier IMO. Any feedback is welcome!

# Tab Atkins Jr. (9 years ago)

On Tue, Sep 29, 2015 at 10:51 PM, 韩冬 <handong05 at meituan.com> wrote:

ES6 Promise is great; I just want to share my thoughts on dealing with the callback hell issue with a different approach here. I’ve been trying to port the ConT monad from Haskell to JavaScript these days, and after some work I believe we can have a much simpler alternative to Promise. Please read this introduction to my approach:

winterland1989.github.io/Action.js

I’m not saying it’s better than Promise, but it’s much simpler and easier IMO. Any feedback is welcome!

Promises already exist and are implemented in most browsers already. They won't be replaced; anything which hopes to occupy a similar niche needs to justify how it is sufficiently useful to be worth having two similar-but-not-identical things in the standard library. There are always going to be decisions that could have been made slightly differently, and which would make things more convenient for particular use-cases, but that doesn't, by itself, justify the cost of adding to the standard library.

Likely anything new will want to fit into the General Theory of Reactivity kriskowal/gtor.

# Andrea Giammarchi (9 years ago)

And moreover, there are already many patterns and new features landing based on them, like async/await, or generators through the swap gist.github.com/kypflug/7556530ff3b5b40c3753#file-async-5-js or the good old async www.promisejs.org/generators/#both, where both approaches provide alternative (or evolved) simplifications on top of promises and callbacks.

TL;DR too late to swim against the current asynchronous current

# 韩冬 (9 years ago)

It seems not a lot of people understand what a Promise is; please read these:

The design of Q: kriskowal/q/tree/v1/design

The difference between Action and Promise: winterland1989/Action.js/wiki/Difference-from-Promise

tl;dr: a Promise uses internal state and a pending-callback queue to save callbacks while it is pending, and once it settles, the resolved value is saved. An Action is just a continuation as in other FP languages: a simple object with a single field pointing to the continuation.
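
Here is a minimal sketch of the idea (not the library’s actual source; error handling and combinators like guard and freeze are left out):

// Minimal illustration: an Action holds a single function that expects a callback.
function Action(action) {
    this.action = action;                 // the only field: a function (cb) -> void
}

// Chain a step onto the continuation; nothing runs yet, we just build a bigger function.
Action.prototype.next = function(f) {
    var self = this;
    return new Action(function(cb) {
        self.action(function(value) {
            var result = f(value);
            if (result instanceof Action) {
                result.action(cb);        // returned an Action: flatten it (monadic bind)
            } else {
                cb(result);               // returned a plain value (functor map)
            }
        });
    });
};

// Supplying the final callback is what actually runs the chain.
Action.prototype.go = function(cb) {
    this.action(cb);
};

// Example: the chain can be run (and re-run) by calling go.
new Action(function(cb) { setTimeout(function() { cb(1); }, 10); })
    .next(function(n) { return n + 1; })
    .go(function(n) { console.log(n); }); // logs 2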

Any questions are welcome.

# 韩冬 (9 years ago)

Yes, I understand it’s too late to revise the Promise design; my random thoughts are:

  1. Why did we put Promise into the language?

Promises are complex state machines; there can be so many variations, and each implementation has different performance characteristics. Which one should be built into the language?

  2. Why didn’t we look for a continuation-based implementation in the first place?

Lisp, Haskell, and even some compile-to-JS languages like Elm already use continuations to solve the callback problem; why didn’t we port them? Now we have invented a complex state-machine-based solution, and a lot of people will be creating these state machines every time they use an async operation, without understanding the cost.

~winter

# Tab Atkins Jr. (9 years ago)

On Wed, Sep 30, 2015 at 5:18 PM, 韩冬 <handong05 at meituan.com> wrote:

Yes, I understand it’s too late to revise the Promise design; my random thoughts are:

  1. Why did we put Promise into the language?

Because it exposes useful functionality. I recommend reading some of the Promise explainers that exist for lots of examples.

Promises are complex state machines; there can be so many variations, and each implementation has different performance characteristics. Which one should be built into the language?

There's only one variation that's standard, and every browser is or will soon be implementing that one.

The "state machine" isn't complex. "unresolved" goes to either "resolved to another promise", "fulfilled", or "rejected". "resolved to another promise" eventually turns into "fulfilled" or "rejected". Or, of course, hangs, which "unresolved" can also do.

  2. Why didn’t we look for a continuation-based implementation in the first place?

Continuations are a different concept, and don't address what we were trying to solve when adopting promises.

Lisp, Haskell, and even some compile-to-JS languages like Elm already use continuations to solve the callback problem; why didn’t we port them? Now we have invented a complex state-machine-based solution, and a lot of people will be creating these state machines every time they use an async operation, without understanding the cost.

Those languages often also have Promises, or Futures, or Tasks, or one of the other closely-related names and concepts.

# 韩冬 (9 years ago)

Now take a look at even a very simple Promise library, Q, from its design document: kriskowal/q/blob/v1/design/q7.js

The "state machine" isn't complex. "unresolved" goes to either "resolved to another promise", "fulfilled", or "rejected". "resolved to another promise" eventually turns into "fulfilled" or "rejected". Or, of course, hangs, which "unresolved" can also do.

With all these state transitions added, it’s quite complex for me to figure out how it works; I have to go back and read the design process from 1 to 7. Should we expect future JS programmers to just use it, or to understand it before using it?

~winter

# 韩冬 (9 years ago)

There's only one variation that's standard, and every browser is or will soon be implementing that one.

How can you say so? Isn’t every Promise library that passes the A+ tests considered standard?

Continuations are a different concept, and don't address what we were trying to solve when adopting promises.

Yes, they’re different; I would like to know what promises solved that continuations didn’t address.

Those languages often also have Promises, or Futures, or Tasks, or one of the other closely-related names and concepts.

They have, but that’s totally different because they use lightweight threads. A Future in Haskell is just an MVar.

# Tab Atkins Jr. (9 years ago)

On Wed, Sep 30, 2015 at 5:46 PM, 韩冬 <handong05 at meituan.com> wrote:

Now take a look at even a very simple Promise library, Q, from its design document: kriskowal/q/blob/v1/design/q7.js

The "state machine" isn't complex. "unresolved" goes to either "resolved to another promise", "fulfilled", or "rejected". "resolved to another promise" eventually turns into "fulfilled" or "rejected". Or, of course, hangs, which "unresolved" can also do.

With all these state transitions added, it’s quite complex for me to figure out how it works; I have to go back and read the design process from 1 to 7. Should we expect future JS programmers to just use it, or to understand it before using it?

If you think what I said above, in your quote, is complex, then I really can't help you.

# Thomas (9 years ago)

There's only one variation that's standard, and every browser is or will soon be implementing that one.

How can you say so? Isn’t every Promise library that passes the A+ tests considered standard?

There is a specific variation of promises in the ECMAScript standard which is compatible with promises/a+. Passing the tests simply means your implementation of promises is compatible.

Continuations are a different concept, and don't address what we were trying to solve when adopting promises.

Yes, they’re different; I would like to know what promises solved that continuations didn’t address.

Those languages often also have Promises, or Futures, or Tasks, or one of the other closely-related names and concepts.

They have, but that’s totally different because they use lightweight threads. A Future in Haskell is just an MVar.

If the filesystem API in node.js starts to use promises then it would be using a background thread, just as it is doing for callbacks at the moment.

# 韩冬 (9 years ago)

Yeah, it seems I do have a hard time figuring out how the state transitions work from the source code. I never manage to keep state transitions in JS clear, especially under async operations.

# Yad Smood (9 years ago)

To be frank, I can't read your doc in just 5 minutes; it's a little obscure to me. Please don't fixate on performance or internal complexity, it's not the bottleneck. The internal implementation of a library may be complex, but that's not what end users care about; most people use a thing before fully understanding how it works. In your opinion, should everyone fully understand the Linux kernel before using it? If everyone did, most of them would find some part of Linux as bad as what you think of promises. And if they took the time to reinvent every so-called not-good-enough part of it, they wouldn't even have time to enjoy a movie, because the world is full of little flaws.

You say your lib is much simpler than promises. Promises/A+ has only one API: then; you have four: go, next, guard, freeze. The purpose of a promise is to let people work happily without knowing its internal state, and it doesn't constrain the type of the error, which gives users freedom. It's easy to see that you add more rules than promises do. I can't say your thoughts are bad or wrong, but the points on which you say promises are bad are just not good enough to persuade me.

And I predict that as you learn more, you will find your Action is still as bad as promises; it may be 10% better, but it's still a bad way to handle the real async world. I've seen a lot of libs like yours, and I created something similar when developing my own promise lib. We need a mind-blowing idea, not a slightly better idea.

# 韩冬 (9 years ago)

Glad to meet you here. Actually the question is not which is faster, or why to use my library or your library, etc., and I didn’t invent "some better idea"; the question is why not check other languages first, when there are nice solutions already there.

# Yad Smood (9 years ago)

actually the question is not which is faster, or why to use my library or your library, etc., and I didn’t invent "some better idea"; the question is why not check other languages first, when there are nice solutions already there.

What is the definition of "check other languages"? Where do you get the courage to demand that every inventor learn everything you've learned before they make decisions? And if that's not enough, in your opinion, why don't you learn all 20 languages I've learned before you start coding? Why don't you read all the books I've read before you start talking? You need to learn to understand others, rather than waving your narcissism around.

We don't need you to tell us this truth: of course we should learn as much as we can before making decisions. Even a child knows it; you are just wasting your time talking about it. No one wants to hear philosophy here; we want to hear the sound of real flying wheels.

# Benjamin Gruenbaum (9 years ago)

Where do you get the courage to demand that every inventor learn everything you've learned before they make decisions?

Can we please keep it civil?

the question is why not check other languages first, when there are nice solutions already there.

Promises are rooted in the 1980s and have been pretty much adopted in every mainstream programming language one way or another:

  • Task - C#
  • Future - Scala
  • Deferred - Python
  • CompletableFuture - Java
  • Future - C++

And so on and so on. The technical committee also includes people who pioneered the concept. Practically everyone on this list knows Haskell, and ConT isn't really anything new to any of us. We can all explore various alternatives that are the continuation monad (blog.sigfpe.com/2008/12/mother-of-all-monads.html) all day long - but JavaScript already has continuations through promises and they are already in the standard so there is absolutely zero chance they'll be "swapped" for something else at this point.

There are about 3 years of discussions to read about the choices and the rationale for why promises behave exactly the way they do, and you're welcome to read those for the specific decisions.

If you're interested in current proposals for other async primitives for the language - there is currently an observable proposal and an async iterator proposal - both solve different issues than promises (multiple values over push/pull) and are currently in design stages.

In general, the list frowns upon people who "plug their library" here - so I suggest that in the future you start your email with the specific problem you want to address and what you do differently. The more concisely you write (and external links aren't that great for this), the better chance you'll have of getting responses from the people involved.

Cheers and good luck, Benjamin

# Forbes Lindesay (9 years ago)

You seem to be picking on Q as the standard promise implementation, but it's actually one of the most complex implementations of a promise and does far more than just implement the standard.

If you want to see how a simpler implementation (which just implements promises/A+) would look, I suggest you read www.promisejs.org/implementing

# 韩冬 (9 years ago)

Great introduction to the history of promises; your suggestions are also very informative, thank you very much!

~winter

# Matthew Phillips (9 years ago)

On Thu, Oct 01, 2015 at 01:57:01PM +0800, Yad Smood wrote:

To be frank, I can't read your doc in just 5 minutes; it's a little obscure to me. Please don't fixate on performance or internal complexity, it's not the bottleneck.

Performance could be a bottleneck in some situations. I'm particularly worried about what the performance implications will be for the Loader spec, with its heavy use of promises. With that spec there will be a minimum of (I think?) 5 promises created for every module, and perhaps many more if custom loaders are used. When you do the math, that's going to wind up being a lot of promises, which always have to go on a task queue, to load non-trivially sized applications, and the performance might be much worse than a non-async continuation-passing API.

I'd like to see this data before I conclude Promises solve all of our continuation needs.

# Andrea Giammarchi (9 years ago)

FWICT Promises and Generators aren't designed at all for best performance, which is why nobody cares much in the IoT world of microcontrollers, where "5 objects" instead of just one callback makes practically no sense (Espruino or Duktape, just to name a few).

Where there is a lot of RAM and a decently fast CPU though, these extra objects are AFAIK usually quickly trashed, so abstraction wins over extreme performance whenever the hosting environment doesn't care about RAM and CPU, and usually developers don't care either.

the performance might be much worse than a non-async continuation-passing API

Naaaaa, events work just fine at "light speed" like they've always done, and they're still non-blocking, which is what brought us here today, together with Node.js and stuff.

You can choose not to lock yourself inside a Promise-only pattern/system (which I hope is not where ES will end up either).

Anyway, I've been writing and testing (and playing with) most JS-capable microcontrollers ... maybe Rick can actually tell you even more about where these things are indeed a performance concern, but I guess he'll agree that regular machines, as well as most modern phones, shouldn't have any trouble.

Just my thoughts, best.

# 韩冬 (9 years ago)

I want to know more about the implementation of Promise. After two days of research, I see there are two different ways of implementing a chain-style control structure: one is based on an internal state machine, which saves the callbacks for a moment and resolves them after the async function finishes; the other is based on continuations, where every node on the chain is a new continuation containing the computation of the chain, a kind of port of the ConT monad from Haskell to JavaScript. I’d like to compare them and understand why the state-machine-based solution eventually won.

Here is my summary:

Pros for state machine based solutions:

  • Auto memoization.
  • Easy semantics.

Cons for state machine based solutions:

  • Bad reusability.
  • Larger overhead.

Pros for continuation based solutions:

  • Good reusability, since continuations are just functions.
  • Lower overhead.

Cons for continuation based solutions:

  • Complex semantics.
  • No memoization (can be done in other ways).
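
To make the memoization rows concrete, here is a small illustration (readDB here is just a stand-in for any async operation): the promise starts its work once and caches the outcome for every later .then, while a bare continuation re-runs the work on every invocation.

// Stand-in async operation, used only for this illustration.
function readDB(key, cb) { setTimeout(function() { cb('value of ' + key); }, 10); }

// State-machine style: the work starts once and the result is cached.
var cached = new Promise(function(resolve) { readDB('test', resolve); });
cached.then(function(v) { console.log('first subscriber:', v); });
cached.then(function(v) { console.log('second subscriber:', v); });  // readDB ran once

// Continuation style: nothing runs until a callback is supplied,
// and every invocation re-runs readDB (no caching, but freely re-runnable).
var readTest = function(cb) { readDB('test', cb); };
readTest(function(v) { console.log('first run:', v); });
readTest(function(v) { console.log('second run:', v); });            // readDB ran twice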

Do you agree with me on this summary? And supposing JavaScript gets multicore support in the future, will the state-machine-based solution be subject to race conditions?

Thanks again for giving me lots of detail about the history; now I need more : )

# Benjamin Gruenbaum (9 years ago)

The state machine solution is not susceptible to race conditions. If you care about the roots of what promises were originally designed for, you might want to start with Mark's work at: www.erights.org/talks/thesis/markm-thesis.pdf

I'm not sure why you'd look at promises as a state machine solution; a promise is really just a proxy for a value. The way an IO monad in Haskell wraps a value in an IO operation, a promise wraps a value in an async operation in JavaScript.

Promises also do two more things - they cache the value and they dispatch then handlers asynchronously. Both of these exist to prevent the race conditions you would otherwise get when subscribing to a promise after it resolves.
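
Both behaviours are easy to see with the built-in Promise:

// 1. The value is cached: a handler attached long after resolution still runs.
var p = Promise.resolve(42);
setTimeout(function() {
    p.then(function(v) { console.log('late subscriber sees', v); }); // 42
}, 100);

// 2. Handlers are dispatched asynchronously, even on an already-resolved promise,
//    so the ordering below is always the same regardless of timing.
console.log('before then');
p.then(function() { console.log('inside then'); });
console.log('after then');
// Output: "before then", "after then", "inside then"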

This is not a complex state machine, it's a very simple unidirectional flow graph where you transition from "Pending" to either "fulfilled" or "rejected" and I can draw a similar "state machine" for callback based solutions (even a ConT port).

A continuation at its core is just a callback that also happens to signal that a function has completed. There is nothing inherently clever about a continuation - a promise is kind of a continuation monad. In fact, if you look at promises, apart from the fact that .then does both map and bind (flatMap here), it is the continuation monad; had the monadic-promises camp won and had we gotten .chain instead of .then (still in Chrome BTW), we could have formally said promises are an instance of the continuation monad.
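
Concretely, .then acts as map when the handler returns a plain value and as bind/flatMap when it returns another promise:

// map: a plain return value is wrapped for the next step.
Promise.resolve(2)
    .then(function(n) { return n + 1; })                      // 2 -> 3
    .then(function(n) { console.log(n); });                   // 3

// bind/flatMap: a returned promise is flattened, never nested.
Promise.resolve(2)
    .then(function(n) { return Promise.resolve(n * 10); })    // no Promise-of-Promise
    .then(function(n) { console.log(n); });                   // 20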

Also note that promises are really fast. If you look at bluebird 3 promises, for instance, creating a promise takes fewer slots than an empty array, and you can create a million of them concurrently without any issues. In fact, oftentimes the way people code uses closures, which tend to be more expensive than creating promises anyway.

# 韩冬 (9 years ago)

A continuation at its core is just a callback that also happens to signal that a function has completed. There is nothing inherently clever about a continuation - a promise is kind of a continuation monad. In fact, if you look at promises, apart from the fact that .then does both map and bind (flatMap here), it is the continuation monad; had the monadic-promises camp won and had we gotten .chain instead of .then (still in Chrome BTW), we could have formally said promises are an instance of the continuation monad.

Yes, I also mention the fact that JavaScript can use instanceof to dynamically choose fmap or >>= here: winterland1989/Action.js/wiki/Difference-from-Promise. The difference is that a Promise can’t be re-entered, while ConT can always be re-entered with runConT.

Promises also do two more things - they cache the value and they dispatch then handlers asynchronously. Both of these exist to prevent the race conditions you would otherwise get when subscribing to a promise after it resolves.

This is exactly where I’m getting puzzled. Suppose we have threads in JavaScript (or whatever runs different things on different cores), and consider the following code:

var p = new Promise(…)

// now on one thread we have
p.then(func1)

// on another thread
p.then(func2)

During the pending stage, both func1 and func2 are going to be pushed into an internal array; shouldn’t this array-resize operation be protected by a lock?

Thanks for your explanations.

~winter

# Andrea Giammarchi (9 years ago)

well

In fact, oftentimes the way people code uses closures, which tend to be more expensive than creating promises anyway.

You pass "closures" to create a promise, a then, and a catch; I'm not sure what you mean here.

The Promise itself cannot be counted without counting the intermediate callbacks needed to make it work, right? Am I misunderstanding that sentence?

# Benjamin Gruenbaum (9 years ago)

On Sat, Oct 3, 2015 at 5:37 PM, 韩冬 <handong05 at meituan.com> wrote:

This is exactly where I’m getting puzzled. Suppose we have threads in JavaScript (or whatever runs different things on different cores), and consider the following code:

We don't have threads in JavaScript, and there is no shared memory in most current environments that support multicore calculations. Those that do have shared memory do not have arbitrary shared memory. If we do choose to support threads in the future, the algorithms in Promise, as well as in many other places in the specification, would have to change drastically. Currently, the specification is not thread-aware in any way. ECMAScript implementations typically deal with concurrency through a single-threaded, non-blocking event loop model, where heavy computation is offloaded to processes or to the platform.

During the pending stage, both func1 and func2 are going to be pushed into an internal array; shouldn’t this array-resize operation be protected by a lock?

If there are multiple threads then yes. As explained above, ECMAScript itself has no locks.

# Benjamin Gruenbaum (9 years ago)

On Sat, Oct 3, 2015 at 6:00 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:

well

In fact, oftentimes the way people code uses closures, which tend to be more expensive than creating promises anyway.

You pass "closures" to create a promise, a then, and a catch; I'm not sure what you mean here.

The Promise itself cannot be counted without counting the intermediate callbacks needed to make it work, right? Am I misunderstanding that sentence?

Apologies, I should have been more precise in my wording. By closures I did not mean anonymous functions. I meant functions that actually need to capture bindings from the outside environment. The cost of functions capturing their environment, at least in some modern engines, is bigger than the cost of allocating a promise.

A lot of the time, people write code like:

doFoo(function(err, data){
    if(err) return callback(err);
    doBar(data, function(err, data2){
        if(err) return callback(err);
        doBaz(data2, function(err, data3){
            if(err) return callback(err);
            callback(process(data3));
        });
    });
});

Here each function needs to reference the outside variables of the function calling it, and doBaz's callback keeps a reference (at least in some current engines) to data although it does not actually need it. This is an optimization engines can make in the future, of course, and not a penalty of callbacks themselves. With promises the code would be written as:

doFoo().then(doBar).then(doBaz);

And no variables would be captured (in current engines). Of course the callback version can also be flattened and written in a way that does not cause extra allocation:


doFoo(function(err, data){
    if(err) return callback(err);
    doBar(data, handleBar);
});
function handleBar(err, data2){
    if(err) return callback(err);
    doBaz(data2, handleBaz);
}
function handleBaz(err, data3){
    if(err) return callback(err);
    callback(process(data3));
}

This is however not how people write their code. So in regular code I find that callbacks in their regular usage are often slower than promises, and generators/coroutines often tend to be faster than both because of the flat writing style.
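
For reference, the generator-driven flat style being referred to looks roughly like this. The runner below is a minimal sketch in the spirit of libraries like co, and doFoo/doBar/doBaz/process are assumed to be the same promise-returning functions used in the .then example above.

// Minimal generator runner (sketch): feeds each yielded promise back into the generator.
function run(genFn) {
    return new Promise(function(resolve, reject) {
        var it = genFn();
        function step(input) {
            var r = it.next(input);
            if (r.done) return resolve(r.value);
            Promise.resolve(r.value).then(step, reject);
        }
        step();
    });
}

run(function* () {
    var data  = yield doFoo();        // flat style: no nesting, no captured callbacks
    var data2 = yield doBar(data);
    var data3 = yield doBaz(data2);
    return process(data3);
});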

I think Babel even performs this flattening as an optimization when it can prove it to be safe.

Here is some (slightly outdated) material about how closures work in v8, I assume you don't need it, but I figured it would be good read for some of the list followers: mrale.ph/blog/2012/09/23/grokking-v8-closures-for-fun.html

# Forbes Lindesay (9 years ago)

There are no problems with reusability that I can think of as a result of the internal state machine. The functions passed to .then are just plain functions, so they are perfectly reusable.

I don't think there is a significant overhead to promises once properly optimised. I don't see how your solution would lead to lower overhead.

The state machine approach won't lead to race conditions in a multi-threaded environment because the promise state machine will always live on a single thread. If JavaScript ever gets shared memory multi-threading it will be carefully restricted to prevent this kind of problem.

By contrast, systems built without a carefully engineered state machine (e.g. thunks and your continuation-based system) tend to lead to race conditions in user-land code. Once people try to write parallel code, or add caching/memoisation, using lazy continuation-based systems, they quickly end up needing to convert them into some kind of eager data structure. This is often done badly (it's very difficult to get right) and can lead to very hard-to-debug race conditions. By contrast, promises are very carefully designed and tested implementations of exactly this functionality.
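
To illustrate one common failure mode with a sketch (fetchUser below is a hypothetical stand-in for any expensive async call): before the first result arrives, every caller kicks off the work again, because a bare continuation has no "pending" state to share. The usual fix is exactly the eager, stateful wrapper described above, i.e. a promise.

// Naive memoisation of a lazy, callback-taking operation: only the finished result is cached.
var cache = {};
function getUserNaive(id, cb) {
    if (cache[id]) return cb(cache[id]);      // hit: fine
    fetchUser(id, function(user) {            // miss: start the work
        cache[id] = user;
        cb(user);
    });
}
// Two callers asking for the same id before the first fetch completes
// both run fetchUser - and subtler variants of this are very hard to debug.

// Caching the eager wrapper instead shares the in-flight work with everyone.
var promiseCache = {};
function getUser(id) {
    if (!promiseCache[id]) {
        promiseCache[id] = new Promise(function(resolve) {
            fetchUser(id, resolve);           // started once, shared by all callers
        });
    }
    return promiseCache[id];
}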

# 韩冬 (9 years ago)

There are no problems with reusability that I can think of as a result of the internal state machine. The functions passed to .then are just plain functions, so they are perfectly reusable.

Sorry, I should be more clear: the reusability I’m referring to is not that of the functions you pass, but of the Promise itself. Suppose we have an async operation readDB; we can construct a web API using Promise like this:

var Action, Promise, action, callback, express, fs, http, promise, testAction;

express = require('express');
http = require('http');
fs = require('fs');
Action = require('action-js');
Promise = require('bluebird');

promise = express();
action = express();
callback = express();

promise.get('/', function(req, res) {
    return new Promise(function(resolve, reject) {
        return fs.readFile('./test', { encoding: 'utf8' }, function(err, data) {
            return resolve(data + new Date().getTime());
        });
    }).then(function(data) {
        return res.send(data);
    });
});

http.createServer(promise).listen(8123);

testAction = new Action(function(cb) {
    return fs.readFile('./test', { encoding: 'utf8' }, function(err, data) {
        return cb(data + new Date().getTime());
    });
});

action.get('/', function(req, res) {
    return testAction.go(function(data) {
        return res.send(data);
    });
});

var express = require('express');
var testApp = express();

testApp.get('/test', function(req, res) {
    new Promise(function(resolve, reject) {
        readDB('test', function(data) {
            resolve(someProcess(data));
        });
    }).then(function(data) {
        res.send(data);
    });
});

So you’re creating a Promise for every request; using my solution you can reuse the Action like this:

var express = require('express');
var testApp = express();

var testAction = new Action(function(cb) {
    return readDB('test', function(err, data) {
        cb(process(data));
    });
});

testApp.get('/test', function(req, res) {
    testAction.go(function(data) {
        res.send(data);
    });
});

Every time you call go, readDB runs again, and you don’t waste time recreating the whole callback chain.

I even benchmarked this situation: Action gets close to raw-callback performance, while Promise pays about a 10~15% penalty depending on the readDB cost (the higher the readDB cost, the lower the relative Promise-creation cost). But I’m not going to attack Promise on performance anymore, since it’s not meaningful without statistics.

The state machine approach won't lead to race conditions in a multi-threaded environment because the promise state machine will always live on a single thread. If JavaScript ever gets shared memory multi-threading it will be carefully restricted to prevent this kind of problem.

No doubt about that.

By contrast, systems built without a carefully engineered state machine (e.g. thunks and your continuation-based system) tend to lead to race conditions in user-land code. Once people try to write parallel code, or add caching/memoisation, using lazy continuation-based systems, they quickly end up needing to convert them into some kind of eager data structure. This is often done badly (it's very difficult to get right) and can lead to very hard-to-debug race conditions. By contrast, promises are very carefully designed and tested implementations of exactly this functionality.

Well, I understand that Promise is JavaScript’s choice, and I think I understand the reason now, but I really don’t think lazy data structures lead to race conditions ; ). Then again, I’m not going to defend lazy data structures without any statistics either.

~winter

# 韩冬 (9 years ago)

Sorry, I messed up the code format in the previous mail; this one should work:

There are no problems with reusability that I can think of as a result of the internal state machine. The functions passed to .then are just plain functions, so they are perfectly reusable.

Sorry, I should be more clear: the reusability I’m referring to is not that of the functions you pass, but of the Promise itself. Suppose we have an async operation readDB; we can construct a web API using Promise like this:

var express = require('express');
var testApp = express();

testApp.get('/test', function(req, res) {
    new Promise(function(resolve, reject) {
        readDB('test', function(data) {
            resolve(someProcess(data));
        });
    }).then(function(data) {
        res.send(data);
    });
});

So you’re creating a Promise for every request; using my solution you can reuse the Action like this:

var express = require('express');
var testApp = express();

var testAction = new Action(function(cb) {
    return readDB('test', function(err, data) {
        cb(process(data));
    });
});

testApp.get('/test', function(req, res) {
    testAction.go(function(data) {
        res.send(data);
    });
});

Every time you call go, readDB runs again, and you don’t waste time recreating the whole callback chain.

I even benchmarked this situation: Action gets close to raw-callback performance, while Promise pays about a 10~15% penalty depending on the readDB cost (the higher the readDB cost, the lower the relative Promise-creation cost). But I’m not going to attack Promise on performance anymore, since it’s not meaningful without statistics.

The state machine approach won't lead to race conditions in a multi-threaded environment because the promise state machine will always live on a single thread. If JavaScript ever gets shared memory multi-threading it will be carefully restricted to prevent this kind of problem.

No doubt about that.

By contrast, systems built without a carefully engineered state machine (e.g. thunks and your continuation-based system) tend to lead to race conditions in user-land code. Once people try to write parallel code, or add caching/memoisation, using lazy continuation-based systems, they quickly end up needing to convert them into some kind of eager data structure. This is often done badly (it's very difficult to get right) and can lead to very hard-to-debug race conditions. By contrast, promises are very carefully designed and tested implementations of exactly this functionality.

Well, I understand that Promise is JavaScript’s choice, and I think I understand the reason now, but I really don’t think lazy data structures lead to race conditions ; ). Then again, I’m not going to defend lazy data structures without any statistics either.

~winter

# Morningstar, Chip (9 years ago)

Sorry to be coming into this discussion a bit late, but I'd like to point out one idea which seems to have gone unmentioned during the furious debate, and which merits keeping in mind as promises come into more widespread use:

My sense is that many in the JS community seem to regard promises principally as an abstraction for dealing with asynchrony. While they certainly are that, the original driving use case for their invention/adoption (when I first encountered them, 25 or so years ago) was as an abstraction for dealing with network latency, and I believe this consideration loomed large in the minds of some of the TC39 committee members who championed the adoption of the ES6 Promise API in its current form. In contrast to many of the closure-based callback mechanisms which have been brought up variously as alternatives to promises or as sugarings or implementations of them, one of the great virtues of promises is that they can be pipelined. In particular, they can be pipelined over the network when invoking operations on remote objects, which means that a sufficiently clever or aggressive implementation can speculatively deliver requests over the network to the result of an earlier request before that result has been determined, potentially short circuiting one and potentially many network round trips. This can yield substantial (several orders of magnitude, in some cases) speedup to some kinds of heavily networked applications. This is easy to lose track of if you're just thinking of promises as a different notation for writing callbacks.
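
A rough sketch of what that can look like (the remote directory object and its invoke method below are hypothetical, in the spirit of systems like E; ordinary ES6 promises do not provide this by themselves):

// Hypothetical remote-object API, for illustration only.
var filePromise   = directory.invoke('open', 'notes.txt');  // request 1
var lengthPromise = filePromise.invoke('size');             // request 2 targets the
                                                            // *promised* result of request 1
lengthPromise.then(function(n) { console.log('file size:', n); });

// A pipelining-aware transport can put both requests on the wire in one round trip,
// because request 2 refers to "the result of request 1" rather than waiting for it.
// With plain callbacks (or ordinary .then chaining) the second request cannot even
// be issued until the first response has arrived: at least two round trips.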

Chip

# 韩冬 (9 years ago)

In particular, they can be pipelined over the network when invoking operations on remote objects, which means that a sufficiently clever or aggressive implementation can speculatively deliver requests over the network to the result of an earlier request before that result has been determined, potentially short-circuiting one and potentially many network round trips.

Can you elaborate on this? Maybe a short code snippet?

~winter