DOM EventStreams (take two on Streams): Request for feedback
Condensing my suggestions for additional variants into one small post:
-
EventStream is an abstraction to represent streams of events as a first-class value. It's lossy, and forgets about its history. (Though we probably want a switch allowing it to at least remember its history before the first listener is attached.) It's multi-listener. This will be used for event-like interfaces that don't need the complexity of DOM Events (because they're not tree-based). I think we should also allow a way to extract an EventStream from any element/event pair - the stream is updated whenever an event successfully reaches the element without being cancelled by an ancestor.
-
UpdateStream is an EventStream specifically focused on watching updates of some given value. When you add a new listener, the stream replays its most recent update to the new listener. It adds a .value() function, which is identical to .next() but also gets the most recent update replayed first. It's multi-listener, and probably also wants the remember-initial-history switch. This is intended for something like my proposal for "UpdateStream.watch(object, property)", which turns any JS property into an event stream of value changes. (This can be built on top of Object.observe, so there's nothing fundamentally new there.)
-
ValueStream is a single-listener non-lossy stream. Only one .listen() callback can be active at a time; trying to call .listen() again before the first is unlistened throws an error. If there's no listener, .next() can be used to pull values out of the stream in succession without loss. Successive calls to .next() return successive values from the stream. (This is unlike EventStream, where multiple .next() calls in the same tick will return equivalent futures for whatever the very next value is.) This is intended for a lot of general use-cases, like pulling tokens out of a token stream.
Have I missed any major use-cases?
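For illustration, a rough usage sketch of the three variants described above. Nothing here exists yet - this is purely the proposed API - and button, source, and tokenStreamFor() are made-up placeholders:

// EventStream: lossy, multi-listener; e.g. wrapping an element/event pair.
var clicks = EventStream.listen(button, "click");      // button: placeholder element
clicks.listen(e => console.log("clicked at", e.timeStamp));

// UpdateStream: replays its most recent update to each new listener.
var status = UpdateStream.watch(document.fonts, "loadStatus");
status.listen(updateLoadingUI);                        // fires with the current value first, then on changes
status.value().then(v => console.log("current status:", v));

// ValueStream: single-listener and non-lossy; successive .next() calls
// pull successive values, so nothing is dropped between pulls.
var tokens = tokenStreamFor(source);                   // tokenStreamFor/source: made-up producer
tokens.next().then(t => console.log("token 1:", t));
tokens.next().then(t => console.log("token 2:", t));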
why is that?
void forEach(mapCB);
If that's to make it consistent with Array#forEach, shouldn't you accept the context argument in both map and forEach too?
However, I don't get why this should not work:
str.forEach(callback).then(notify);
That said, I think it's quite good after some renaming, but you probably need to write some more concrete examples?
P.S.
I'd already had thoughts about a possible
$(obj).on('loadStatus', updateLoadingUI); ^__^
naaaaa, just kidding
On Tue, Apr 16, 2013 at 5:12 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
why is that?
void forEach(mapCB);
If that's to make it consistent with Array#forEach, shouldn't you accept the context argument in both map and forEach too?
Sorry, I forgot to update their signatures together. You're right that it should have the same signature as .map().
However, I don't get why this should not work:
str.forEach(callback).then(notify);
Hm, so .forEach() just returns the same stream? Makes sense to me.
That said, I think it's quite good after some renaming, but you probably need to write some more concrete examples?
I need to point to a few example pages I've already found, actually. The ACM page I link in the discussion of .switch() is very good!
P.S.
I'd already had thoughts about a possible
$(obj).on('loadStatus', updateLoadingUI); ^__^
naaaaa, just kidding
UpdateStream.watch(obj, 'loadStatus').listen(updateLoadingUI);
yep, I was wrapping there already ... ain't needed.
Looking forward to seeing some examples and reading others' opinions.
all the best
On Tue, Apr 16, 2013 at 4:28 PM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
I think these cases will be fairly common, and further, that a good solution to the problem for DOM will be pretty useful for general programming as well.
I agree.
I have a ton of minor feedback and questions. Sorry it's so long. There's more stuff I'd like to reply to but don't have time.
interface EventStreamResolver {
I get the parallel with FutureResolver but this name doesn't work for me. "FutureResolver" makes sense because the whole point of a Future is to be resolved eventually. EventStreams aren't like that: many (most?) want to keep pushing events indefinitely, and only ever become "resolved" by freak accident.
void continueWith(optional any value);
I'm having trouble following the parallel with FutureResolver.resolve() here. What's this for?
This seems like a mixing of layers to me. Here's how I interpret this whole design:
-
Event producers implement an extremely simple interface consisting of a single subscribe() function (the StreamInit callback) that interacts with a small listener interface (EventStreamResolver).
-
EventStream is a concrete class that builds a dazzling array of useful high-level operations on top of the aforementioned low-level protocol. EventStream is what event consumers use in practice.
So—again this is all just how I see it right now—there are two nicely independent things going on: a minimal low-level protocol, and a high-level convenience class. "Obviously" you wouldn't have EventStreams as part of the low-level protocol. Hence my confusion about continueWith.
Separately: it seems like if you're an event producer, and you want to stop sending events to a resolver and have some other event producer send it events instead, you shouldn't have to tell the resolver "please subscribe yourself to that stream over there". You can just subscribe it. Instead of: resolver.continueWith(otherStream); just write: otherStreamInit(resolver);
Then again, you could say exactly the same things about FutureResolver.resolve(), and I don't understand its purpose either. Probably just me.
An EventStream pushes out 0 or more updates, then optionally completes or rejects. The .listen() function is the basic way to respond to an event stream, allowing you to register callbacks for any of those three events. It returns the same event stream back, for chaining.
Mmmm. The interesting thing about Futures is... well, I'll just link to domenic.me/2012/10/14/youre-missing-the-point-of-promises
Not that I really think you're missing the point— but .listen() is only a sink. The combinators in your blog post are the exciting new ability on offer here. EventStreams are cool because they compose.
Like Futures, EventStreams separate the power to read/respond to an event stream and the power to update an event stream into two separate objects. The former is returned by the EventStream constructor, while the latter is passed into the constructor's callback argument.
Rhetorical nit: This explanation makes it seem more complicated than it is. The way to explain and justify the design is to show simple examples. Bacon's readme does this well. You only have to read the first 5 lines of code here to see what Bacon is about:
raimohanska/bacon.js/blob/master/README.md#intro
complete/reject/continueWith all kill the resolver, so that none of the methods work afterwards (maybe they throw?).
That seems sensible. OTOH FutureResolver seems to make all those methods no-ops instead. (Step 1 of each method's implementation: "If the context object's resolved flag is set, terminate these steps.") I'm not sure what that's about. Maybe worth asking.
- It seems that a bunch of manual use-cases would benefit from auto-buffering any updates until the first listener is attached (via .listen() or .next()).
I can imagine that being true, but concrete example use cases would help. It is easier to think of cases where buffering doesn't matter or where you really don't want buffering (e.g. because it chews up a lot of memory that you can never free).
All streams need some way of unlistening. Suggestions welcome as to how best to do this.
Bacon offers two equivalent ways of unsubscribing.
-
Bacon's equivalent of the StreamInit callback returns an unsubscribe function. Each subscriber therefore gets its very own unsubscribe callback.
-
Additionally, Bacon's equivalent of the EventStreamResolver.push() method can return a special value (Bacon.noMore) that means "unsubscribe me".
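A minimal plain-JS sketch of that contract - just the shape of it, not Bacon's actual internals. The producer is a function that takes a sink and returns that subscription's unsubscribe function, and the sink's return value can signal "stop":

var NO_MORE = {};   // stand-in for Bacon.noMore

// Producer: called once per subscription; returns that subscription's
// own unsubscribe function.
function ticker(sink) {
  var id = setInterval(function () {
    if (sink(Date.now()) === NO_MORE) clearInterval(id);
  }, 100);
  return function unsubscribe() { clearInterval(id); };
}

// One consumer stops by returning the sentinel after five events:
var seen = 0;
ticker(function (t) {
  console.log("tick", t);
  if (++seen >= 5) return NO_MORE;
});

// Another stops by calling the unsubscribe function it got back:
var stop = ticker(function (t) { console.log("also ticking", t); });
setTimeout(stop, 1000);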
On Wed, Apr 17, 2013 at 5:50 PM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
On Tue, Apr 16, 2013 at 4:28 PM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
interface EventStreamResolver {
I get the parallel with FutureResolver but this name doesn't work for me. "FutureResolver" makes sense because the whole point of a Future is to be resolved eventually. EventStreams aren't like that: many (most?) want to keep pushing events indefinitely, and only ever become "resolved" by freak accident.
Suggestions welcome.
void continueWith(optional any value);
I'm having trouble following the parallel with FutureResolver.resolve() here. What's this for?
It's the direct equivalent of FutureResolver#resolve.
This seems like a mixing of layers to me. Here's how I interpret this whole design:
- Event producers implement an extremely simple interface consisting of a single subscribe() function (the StreamInit callback) that interacts with a small listener interface (EventStreamResolver).
I don't want to have to write "new EventStream({subscribe: function(r){...}})", if that's what you're thinking.
EventStreamResolver has nothing to do with listening. It's an ocap (object capability) that represents the ability to update the stream.
This is equivalent to Bacon.js's Bacon.fromCallback() method of creation, except that Bacon basically only provides "accept" functionality. (You reject by throwing in the callback, I think.) The resolver just abstracts one level - rather than passing the accept function directly, it passes an object with the accept function on it, plus a few others for convenience. (Streams need at least two functions - one for updating and one for completing.)
- EventStream is a concrete class that builds a dazzling array of useful high-level operations on top of the aforementioned low-level protocol. EventStream is what event consumers use in practice.
So - again this is all just how I see it right now - there are two nicely independent things going on: a minimal low-level protocol, and a high-level convenience class. "Obviously" you wouldn't have EventStreams as part of the low-level protocol. Hence my confusion about continueWith.
Separately: it seems like if you're an event producer, and you want to stop sending events to a resolver and have some other event producer send it events instead, you shouldn't have to tell the resolver "please subscribe yourself to that stream over there". You can just subscribe it. Instead of: resolver.continueWith(otherStream); just write: otherStreamInit(resolver);
Then again, you could say exactly the same things about FutureResolver.resolve(), and I don't understand its purpose either. Probably just me.
FutureResolver#resolve is syntax sugar for saying "just accept if this other future accepts, or reject if it rejects".
In other words, rather than "r.resolve(someOtherFuture)", you could always just write "someOtherFuture.then(r.accept.bind(r), r.reject.bind(r))". "resolve()" is just easier to read and write, and has the additional useful semantics that you can't accidentally update the future after delegating, which might offer some optimization potential for implementations.
Exact same thing for EventStreamResolver#continueWith, except it's three callbacks rather than two.
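Sketching it with the proposal's resolver methods, and assuming a .listen(updateCB, completeCB, rejectCB) argument order (which isn't pinned down anywhere), r.continueWith(otherStream) is roughly:

otherStream.listen(v => r.push(v),
                   v => r.complete(v),
                   e => r.reject(e));
// ...plus the same "the resolver is locked once you've delegated"
// semantics that resolve() gives Futures.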
An EventStream pushes out 0 or more updates, then optionally completes or rejects. The .listen() function is the basic way to respond to an event stream, allowing you to register callbacks for any of those three events. It returns the same event stream back, for chaining.
Mmmm. The interesting thing about Futures is... well, I'll just link to domenic.me/2012/10/14/youre-missing-the-point-of-promises
Not that I really think you're missing the point - but .listen() is only a sink. The combinators in your blog post are the exciting new ability on offer here. EventStreams are cool because they compose.
Note that I've revised this in the newest version, on my blog. .listen() returns a brand new stream, slaved to the original. I need to define that throwing an error in any callback causes the stream to unslave and reject. This maintains the "errors are passed along until someone can deal with them" semantic Domenic brings up.
Like Futures, EventStreams separate the power to read/respond to an event stream and the power to update an event stream into two separate objects. The former is returned by the EventStream constructor, while the latter is passed into the constructor's callback argument.
Rhetorical nit: This explanation makes it seem more complicated than it is. The way to explain and justify the design is to show simple examples. Bacon's readme does this well. You only have to read the first 5 lines of code here to see what Bacon is about:
Actually, the equivalent code in Bacon doesn't appear until the "Creating Streams" section. When I'm writing the real document, it might make sense to split the API descriptions along those lines. ^_^
complete/reject/continueWith all kill the resolver, so that none of the methods work afterwards (maybe they throw?).
That seems sensible. OTOH FutureResolver seems to make all those methods no-ops instead. (Step 1 of each method's implementation: "If the context object's resolved flag is set, terminate these steps.") I'm not sure what that's about. Maybe worth asking.
Hm, we should be consistent. Unsure which is better; I'll ping Anne.
- It seems that a bunch of manual use-cases would benefit from auto-buffering any updates until the first listener is attached (via .listen() or .next()).
I can imagine that being true, but concrete example use cases would help. It is easier to think of cases where buffering doesn't matter or where you really don't want buffering (e.g. because it chews up a lot of memory that you can never free).
In my blog post, I bring up that auto-buffering by default is still easy enough to defeat if you want to - just call it like "(new EventStream(cb)).listen()". The empty listener will still trigger the "being listened to" bit and make it flush its buffer. Unsure if this is too magical to rely on or not.
All streams need some way of unlistening. Suggestions welcome as to how best to do this.
Bacon offers two equivalent ways of unsubscribing.
- Bacon's equivalent of the StreamInit callback returns an unsubscribe function. Each subscriber therefore gets its very own unsubscribe callback.
Ah, that's an interesting idea.
I'm unsure how each subscriber gets its own unsubscribe callback. The init callback is only called once, and so returns only the single value, right?
Or is the subscribe callback called every time someone starts listening, so the stream can potentially act different to different listeners? That seems like it would be hard to make compatible with a multi-listener approach.
- Additionally, Bacon's equivalent of the EventStreamResolver.push() method can return a special value (Bacon.noMore) that means "unsubscribe me".
That just kicks out all the listeners to the stream? Or does it end the stream? Or do you mean something else, given that you use the pronoun "me", which implies it's the listener which somehow sends the signal? If the latter, you're confused about the role of a stream resolver. (I think, based on earlier reactions, you might be, because you talk about "sending events to a resolver". Resolvers generate events, listeners receive them.)
~TJ
On Wed, Apr 17, 2013 at 7:11 PM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
Mmmm. The interesting thing about Futures is... well, I'll just link to domenic.me/2012/10/14/youre-missing-the-point-of-promises
Actually, thanks a bunch for this link. It pointed out an aspect I hadn't consciously considered, but which makes a ton of sense. My current API proposal is not fully consistent with the principles revealed in this post.
I'll do a moderate refactor tonight. Luckily, this means I can come back to .then() being the "primary" API, which is nice due to Futures, while still keeping it as the monadic op as well. ^_^
On Thu, Apr 18, 2013 at 3:11 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
On Wed, Apr 17, 2013 at 5:50 PM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
That seems sensible. OTOH FutureResolver seems to make all those methods no-ops instead. (Step 1 of each method's implementation: "If the context object's resolved flag is set, terminate these steps.") I'm not sure what that's about. Maybe worth asking.
Hm, we should be consistent. Unsure which is better; I'll ping Anne.
They do not throw so the resolver can intentionally race.
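That is, a resolver can be handed to two competing sources, and whichever settles the Future first simply wins; the later call is ignored rather than throwing. A sketch against the Future/FutureResolver shape discussed in this thread (the sources and timings are made up):

var f = new Future(function (r) {
  setTimeout(function () { r.accept("from the cache"); }, 10);
  setTimeout(function () { r.accept("from the network"); }, 50);
});
f.then(function (v) { console.log(v); });   // logs "from the cache" once; the second accept() is a no-op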
(narrowing to the part that seems most productive)
On Wed, Apr 17, 2013 at 9:11 PM, Tab Atkins Jr. wrote:
On Wed, Apr 17, 2013 at 5:50 PM, Jason Orendorff wrote:
Bacon offers two equivalent ways of unsubscribing.
- Bacon's equivalent of the StreamInit callback returns an unsubscribe function. Each subscriber therefore gets its very own unsubscribe callback.
Ah, that's an interesting idea.
I'm unsure how each subscriber gets its own unsubscribe callback. The init callback is only called once, and so returns only the single value, right?
Or is the subscribe callback called every time someone starts listening, so the stream can potentially act different to different listeners? That seems like it would be hard to make compatible with a multi-listener approach.
Right. It's not quite per-listener; as in your design, the EventStream class copes with multiple simultaneous listeners.
But when the number of listeners on an EventStream goes from 0 to 1, it calls the subscribe hook; when it goes from 1 to 0, it calls the unsubscribe hook.
This is because Bacon turns off all the taps when no one's listening. Futures are not like that.
Example:
var clock = Bacon.interval(100, "tick");
clock.take(5).log();
setTimeout(() => clock.take(5).log(), 2000);
The call to .log() on line 2 causes .take(5) to have a consumer, so clock's subscribe hook is called. We log some events; 500 msec later, .take(5) receives its fifth event, so it's all done. It unsubscribes itself and floats away.
clock goes to 0 consumers, so it calls the unsubscribe hook.
At 2000 msec, clock once again has a downstream consumer, so its subscribe hook is called a second time.
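Stripped down to plain JS, that bookkeeping is just the 0-to-1 and 1-to-0 transitions. A sketch of the pattern (not Bacon's actual implementation):

function lazyStream(subscribe) {             // subscribe(sink) -> unsubscribe
  var listeners = [], stop = null;
  function sink(value) {
    listeners.slice().forEach(function (l) { l(value); });
  }
  return {
    listen: function (l) {
      listeners.push(l);
      if (listeners.length === 1) stop = subscribe(sink);      // 0 -> 1: turn the tap on
      return function unlisten() {
        var i = listeners.indexOf(l);
        if (i === -1) return;                                   // already removed
        listeners.splice(i, 1);
        if (listeners.length === 0) { stop(); stop = null; }    // 1 -> 0: turn it off
      };
    }
  };
}

// A clock that only actually ticks while someone is listening:
var clock2 = lazyStream(function (sink) {
  var id = setInterval(function () { sink("tick"); }, 100);
  return function () { clearInterval(id); };
});
var unlisten = clock2.listen(function (v) { console.log(v); });
setTimeout(unlisten, 500);   // interval cleared once the last listener leaves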
- Additionally, Bacon's equivalent of the EventStreamResolver.push() method can return a special value (Bacon.noMore) that means "unsubscribe me".
That just kicks out all the listeners to the stream? Or does it end the stream? Or do you mean something else, given that you use the pronoun "me", which implies it's the listener which somehow sends the signal? If the latter, you're confused about the role of a stream resolver.
Bacon's equivalent of EventStreamResolver.push() returns Bacon.noMore when it finds that the number of listeners has gone to 0.
This is indeed usually caused by the last listener unsubscribing by returning Bacon.noMore; and this is the point where I think it'll be quickest to cut short the discussion and just read some Bacon source code.
This is called for every event: raimohanska/bacon.js/blob/6318160839d76ed4ce4eceeefe5d0d78b8e45403/src/Bacon.coffee#L777
Bacon's equivalent of EventStreamResolver.push() returns Bacon.noMore when it finds that the number of listeners has gone to 0.
It should also be noted that streams2 in node return false from push() when the stream is "full". Generally, returning a value from push() as some kind of message to the source to either "pause" or "abort" is a good strategy.
One bigger question: what is the DOM use case for event streams?
That is, it's very clear what the DOM use cases are for binary data streams. (Most urgently, streaming XHR, but also perhaps unifying the many interfaces that use object URLs as a means of connecting separate streams of data; also exposing the browser's GZIP capabilities; and so on [1].)
But for event streams it's less clear what urgent problem they solve. The example you've shown so far is basically just a different way of doing Object.observe, with some nice sugar and of course those combinators. But the basic capabilities of the platform are not expanded, and sugar seems like a library-level concern. Nevertheless, there's many allusions to DOM use cases in your blog posts, so a listing of those would be helpful.
In other words: if there are many use cases for the DOM where event streams make sense, great! In the spirit of standardizing promises, it's good to standardize a common idiom so we don't do things in many different ways across the DOM APIs. But if the only use case is just to notify of property changes, Object.observe handles that nicely without streams. What else needs event streams?
I've updated my blog post with a refactoring of the API: www.xanthir.com/b4PV0.
I hadn't fully appreciated the underlying abstraction of futures (that they are the representation of async control flow), and so I was designing the stream API in an inconsistent way. This has been fixed, so now Event Streams are far more consistent. The core function is .then(), and its callbacks can all be called multiple times. (Streams can adopt other streams, and when those streams reject, listeners' reject callbacks are called.) If I have engineered things correctly, this makes Streams consistently represent an async loop.
I've also reorganized the blog post for easier reading, separating out the "basic" operations from the constructor functions and the combinators. I've also made a saner API for ValueStream (previously called UpdateStream), and temporarily killed the text about a lossless single-listener stream until I'm sure I've got the semantics right for the lossy multi-listener cases.
The big thing I'm concerned with is the semantics of adoption. I'm very unsure I've gotten it right, but I've stared at it too long for now, and can't think about it anymore.
On Thu, Apr 18, 2013 at 4:26 PM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
(narrowing to the part that seems most productive) On Wed, Apr 17, 2013 at 9:11 PM, Tab Atkins Jr. wrote:
On Wed, Apr 17, 2013 at 5:50 PM, Jason Orendorff wrote:
Bacon offers two equivalent ways of unsubscribing.
- Bacon's equivalent of the StreamInit callback returns an unsubscribe function. Each subscriber therefore gets its very own unsubscribe callback.
Ah, that's an interesting idea.
I'm unsure how each subscriber gets its own unsubscribe callback. The init callback is only called once, and so returns only the single value, right?
Or is the subscribe callback called every time someone starts listening, so the stream can potentially act different to different listeners? That seems like it would be hard to make compatible with a multi-listener approach.
Right. It's not quite per-listener; as in your design, the EventStream class copes with multiple simultaneous listeners.
But when the number of listeners on an EventStream goes from 0 to 1, it calls the subscribe hook; when it goes from 1 to 0, it calls the unsubscribe hook.
This is because Bacon turns off all the taps when no one's listening. Futures are not like that.
Hm, I'm not sure I'm grasping exactly what this does or how it does it. I'm not sure what Bacon's Dispatcher concept maps to, or how its API actually works.
This is indeed usually caused by the last listener unsubscribing by returning Bacon.noMore; and this is the point where I think it'll be quickest to cut short the discussion and just read some Bacon source code.
This is called for every event: raimohanska/bacon.js/blob/6318160839d76ed4ce4eceeefe5d0d78b8e45403/src/Bacon.coffee#L777
I don't understand how the subscribers reply to the dispatcher with a value. I think there's a decent mismatch in API shapes, which is making it hard for me to follow.
~TJ
On Thu, Apr 18, 2013 at 5:11 PM, Domenic Denicola <domenic at domenicdenicola.com> wrote:
One bigger question: what is the DOM use case for event streams?
That is, it's very clear what the DOM use cases are for binary data streams. (Most urgently, streaming XHR, but also perhaps unifying the many interfaces that use object URLs as a means of connecting separate streams of data; also exposing the browser's GZIP capabilities; and so on [1].)
But for event streams it's less clear what urgent problem they solve. The example you've shown so far is basically just a different way of doing Object.observe, with some nice sugar and of course those combinators. But the basic capabilities of the platform are not expanded, and sugar seems like a library-level concern. Nevertheless, there's many allusions to DOM use cases in your blog posts, so a listing of those would be helpful.
In other words: if there are many use cases for the DOM where event streams make sense, great! In the spirit of standardizing promises, it's good to standardize a common idiom so we don't do things in many different ways across the DOM APIs. But if the only use case is just to notify of property changes, Object.observe handles that nicely without streams. What else needs event streams?
Almost every use-case for streams could be done by just exposing the data as a property on some object and using Object.observe. That loses all the really wonderful control-flow properties of streams, though. One simple but very nice example is explained in my blog while illustrating the switch() method. I'll reproduce it here.
Say you want to provide autocomplete suggestions as the user types into some field, based on data on your server. Using today's technologies, this is absolutely possible:
- Register an "input" listener on the input.
- Possibly throttle the input events to keep them from coming in too quickly.
- As the input events come in, construct an XHR to retrieve suggestions from your server.
- As each XHR finishes, verify that the results aren't already obsolete by a later XHR returning faster.
- If they're not obsolete, update your UI with the returned suggestions.
Actually writing the code to do all this is surprisingly non-trivial. With EventStream and Future, though, it becomes trivial:
EventStream.listen(input, "input")
           .throttle(100)
           .map(e => getJSON('example.com', e.target.value))
           .switch()
           .then(updateUI);
(Assuming that XHR defines a function named getJSON() which returns a Future.)
That's literally all the code you need. You can even trivially handle and recover from XHR errors by adding a second callback to the map() call.
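For comparison, here's roughly what the manual version of those five steps looks like with plain DOM APIs. This is only a sketch - the endpoint, the JSON response handling, and the 100ms delay are made up, and error handling is still omitted:

var timer = null, latestRequest = 0;
// input and updateUI: same assumptions as the stream version above.
input.addEventListener("input", function () {
  clearTimeout(timer);                          // throttle the input events by hand
  timer = setTimeout(function () {
    var requestId = ++latestRequest;            // tag the request so stale responses can be ignored
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "https://example.com/suggest?q=" + encodeURIComponent(input.value));
    xhr.onload = function () {
      if (requestId !== latestRequest) return;  // a newer request has superseded this one
      updateUI(JSON.parse(xhr.responseText));
    };
    xhr.send();
  }, 100);
});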
Even for the "basic" use-cases like I've pointed to in my Font Load API proposal, event streams make code way easier to write. Try writing code that just updates the UI to tell whether there are currently any fonts loading, based on Object.observe(). It's less trivial than one would think:
Object.observe(document.fonts, changes => {
  if (changes.filter(change => change.type === "updated" &&
                               change.name === "loadStatus").length)
    updateLoadingUI(document.fonts.loadStatus);
});
Most of this is completely boilerplate, and occupied solely with "fixing" the data from Object.observe into a better form. On the other hand, with a ValueStream:
ValueStream.watch(document.fonts, 'loadStatus').squash().then(updateLoadingUI);
(And I'm considering folding "squash" into the default behavior of ValueStreams, too, so it would be even simpler.)
Yes, a lot of the weight of Object.observe could be lessened by syntax sugar. At least part of the point of ValueStream is to be that syntax sugar.
Ultimately, though, I'll just point you back to your own blog post at domenic.me/2012/10/14/youre-missing-the-point-of-promises. ^_^ Just like how Futures capture the notion of async control flow and errors, Streams capture the same for loops.
Makes sense, and thanks for clarifying!
I guess my only hesitation is that promises evolved over many years, with the design we see in Promises/A+ today and its many implementations, including DOM Futures, being the result of convergent evolution in library-space. While your sugar is nice, I'd be hesitant to bless it as the one true async-loops sugar without at least some library-space evolution.
In other words, why not try releasing your event streams as a library, and see what kind of adoption they get? If libraries as diverse as jQuery, WinJS, Ember, Angular, Dojo, and YUI end up all having some form of event stream in roughly that format, you know you're on to a winner :).
It's a different scenario from promises, since they're part of the API contract that needs to be exposed when designing web APIs. It sounds like the fundamental API contract for your case is just a changing property, and the method of consuming that changing property could be done with sugar from any library. (Indeed, this is somewhat like how the fundamental API contract for async DOM APIs is now to return a relatively feature-less DOMFuture, which consumers can use alongside any full-featured promise library in order to get the sugar they desire.)
From another perspective, it would be somewhat of a shame to quash the nascent FRP-in-JS industry in its tracks by handing down an API from on high in the WHATWG. Already we're seeing very diverse implementations, from RxJS to Bacon.js to the various Node.js experiments (which range from "core" object-mode streams to the user-land experiments that seem to have sprung up in the last few months, like Dominic Tarr's pull-stream). It reminds me of the .NET space, where Microsoft's entry into an area—whether it be ORMs with Entity Framework, or package managers with NuGet, or build tools with MSBuild—immediately grabbed the majority of developer mindshare, despite arguably-better options being in development and use.
Finally, on the issue of async loops, I'd argue that it's a tentative analogy. It's at least not nearly as clear how you could do a syntactic transformation like you can with promises, e.g. by introducing coroutines (cf. await). In fact, async loops to me seem like they'd be expressed as

while (await asyncCondition()) {
  await asyncAction();
}

or, of course, its ES6 counterpart with yield and task.js-style wrappers.
On Thu, Apr 18, 2013 at 7:20 PM, Domenic Denicola <domenic at domenicdenicola.com> wrote:
Makes sense, and thanks for clarifying!
I guess my only hesitation is that promises evolved over many years, with the design we see in Promises/A+ today and its many implementations, including DOM Futures, being the result of convergent evolution in library-space. While your sugar is nice, I'd be hesitant to bless it as the one true async-loops sugar without at least some library-space evolution.
In other words, why not try releasing your event streams as a library, and see what kind of adoption they get? If libraries as diverse as jQuery, WinJS, Ember, Angular, Dojo, and YUI end up all having some form of event stream in roughly that format, you know you're on to a winner :).
Yes, I'm going to start collecting event stream-like usages among existing libraries now, and documenting what all they do. This thread has already been quite useful. ^_^
It's a different scenario from promises, since they're part of the API contract that needs to be exposed when designing web APIs. It sounds like the fundamental API contract for your case is just a changing property, and the method of consuming that changing property could be done with sugar from any library. (Indeed, this is somewhat like how the fundamental API contract for async DOM APIs is now to return a relatively feature-less DOMFuture, which consumers can use alongside any full-featured promise library in order to get the sugar they desire.)
Hmm, good point. The other major use of event streams - capturing DOM Events - is also doable purely with sugar for now.
Finally, on the issue of async loops, I'd argue that it's a tentative analogy. It's at least not nearly as clear how you could do a syntactic transformation like you can with promises, e.g. by introducing coroutines (cf. await). In fact, async loops to me seem like they'd be expressed as

while (await asyncCondition()) { await asyncAction(); }

or, of course, its ES6 counterpart with yield and task.js-style wrappers.
"await" floats all async boats, so it's a fully general counter-argument to async-helper concepts. ^_^
The more direct analogy for event streams is generators and the consumption/manipulation APIs you can run over them. They capture the notion of "loops" in a more general, functional way, and event streams are just a push-based generator (that is, a generator with an async data source).
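A quick sketch of that analogy (handleToken is a placeholder consumer, and the second half uses the proposal's hypothetical API): with a generator the consumer pulls values out with a loop, while with an event stream the producer pushes them into a callback, and the "loop body" is the same either way.

// Pull: the consumer drives the loop (handleToken: placeholder).
function* tokens() { yield "a"; yield "b"; yield "c"; }
for (var t of tokens()) handleToken(t);

// Push: the producer drives the "loop"; the body becomes a callback.
tokenStream.listen(handleToken);   // tokenStream: a hypothetical EventStream of tokens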
My first attempt at feedback for my Stream proposal was unfortunately bogged down with a lot of confusion over terminology and meaning. I'd like to start fresh and hopefully head off a lot of confusion early-on, so here's take two.
Now that DOM has added Futures, I've started looking into converting various event-based APIs into being future-based. This has been a great success, but there are some things that can't be turned into futures (because they update multiple times, or don't have a notion of "completing"), but have the same lack-of-need for the full DOM Event baggage. I think these cases will be fairly common, and further, that a good solution to the problem for DOM will be pretty useful for general programming as well.
This is explicitly not an attempt to solve the "binary"/"IO" stream use-case, as exemplified by Node Streams. While structurally similar, binary streams have a lot of unique features and pitfalls that make event streams a poor fit for them: they need to batch up data by default, they need to be able to apply backpressure, etc. I think we also need to develop such an API, but it'll be separate from this one. (I suspect it may look very similar, though, so it's good to keep that in mind when naming.)
So, without further ado, here's the basic API for my proposal for EventStreams:
This API is intentionally very similar to that of Futures, because it's intended to solve similar problems, and I think the shape of the Futures API is pretty good.
An EventStream represents a stream of events or values. It's roughly equivalent to the concept of "signals" or "event streams" from functional reactive programming, or the concept of an "observable" or "task" from several functional async programming models.
An EventStream pushes out 0 or more updates, then optionally completes or rejects. The .listen() function is the basic way to respond to an event stream, allowing you to register callbacks for any of those three events. It returns the same event stream back, for chaining.
For convenience, event streams have several functions that let you listen to just a single event, returning a Future. You can listen for the stream completing, rejecting, or for the next update (possibly filtered). Important note: consuming an event stream using repeated .next() calls rather than a single .listen() call is lossy - multiple updates can happen between the tick that .next() is called and the tick that the future resolves, and the future will only contain the value of the first one.
Like Futures, EventStreams separate the power to read/respond to an event stream and the power to update an event stream into two separate objects. The former is returned by the EventStream constructor, while the latter is passed into the constructor's callback argument. The resolver's methods control the event stream's state - .push() puts an update on the stream, which'll be passed to all the listen callbacks, .complete() and .reject() end the stream with the passed value/reason, while .continueWith() delegates to another stream. complete/reject/continueWith all kill the resolver, so that none of the methods work afterwards (maybe they throw?).
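Putting that together, here's a rough sketch of creating and consuming a stream under this API. The WebSocket wrapping is just an illustration, and the .listen(updateCB, completeCB, rejectCB) argument order is an assumption, not something that's been pinned down:

var messages = new EventStream(function (r) {
  var ws = new WebSocket("wss://example.com/feed");
  ws.onmessage = function (e) { r.push(e.data); };   // 0 or more updates...
  ws.onclose   = function ()  { r.complete(); };     // ...then complete,
  ws.onerror   = function (e) { r.reject(e); };      // or reject.
});

messages.listen(function (m) { console.log("got", m); },
                function ()  { console.log("stream finished"); },
                function (e) { console.log("stream failed", e); });

// Or grab just the next message as a Future (lossy if several messages
// arrive before .next() is called again):
messages.next().then(function (m) { console.log("first message:", m); });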
Additional Work
In my blog post, I sketch out several event stream combinators, which showcase the true usefulness of this kind of abstraction: you can manipulate and combine event streams far more easily, readably, and possibly more performantly than you can accomplish the same tasks with normal DOM Events or callbacks.
Some degree of buffering seems desirable, both for DOM use-cases and general ones:
Several DOM use-cases really want to be able to remember the "current" value (from the most recent update) - this applies to all the "watch a value changing" APIs, like I suggest at the end of my blog post. I think this can just be a subclass of EventStream, perhaps named UpdateStream, which automatically calls newly-attached listener callbacks with the current value before any updates, and which exposes a .value() function that is identical to .next(), but checks the current value first.
It seems that a bunch of manual use-cases would benefit from auto-buffering any updates until the first listener is attached (via .listen() or .next()). How do we accommodate the choice? Should this just be the default for manually-created event streams, with DOM use-cases defaulting to not buffering?
Right now, consuming streams piecemeal with .next() (rather than .listen()) is lossy. Should we have some way to force full buffering, so that if you're consuming it piecemeal, it waits until the next .next() call to inform you of updates?
The previous point is probably something we only want to expose for "single-listener" streams, which we should allow the creation of somehow. A single-listener stream could default to full buffering. Enforcing single-listening is easy - if someone calls .listen(), it's sealed to future .listen() or .next() calls until you unlisten. If it's not sealed, anyone can call .next() for the very next value, which means you can chain .next() calls safely. Maybe calling .next() multiple times should return futures for successive values? That sounds like it would match a lot of people's intuitions, and would match up well with using event streams for things like parsing streams (sketched below).
All streams need some way of unlistening. Suggestions welcome as to how best to do this. Maybe calling .listen() should actually return a new stream slaved to the original, and you can just call .unlisten() on it to destroy the listeners? That avoids the "just pass the original callback" problem when you have anonymous callbacks. That still means that "x.listen(); x.listen(); x.unlisten();" would destroy both sets of callbacks, but I dunno how best to solve this.
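Sketching those last two points (hypothetical API again; tokenStream and clicks are placeholder streams):

// Buffered single-listener consumption: each .next() resolves with the
// next value in order, with nothing dropped in between.
tokenStream.next().then(function (t) { console.log("token 1:", t); });   // tokenStream: hypothetical single-listener stream
tokenStream.next().then(function (t) { console.log("token 2:", t); });

// Unlistening via the slaved stream returned by .listen(), so anonymous
// callbacks don't have to be kept around just to remove them later:
var sub = clicks.listen(function (e) { console.log("click", e); });      // clicks: hypothetical EventStream
sub.unlisten();   // tears down just this set of callbacks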
I think that's about it for now. ^_^ Hopefully this time my intentions and goals are clearer, so we can start from a common slate rather than arguing about definitions and getting confused!
~TJ