Performance of iterator .next() as specified

# Katelyn Gadd (9 years ago)

As specified, iterator .next() seems to be required to return a new object instance for each iteration.
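
For illustration, here is a minimal sketch of a hand-written iterator that follows the protocol as specified (not from the original message; the names are illustrative). Note that every call to next() allocates a fresh result object:

```js
// Minimal spec-style iterator over an array (illustrative sketch).
function makeIterator(array) {
  let index = 0;
  return {
    [Symbol.iterator]() { return this; },
    next() {
      // A new { value, done } object is allocated on every call.
      return index < array.length
        ? { value: array[index++], done: false }
        : { value: undefined, done: true };
    }
  };
}

for (const x of makeIterator([1, 2, 3])) {
  console.log(x); // 1, 2, 3
}
```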

In my testing (and, as a matter of principle, unavoidably) this is a real performance defect in the spec, and it will make iterators inferior to all other forms of sequence iteration, to the extent that they may end up being used very rarely, and developers will be biased away from Map and Set as a result.

The issue here is that the new object requirement means that every iteration produces GC pressure. I think that past APIs with this problem (for example TypedArray.set) have proven that 'a sufficiently smart VM can optimize this' is not representative of real VMs or real use cases.

In the specific case of .next(), the method returning a new object on every iteration does not produce any actual improvement to usability: There is no realistic use case that requires saving multiple next() results from the same sequence, as the sequence itself represents (at least in most cases) a container or generated value sequence that is fully reproducible on demand.

I think it would be a very good decision here to allow (or require) implementations to return the same object instance from every .next() call, or perhaps, as a usability compromise, to reuse a pair of objects on a round-robin basis (so that you can keep the current and prior result around).
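
A sketch of what such a reusing iterator could look like (hypothetical; this is not spec-conformant today and the names are made up):

```js
// Hypothetical non-allocating iterator: reuses a single result object.
function makeReusingIterator(array) {
  let index = 0;
  const result = { value: undefined, done: false }; // shared across calls
  return {
    [Symbol.iterator]() { return this; },
    next() {
      if (index < array.length) {
        result.value = array[index++];
        result.done = false;
      } else {
        result.value = undefined;
        result.done = true;
      }
      return result; // the same object every time, so no GC pressure
    }
  };
}
```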

In my testing Map and Set are outperformed by a trivial Object- or Array-based data structure in every case, even though using an Object as a map requires Object.keys() to iterate its elements sequentially. The cost of iterator.next() in V8 and SpiderMonkey is currently very high, and profiling shows all the time being spent in object creation and GC. (To be fair, self-hosting of iterations might improve on this some.)
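
The kind of comparison described above looks roughly like this (an illustrative sketch only; absolute numbers vary widely by engine and version):

```js
// Rough micro-benchmark sketch: Map iteration vs. a plain Object + Object.keys().
const N = 100000;
const map = new Map();
const obj = Object.create(null);
for (let i = 0; i < N; i++) { map.set('k' + i, i); obj['k' + i] = i; }

console.time('Map for..of');
let sum1 = 0;
for (const [, v] of map) sum1 += v;       // each step yields a fresh { value, done }
console.timeEnd('Map for..of');

console.time('Object.keys + indexed loop');
let sum2 = 0;
const keys = Object.keys(obj);
for (let i = 0; i < keys.length; i++) sum2 += obj[keys[i]]; // no iterator objects
console.timeEnd('Object.keys + indexed loop');
```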

Oddly enough, I consider the ES iterator spec to be a big improvement over C#'s IEnumerable, in terms of usability/API. But this is an area where it is intrinsically worse performance-wise than IEnumerable and that's unfortunate.

# Andrea Giammarchi (9 years ago)

+1, and I raised the same concerns two years ago [1].

IIRC the outcome was that VMs should be good enough to handle objects with very short lifecycles. I'm still convinced (backed by tests) that generators are overkill for IoT devices (low clock speeds and far less RAM).

Always having the same object per iteration makes sense to me, at least until iteration is done; it could be just a struct-like {done: false, value: null} object, and the GC would be happier than ever.

[1] webreflection.blogspot.co.uk/2013/06/on-harmony-javascript-generators.html

# Katelyn Gadd (9 years ago)

I'm certainly in favor of VMs improving to handle that, and adding pressure for it is good. However, optimizing a TypedArray temporary arg to .set() is a much simpler problem than doing the escape analysis necessary to be certain a .next() result doesn't escape from a calling scope and isn't used after a later next() call. Applying pressure will be a good way to make sure VM authors do the work necessary for this to happen, but if iterators are unacceptably slow in shipping implementations for a year+ I think the odds are good that most shipping software will avoid using them, at which point VM authors will have no reason to optimize for primitives nobody uses. =[

The fixed layout of the iterator object would allow the GC to allocate it cheaply and in the case of values (like ints) it wouldn't need to trace it either - so that helps a lot. But I don't know how realistic those optimizations are in practice.

# Yusuke SUZUKI (9 years ago)

It's very timely to me.

I'm now upgrading JavaScriptCore's iterator interface to align to the latest ES6 spec. However, since it allocates JS objects, it would cause performance degradation.

Currently, I'm planning to add a loophole to improve performance of iterating builtin JS objects. But if possible, I think it is preferable to change the spec to improve performance of iteration for user provided iterators.

# Andreas Rossberg (9 years ago)

On 15 February 2015 at 12:07, Katelyn Gadd <kg at luminance.org> wrote:

I'm certainly in favor of VMs improving to handle that, and adding pressure for it is good. However, optimizing a TypedArray temporary arg to .set() is a much simpler problem than doing the escape analysis necessary to be certain a .next() result doesn't escape from a calling scope and isn't used after a later next() call. Applying pressure will be a good way to make sure VM authors do the work necessary for this to happen, but if iterators are unacceptably slow in shipping implementations for a year+ I think the odds are good that most shipping software will avoid using them, at which point VM authors will have no reason to optimize for primitives nobody uses. =[

Engines are still ramping up on ES6 features, and most probably haven't been able to put many resources into optimisations yet (ES6 is large!). You can't compare them to old features that have been tuned for a decade and expect equal performance. This situation is unfortunate but unavoidable.

Either way, it doesn't justify cutting corners in the semantics in a naive and premature attempt to optimise (which might even harm more high-level optimisations in the long run). Shared mutable result objects would be a horrible API with footgun potential, especially when you start to build stream abstractions on top. FWIW, this has been discussed at length at some of the meetings.
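
To illustrate the footgun (a sketch, not from the discussion): any consumer that holds on to result objects, e.g. to buffer part of a stream, silently breaks if next() reuses one object:

```js
// Buffer the next n results of an iterator.
function bufferN(iterator, n) {
  const buffered = [];
  for (let i = 0; i < n; i++) {
    buffered.push(iterator.next()); // assumes each result is a distinct object
  }
  return buffered;
}
// With a shared mutable result object, every element of `buffered` would end up
// pointing at the same object, i.e. only the last { value, done } would survive.
```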

The fixed layout of the iterator object would allow the GC to allocate it cheaply and in the case of values (like ints) it wouldn't need to trace it either - so that helps a lot. But I don't know how realistic those optimizations are in practice.

Not sure what you mean here. Unfortunately, result objects are still mutable and extensible, so anything can happen.

# Andrea Giammarchi (9 years ago)

Shared mutable result objects

FWIW, to me that could even be a singleton frozen {done: false, value: null} constant during iteration, and a new object only once done.

Would this work or speed anything up at all?

# Andreas Rossberg (9 years ago)

Frozen value null? I don't understand. You'd get the actual values from... where?

# Andrea Giammarchi (9 years ago)

Then a frozen {done: false} without even the value property ... would this work or speed anything up at all?

# Bradley Meck (9 years ago)

I think it is important not to attempt to pre-optimize and create partial solutions. It is fairly easy for the compiler to understand when [Symbol.iterator] is invoked and optimize at that point, the same concept as on-stack replacement. A better question, in my book, is whether any engine author is concerned about the difficulty of implementing such an optimization.

Having a shared object means much more escape tracking than if the object was not shared.

# Andrea Giammarchi (9 years ago)

Uhm, early send, so I guess I didn't answer.

A common pattern is to poll .next() on an iterator until its done property is true so that a value can be used.

This is, I believe, the common case, and it will create thousands of objects to be quickly trashed as garbage ... so I was wondering if those are all needed.
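
That is, something along these lines (sketch):

```js
function* someGenerator() { yield 1; yield 2; yield 3; }

const it = someGenerator();
let step;
while (!(step = it.next()).done) { // one fresh { value, done } object per step
  console.log(step.value);
}
```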

# Andrea Giammarchi (9 years ago)

Agreed. I see the .next() issue as similar to array.length in a for loop whose body does not mutate the array at all: in that case there's no need to invoke the getter for the length at each iteration, so I guess one day we might have similar optimizations in place for generators.
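
Roughly, the analogy is (sketch):

```js
const arr = [1, 2, 3, 4];
let total = 0;
// An engine can hoist the arr.length read out of the loop when the body
// provably doesn't mutate arr...
for (let i = 0; i < arr.length; i++) {
  total += arr[i];
}
// ...much as it could, in principle, eliminate the { value, done } allocation
// when a next() result provably never escapes the loop.
```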

'Till that day, though ... I'd rather use old-school events for bigger IoT projects; those have always played quite nicely (and have been sharing the same event object for years now without a problem).

# Andreas Rossberg (9 years ago)

On 16 February 2015 at 15:41, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:

A common pattern is to poll .next() on an iterator until its done property is true so that a value can be used.

This is, I believe, the common case, and it will create thousands of objects to be quickly trashed as garbage ... so I was wondering if those are all needed.

Er, I don't think this is the common use case at all. You iterate over something to process values, otherwise there isn't much point in using iterators in the first place.

# Andrea Giammarchi (9 years ago)

In fact, you can specify a generator function with an infinite loop (like the infamous while (true) { .. }) that essentially never finishes. While that's usually madness or a mistake in a normal JS program, with generator functions it's perfectly sane and sometimes exactly what you want to do!

That is probably the most-read piece about JS generators: davidwalsh.name/es6-generators

Hence the potential problem caused by while (!yielded.next().done): thousands of objects ... I agree that's not the best way to go with generators, but I've seen many code bases built around similar automagic resolution.
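
For instance (sketch):

```js
// An intentionally infinite generator, perfectly sane as generator code:
function* naturals() {
  let n = 0;
  while (true) yield n++;
}

for (const n of naturals()) { // each step allocates a { value, done } object
  if (n >= 100000) break;     // ~100000 short-lived result objects by now
}
```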

Another example is going async with the runGenerator function: davidwalsh.name/async-generators


```js
// run (async) a generator to completion
// Note: simplified approach: no error handling here
function runGenerator(g) {
    var it = g(), ret;

    // asynchronously iterate over generator
    (function iterate(val){
        ret = it.next( val );

        if (!ret.done) {
            // poor man's "is it a promise?" test
            if ("then" in ret.value) {
                // wait on the promise
                ret.value.then( iterate );
            }
            // immediate value: just send right back in
            else {
                // avoid synchronous recursion
                setTimeout( function(){
                    iterate( ret.value );
                }, 0 );
            }
        }
    })();
}
```
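
For completeness, typical usage would look something like this (a hypothetical example, not from the article; fetchNumber is made up):

```js
function fetchNumber() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(42); }, 10);
  });
}

runGenerator(function* main() {
  var a = yield fetchNumber(); // waits on the promise, resumes with 42
  var b = yield fetchNumber(); // two { value, done } objects allocated so far
  console.log(a + b);          // 84
});
```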

And since one of the purposes/side effects of generators is to make code look synchronous, I don't think Kyle went too far or anything; he simply showed a pattern that I've already seen applied here and there.

So yes, behind libraries or in users' hands, the amount of garbage can be "too damn high".

# Brendan Eich (9 years ago)

Andreas Rossberg wrote:

Er, I don't think this is the common use case at all. You iterate over something to process values, otherwise there isn't much point in using iterators in the first place.

Right.

Instead of coming up with some bogo-API with a mutable singleton object having pigeon-hole problems, how about engine hackers actually do some optimizing? I remember in the Mosaic days, Eric Bina replied to someone who decried the difficulty of fixing some ostensibly hard bug by writing the patch. :-|

# Till Schneidereit (9 years ago)

FWIW I have a patch pending in bug 1129313 that makes for..of 4x as fast for Maps. That is without even having scalar replacement working yet, which probably explains why V8 is still 2x as fast. JSC is about 10% slower than patched SpiderMonkey. As explained in comment 12 of that bug, I haven't found a way to do the same thing - iterating over all entries (i.e., properties) and getting both key and value - anywhere near as fast on a plain object. That probably means I'm overlooking something, but at the very least I think the concerns about this being hard to optimize might be slightly overblown.

# Filip Pizlo (9 years ago)

On Feb 16, 2015, at 3:30 AM, Yusuke SUZUKI <utatane.tea at gmail.com> wrote:

It's very timely to me.

I'm now upgrading JavaScriptCore's iterator interface to align to the latest ES6 spec [1]. However, since it allocates JS objects, it would cause performance degradation.

JavaScriptCore has a pretty good object escape analysis and will not allocate short-lived objects in hot code (see here for the phase that eliminates allocations: trac.webkit.org/browser/trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp). If you find that your changes cause object allocation even after the optimizing JIT kicks in, then please file a bug to that effect.

Currently, I'm planning to add a loophole to improve performance of iterating builtin JS objects. But if possible, I think it is preferable to change the spec to improve performance of iteration for user provided iterators.

I'm skeptical that spec changes will be necessary.

# David Bonnet (9 years ago)

Sorry for bringing this up again, especially now that the specification has been finalised. However, I can’t really understand the rationale behind the design of Iterator.next().

There are at least three possible options for the behaviour of Iterator.next():

  1. Returning a value and throwing StopIteration when done.
  2. Returning {value, done} object with done = true when done.
  3. Returning a value and returning Symbol.iterationDone when done.

I understand that option 1 would require trapping the exception.

Option 2 would require creating an object at each call, which as discussed in this thread, would require non-trivial optimisations.

Option 3, however, seems IMHO to have no caveats, and would require adding another symbol (Symbol.iterationDone) along with Symbol.iterator.
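
A sketch of what option 3 could look like (Symbol.iterationDone does not exist; the local sentinel below is a stand-in):

```js
// Hypothetical option 3: a sentinel value instead of { value, done } objects.
const iterationDone = Symbol('iterationDone'); // stand-in for Symbol.iterationDone

function makeSentinelIterator(array) {
  let index = 0;
  return {
    next() {
      // No per-step allocation: return either the value itself or the sentinel.
      return index < array.length ? array[index++] : iterationDone;
    }
  };
}
```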

So why was option 2 chosen? Why not option 3? All I could find is a discussion that still refers to option 1: harmony:iterators

# Brendan Eich (9 years ago)

Generators (in a similar manner and for similar reasons as in PEP 380) can return a value via a return statement. This requires a {value, done} pair; you cannot do it with a sentinel such as your option (3).
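
Concretely (sketch): the generator's final return value travels in the same { value, done } pair, which a bare sentinel could not carry:

```js
function* g() {
  yield 1;
  return 42; // a generator's return value, distinct from its yielded values
}

const it = g();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 42, done: true }  <- needs both fields
console.log(it.next()); // { value: undefined, done: true }
```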

BTW at ECOOP in Prague this past week, I heard from folks (I forget whether it was Till Schneidereit or Andy Wingo) that the object-per-iteration cost avoidance optimizations are about to land in one of the top engines.