Katelyn Gadd (2015-02-15T15:06:58.000Z)
Replies inline

On 15 February 2015 at 06:16, Benjamin (Inglor) Gruenbaum
<inglor at gmail.com> wrote:
>> In my testing (and in my theory, as an absolute) this is a real
>> performance defect in the spec and it will make iterators inferior to
>> all other forms of sequence iteration, to the extent that they may end
>> up being used very rarely, and developers will be biased away from Map
>> and Set as a result.
>
> Why?

To quote my original message:

In my testing Map and Set are outperformed by a trivial Object or
Array based data structure in every case, *despite the fact* that
using an Object as a Map requires the use of Object.keys() to be able
to sequentially iterate elements. The cost of iterator.next() in V8
and SpiderMonkey is currently extremely high, and profiling shows
all the time is being spent in object creation and GC. (To be fair,
self-hosting of iteration might improve on this some.)


To make this clearer: I have an existing dictionary container based on
Object and Object.keys. Replacing this with Map and Map iterator
produced a slowdown, and profiles showed almost all the time was being
spent in result object creation and GC.
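For concreteness, the two iteration styles being compared look roughly like this (a minimal sketch; the actual container and its value types are not shown in the thread):

```javascript
// Object-as-dictionary: Object.keys() allocates one array up front,
// after which iteration is plain index access with no per-step allocation.
function sumObjectDict(dict) {
  const keys = Object.keys(dict);
  let total = 0;
  for (let i = 0; i < keys.length; i++) {
    total += dict[keys[i]];
  }
  return total;
}

// Map: each step of the loop calls iterator.next(), which per spec returns
// a fresh { value, done } result object (and, for entries, a fresh
// [key, value] array as well) on every single iteration.
function sumMap(map) {
  let total = 0;
  for (const [, value] of map) {
    total += value;
  }
  return total;
}
```

The Object version pays one allocation per traversal; the Map version pays two per element, which is what shows up in the profiles described above.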

>
> Allocating a new object isn't really expensive; in the cases where it is
> and it introduces GC pressure, a clever engine can optimize it away by not
> really allocating a new object where one is not required. Your suggestion
> can already be fully implemented by the engine in the cases you describe
> (where no reference is kept), so there is no real speed benefit from it
> performance-wise. However, in the case where an iteration result is kept
> for later use, this can be destructive and create unpredictable results.

It's expensive. In general object allocation & GCs show up in profiles
for almost every performance-sensitive application I work on. I'm sure
there are applications where this is not the case, though.
Also, to quote my original message:

I think that past APIs with this problem (for example TypedArray.set)
have proven that 'a sufficiently smart VM can optimize this' is not
representative of real VMs or real use cases.


I am sure that given a sufficient amount of compiler engineering,
the allocations could be optimized out entirely, but this is far from
a trivial case, so odds are it will not happen soon.
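To see why this is a hard escape-analysis problem, here is roughly what a for...of loop over a Map desugars to (the function name below is illustrative, not from the thread):

```javascript
// Approximate desugaring of `for (const [k, v] of map) ...`.
// To remove the allocations, the VM must prove that neither `step` nor
// `step.value` escapes the loop body -- across a polymorphic next() call.
function sumEntries(map) {
  const iter = map[Symbol.iterator]();
  let total = 0;
  let step;
  while (!(step = iter.next()).done) { // step: a new object every iteration
    total += step.value[1];            // step.value: a new array every iteration
  }
  return total;
}
```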

>
> Also - of course for... of and iterating iterators right now is slower than
> iterating in ways that have been there for years. Modern JS engines usually
> add support for features and only then optimize them.

My point is that for my current test cases, if the .next()
allocation were optimized out via a spec revision and a *very trivial*
change to current implementations, the performance would be great. The
bottleneck would go away. This is much, much easier than having to
introduce sophisticated escape analysis & allocation removal into JS
VMs.
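A minimal sketch of the semantics such a spec revision would permit (hypothetical; this is not how Map or Array iterators actually behave today): the iterator mutates and returns a single result object instead of allocating a fresh one per step.

```javascript
// Hypothetical reusing iterator: one result object for the whole traversal.
function reusingIterator(array) {
  let i = 0;
  const result = { value: undefined, done: false }; // allocated once
  return {
    [Symbol.iterator]() { return this; },
    next() {
      if (i < array.length) {
        result.value = array[i++];
        result.done = false;
      } else {
        result.value = undefined;
        result.done = true;
      }
      return result; // same object every call, mutated in place
    },
  };
}
```

This is also exactly the hazard Benjamin raises: a caller that stores the result of one next() call will see it clobbered by the following call, since every call returns the same mutated object.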