Benjamin (Inglor) Gruenbaum (2015-02-15T15:16:27.000Z)
On Sun, Feb 15, 2015 at 3:06 PM, Katelyn Gadd <kg at luminance.org> wrote:

> In my testing Map and Set are outperformed by a trivial Object or
> Array based data structure in every case, *despite the fact* that
> using an Object as a Map requires the use of Object.keys() to be able
> to sequentially iterate elements. The cost of iterator.next() in v8
> and spidermonkey is currently extremely profound and profiling shows
> all the time is being spent in object creation and GC. (To be fair,
> self-hosting of iterations might improve on this some.)
>
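For context, the comparison being described is roughly the following (a hypothetical sketch, not Katelyn's actual benchmark):

```js
// Hypothetical sketch: an Object iterated via Object.keys() with an
// indexed loop vs. a Map iterated through the iterator protocol.
const obj = Object.create(null);
const map = new Map();
for (let i = 0; i < 100000; i++) {
  obj['k' + i] = i;
  map.set('k' + i, i);
}

// Object-as-Map path: no iterator result objects are allocated.
let a = 0;
const keys = Object.keys(obj);
for (let i = 0; i < keys.length; i++) a += obj[keys[i]];

// Map path: every .next() call produces a { value, done } object.
let b = 0;
for (const v of map.values()) b += v;
```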

Yes, this is of course true, simply because iterating a Set/Map has not yet
been optimized in V8/SpiderMonkey; features are usually implemented first
and optimized later. Benchmarking current naive engine implementations in
order to draw conclusions about the spec is a very risky thing to do. It's
entirely possible for engines to optimize this. By the same logic I could
say that `const` has bad performance, because as currently implemented it
is much slower than `var` in V8.
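
To be concrete about where that cost comes from, here is a rough sketch
(mine, not engine code) of what a `for...of` loop over a Set does under the
iteration protocol - each `next()` call returns a fresh `{ value, done }`
object, and that per-element allocation is what shows up in the profiles:

```js
const set = new Set([1, 2, 3]);

// Roughly what `for (const v of set) { ... }` expands to:
const it = set[Symbol.iterator]();
let res;
while (!(res = it.next()).done) { // res is a *new* object on every call
  const v = res.value;
  // ...loop body...
}
```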

> I am sure that given some sufficient amount of compiler engineering,
> the allocations could be entirely optimized out, but this is far from
> a trivial case so odds are it will not happen soon.

Right, I agree with you there; it probably won't happen soon. Changing the
spec won't change that fact. Remember that the engine doesn't actually need
to allocate a new object in cases where it can safely reuse the old one
because it is no longer used - again, this can be solved at the engine level.
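
To illustrate when reuse is and isn't safe, a minimal sketch of my own:

```js
const map = new Map([[1, 'a'], [2, 'b']]);
const it = map[Symbol.iterator]();

const first = it.next();  // { value: [1, 'a'], done: false }
const second = it.next(); // { value: [2, 'b'], done: false }

// Per the current spec these are distinct objects:
console.log(first !== second);   // true
console.log(first.value[1]);     // 'a' - still intact

// If the engine had recycled a single result object here, `first`
// would have been clobbered by the second next() call - an observable
// difference, so reuse is only legal when the result doesn't escape.
```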

> My point is that for my current test cases, were the .next()
> allocation optimized out via a spec revision and *very trivial* change
> to current implementations, the performance would be great. The
> bottleneck would go away. This is much, much easier than having to
> introduce sophisticated escape analysis & allocation removal into JS
> VMs.

This is something that can be solved at the VM level. VMs already perform
much smarter optimizations than this. There is no reason to sacrifice code
correctness and the principle of least astonishment in order to get
acceptable performance. Benchmarking against "proof of concept"
implementations is counter-productive in my opinion. The engines are
already 'allowed' to reuse the same object when there is no observable
difference. Even if we allowed this 'optimization', it would take the
engines just as much time to implement, so there is no gain from it anyway.
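
For example (my illustration), in a typical loop the result object never
escapes, so an engine that proves this is already free to recycle or elide
it without any spec change:

```js
const map = new Map([['a', 1], ['b', 2]]);

let total = 0;
for (const value of map.values()) {
  total += value; // the { value, done } result is dead after each pass;
                  // nothing here could observe the engine reusing it
}
console.log(total); // 3
```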