Andreas Rossberg (2014-10-24T11:19:54.000Z)
On 22 October 2014 16:45, Mark S. Miller <erights at google.com> wrote:
> On Tue, Oct 21, 2014 at 11:59 PM, Andreas Rossberg <rossberg at google.com>
> wrote:
>>
>> On 21 October 2014 22:31, Mark S. Miller <erights at google.com> wrote:
>> >> in saying that [weak maps] are not designed to work efficiently in that
>> >> manner.
>> >
>> > Actually, they are. The problem is that initial implementations date
>> > from
>> > before we appreciated the need for the transposed representation. The
>> > best
>> > thing TC39 can do for the future of WeakMaps is to proceed assuming the
>> > transposed representation.
>>
>> While I sympathise, let me clarify that this still remains a
>> conjecture. AFAIK, nobody has proved it empirically in a JS
>> implementation yet; we don't know in detail how complex such an
>> implementation would be, nor what side effects it might have on
>> general performance (e.g., via added polymorphism). It's most likely
>> not as easy or as clear a win as you may think it is.
>
>
> The following code is an existence proof of sorts that the transposed
> representation is achievable, given only the WeakMap mechanisms you've
> already built + one bit of magic whose feasibility I hope we don't
> need to debate. I use ES5 here to make clear that no other magic or
> unengineered features are assumed.
>
>
> var WeakMap;
>
> (function() {
>    'use strict';
>
>    var SlowWeakMap = WeakMap;
>
>    function FastWeakMap() {
>      var token = Object.freeze(Object.create(null));
>      return Object.freeze({
>        get: function(key) {
>          var shadow = key.[[Shadow]];
>          return shadow ? shadow.get(token) : void 0;
>        },
>        set: function(key, value) {
>          var shadow = key.[[Shadow]];
>          if (!shadow) {
>            shadow = new SlowWeakMap();
>            key.[[Shadow]] = shadow;
>          }
>          shadow.set(token, value);
>        },
>        clear: function() {
>          // Dropping the token orphans all existing entries in the
>          // shadows; they become unreachable and can be collected.
>          token = Object.freeze(Object.create(null));
>        }
>      });
>    }
>
>    // Don't do this until it is a complete shim
>    // WeakMap = FastWeakMap;
>
>  }());
>
>
>
> The magic is that each object, whether frozen or not, would need in effect
> one extra internal mutable [[Shadow]] property.
>
> Clearly, this expository implementation is suboptimal in many ways. But it
> demonstrates the following:
>
> * It provides the full complexity-measure gains that a realistic
> implementation would have.
>
> * For each object used as a key, an extra SlowWeakMap instance is
> allocated as its shadow.
>
> * For each access, an extra indirection through this SlowWeakMap is
> therefore needed.
>
> * Only objects that have been used as keys in FastWeakMaps would ever have
> their [[Shadow]] set, so this could also be allocated on demand, given only
> a bit saying whether it is present. Besides the storage of this bit, there
> is no other effect or cost on any non-weakmap objects.
>
> * Since non-weakmap code doesn't need to test this bit, there is zero
> runtime cost on non-weakmap code.
>
> * Whether an object has been used as a key or not (and therefore whether an
> extra shadow has been allocated or not), normal non-weak property lookup on
> the object is unaffected, and pays no additional cost.
>
> A realistic implementation should seek to avoid allocating the extra shadow
> objects. However, even if not, we are much better off with the above scheme
> than we are with the current slow WeakMap.
>
> Of course, we should proceed towards realistic implementations asap and get
> actual empirical data. But the above demonstration establishes that the
> issue in this thread should be considered settled.

I appreciate your analysis, but I respectfully disagree with your conclusion.

- The extra slot and indirection for the [[Shadow]] property is an
extra cost. V8 already has a very similar mechanism, used to implement
the "hidden properties" exposed by its API (and, in fact, hash codes),
but it is known to be too slow.

- Optimising away this slot is difficult -- for starters, because it
will have various secondary effects: every addition or removal of an
object to/from a weak map can result in a layout change for that
object, increasing spurious polymorphism and potentially
invalidating/deoptimising existing code (see the sketch after this
list).

- Worse, once you flatten weak properties into objects, even GC can
cause object layouts to change (collecting a dead weak map means
removing the corresponding slot from every surviving key), which is a
whole new dimension of complexity.

- On the other hand, if you do _not_ do this flattening, performance
is unlikely to ever be competitive with true private properties (we
introduced private symbols internally in V8 precisely because the
extra indirection for hidden properties was too costly).
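
To make the layout concern concrete, here is a rough sketch (Point,
getX, and __shadow__ are made-up names; __shadow__ merely simulates
the hypothetical [[Shadow]] slot with an ordinary property):

  function Point(x, y) { this.x = x; this.y = y; }
  function getX(p) { return p.x; }

  var p1 = new Point(1, 2);
  var p2 = new Point(3, 4);

  // p1 and p2 share one hidden class, so the property access in getX
  // is monomorphic and can be compiled to a single offset load.
  getX(p1);
  getX(p2);

  // Simulate flattening a weak-map shadow into the key: adding a
  // slot to p1 alone gives it a new hidden class.
  p1.__shadow__ = {};

  // The same call site now sees two layouts; its inline cache goes
  // polymorphic, and optimised code that assumed a single layout may
  // have to be thrown away.
  getX(p1);
  getX(p2);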

I'm still sceptical that we can actually resolve this dilemma,
especially since this is supposed to be a solution for private state.
It seems to me that the performance implications of weakness are
fundamentally at odds with the desire for efficient private state.
The usual pattern below shows where that cost arises.
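
For reference, the usual WeakMap-as-private-state pattern looks like
this (Counter is a made-up example); every method pays a map lookup
per access, which is exactly the indirection that true private
properties avoid:

  var counterState = new WeakMap();

  function Counter() {
    counterState.set(this, { count: 0 });
  }

  Counter.prototype.increment = function () {
    var state = counterState.get(this);  // indirection on every call
    state.count += 1;
    return state.count;
  };

  var c = new Counter();
  c.increment();  // 1
  c.increment();  // 2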

/Andreas