Andreas Rossberg (2014-10-24T11:19:54.000Z)
d at domenic.me (2014-11-18T22:59:17.631Z)
On 22 October 2014 16:45, Mark S. Miller <erights at google.com> wrote:
> Clearly, this expository implementation is suboptimal in many ways. But it demonstrates the following:
>
> * It provides the full complexity measure gains that a realistic implementation would have.
>
> * For each of these objects, an extra SlowWeakMap instance is allocated as its shadow.
>
> * For each access, an extra indirection through this SlowWeakMap is therefore needed.
>
> * Only objects that have been used as keys in FastWeakMaps would ever have their [[Shadow]] set, so this could also be allocated on demand, given only a bit saying whether it is present. Besides this storage of this bit, there is no other effect or cost on any non-weakmap objects.
>
> * Since non-weakmap code doesn't need to test this bit, there is zero runtime cost on non-weakmap code.
>
> * Whether an object has been used as a key or not (and therefore whether an extra shadow has been allocated or not), normal non-weak property lookup on the object is unaffected, and pays no additional cost.
>
> A realistic implementation should seek to avoid allocating the extra shadow objects. However, even if not, we are much better off with the above scheme than we are with the current slow WeakMap.
>
> Of course, we should proceed towards realistic implementations asap and get actual empirical data. But the above demonstration establishes that the issue in this thread should be considered settled.

I appreciate your analysis, but I respectfully disagree with your conclusion.

- The extra slot and indirection for the [[Shadow]] property is an extra cost. (V8 already has a very similar mechanism to implement "hidden properties" as provided by its API, and in fact hash codes, but it is known to be too slow.)
- Optimising away this slot is difficult -- for starters, because it would have various secondary effects (e.g., every addition/removal of an object to/from a weak map would potentially result in a layout change for that object, increasing spurious polymorphism and potentially invalidating/deoptimising existing code).

- Worse, when you flatten weak properties into objects, then even GC could cause object layouts to change, which is a whole new dimension of complexity.

- On the other hand, if you do _not_ do this flattening, performance is unlikely ever to be competitive with true private properties (we internally introduced private symbols in V8 precisely because the extra indirection for hidden properties was too costly).

I'm still sceptical that we can actually resolve this dilemma, especially when this is supposed to be a solution for private state. It seems to me that the performance implications of weakness are fundamentally at odds with the desire to have efficient private state.
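For readers following along, the "transposed" representation under discussion can be sketched in user-level JavaScript. This is an illustration, not Mark's actual expository code: the module-level WeakMap standing in for the per-object [[Shadow]] internal slot, and the plain Map used as the shadow, are simplifications (in the real scheme the shadow would itself be a SlowWeakMap, so that dead FastWeakMaps don't retain their values).

```javascript
// Sketch of the transposed scheme: instead of the map owning a table
// keyed by object, each key object carries an on-demand "shadow" table
// mapping weak-map instances to values.

const shadows = new WeakMap(); // simulates the per-object [[Shadow]] slot

class FastWeakMap {
  get(key) {
    const shadow = shadows.get(key);
    return shadow === undefined ? undefined : shadow.get(this);
  }

  set(key, value) {
    let shadow = shadows.get(key);
    if (shadow === undefined) {
      // Allocated on demand: only objects actually used as weak-map
      // keys ever pay for a shadow.
      shadow = new Map();
      shadows.set(key, shadow);
    }
    shadow.set(this, value);
    return this;
  }

  has(key) {
    const shadow = shadows.get(key);
    return shadow !== undefined && shadow.has(this);
  }

  delete(key) {
    const shadow = shadows.get(key);
    return shadow !== undefined && shadow.delete(this);
  }
}
```

The point of the transposition is that when a key object dies, its shadow (and every entry in it) becomes collectable with it, without the collector having to scan any weak-map tables.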
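The private-state use case at stake here is the familiar idiom of keeping per-instance data in a module-scoped WeakMap (a common pattern, not code from this thread). Every field access goes through the weak map -- the extra indirection whose cost is contrasted above with "true" private properties flattened into the object layout:

```javascript
// Weak-map-based private state (illustrative idiom). Each access to the
// balance performs a weak-map lookup rather than loading a slot at a
// fixed offset in the object, which is what a flattened private
// property would compile to.

const balances = new WeakMap();

class Account {
  constructor(initial) {
    balances.set(this, initial);
  }
  deposit(amount) {
    balances.set(this, balances.get(this) + amount);
  }
  get balance() {
    return balances.get(this);
  }
}
```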