Do we really need the [[HasOwnProperty]] internal method and hasOwn trap

# Allen Wirfs-Brock (11 years ago)

It isn't clear to me why the [[HasOwnProperty]] internal method and the hasOwn Proxy trap need to exist as object behavior extension points.

Within my current ES6 draft, [[HasOwnProperty]] is only used within the definition of the ordinary [[HasProperty]] internal method and in the definition of Object.prototype.hasOwnProperty.

The first usage could be replaced by an inline non-polymorphic property existence check, while the latter could be replaced by a [[GetOwnProperty]] call with a check for an undefined result.
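For instance, the second replacement might look roughly like the following sketch (using Reflect.getOwnPropertyDescriptor from the Reflect API of the proxy proposal; the real built-in would also perform the usual ToObject/ToPropertyKey coercions):

    // Object.prototype.hasOwnProperty expressed as a [[GetOwnProperty]] call
    // plus a check for an undefined result (sketch only).
    function hasOwnProperty(key) {
      return Reflect.getOwnPropertyDescriptor(this, key) !== undefined;
    }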

The existence of both [[HasOwnProperty]] and [[HasProperty]] creates the possibility of objects that exhibit inconsistent results for the two operations. A reasonable expectation would seem to be:

If O.[[HasProperty]] is false, then O.[[HasOwnProperty]] should also be false.

This is in fact the case for all ordinary objects that have only ordinary objects in their inheritance chain. Now consider a proxy defined like so:

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' ? false : Reflect.hasOwn(target, key); },
      has(target, key)    { return key === 'foo' ? true  : Reflect.has(target, key); }
    });

    console.log(Reflect.hasOwn(p, "foo")); // will display: false
    console.log(Reflect.has(p, "foo"));    // will display: true

Given that [[HasOwnProperty]] is not really essential, has good alternatives, and exposes the potential for such inconsistency, I think we should just eliminate it. Does anyone want to argue that we need to keep it?

(of course, this isn't the only situation where a proxy can be defined that exhibits inconsistent responses for the internal methods. I'm thinking about some others too, but this one seems fairly straightforward to eliminate).

# Tom Van Cutsem (11 years ago)

2012/11/12 Allen Wirfs-Brock <allen at wirfs-brock.com>

It isn't clear to me why the [[HasOwnProperty]] internal method and the hasOwn Proxy trap need to exist as object behavior extension points.

[...]

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' ? false : Reflect.hasOwn(target, key); },
      has(target, key)    { return key === 'foo' ? true  : Reflect.has(target, key); }
    });

    console.log(Reflect.hasOwn(p, "foo")); // will display: false
    console.log(Reflect.has(p, "foo"));    // will display: true

I think you intended to swap the outcomes: the above is perfectly consistent with a normal object that inherits a "foo" property. The more awkward behavior would be an object that says it does have a "foo" own property, but no "foo" own or inherited property.
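For concreteness, a sketch of the swapped example (the same traps as above with the answers reversed; Reflect.hasOwn and the hasOwn trap are those of the proposal under discussion):

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' ? true  : Reflect.hasOwn(target, key); },
      has(target, key)    { return key === 'foo' ? false : Reflect.has(target, key); }
    });

    console.log(Reflect.hasOwn(p, "foo")); // true: claims an own "foo" property
    console.log(Reflect.has(p, "foo"));    // false: yet "foo" is reported as neither own nor inherited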

Given that [[HasOwnProperty]] is not really essential, has good alternatives, and exposes the potential for such inconsistency, I think we should just eliminate it. Does anyone want to argue that we need to keep it?

I would argue to keep it, on the general principle that all "derived" traps (like "hasOwn") enable a more "efficient" virtualization (concretely: they avoid unnecessary object allocations). In this particular case, the alternative solution would allocate a property descriptor only to test whether it's not undefined.

We could of course revisit this principle, but that would argue in favor of abandoning all the derived traps, as they can be dealt with similarly (see harmony:virtual_object_api).

Or we could make ad hoc exceptions on the general principle. For instance you could argue that "hasOwn" is a much less common operation than "has" (i.e. the in-operator), such that we don't need a "hasOwn" trap but we do still need a "has" trap. I'm not a fan of this option though.

(of course, this isn't the only situation where a proxy can be defined that exhibits inconsistent responses for the internal methods. I'm thinking about some others too, but this one seems fairly straightforward to eliminate).

This is a general issue with all derived traps. Derived traps avoid allocations, but always open the way to inconsistencies with the fundamental traps from which they are normally derived.

Note that the "invariant enforcement" technique still doesn't allow a proxy with non-config/non-extensibility invariants to lie about its own properties. There are post-condition assertions on both "getOwnPropertyDescriptor" and "hasOwn" that ensure this. More generally: as long as a proxy p doesn't answer "true" to Object.isFrozen(p), its behavior can be arbitrary. Only when some sensible invariants are established on the proxy's target is the proxy's behavior restrained.

# Andreas Rossberg (11 years ago)

On 12 November 2012 02:17, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

It isn't clear to me why the [[HasOwnProperty]] internal method and the hasOwn Proxy trap need to exist as object behavior extension points.

Within my current ES6 draft, [[HasOwnProperty]] is only used within the definition of the ordinary [[HasProperty]] internal method and in the definition of Object.prototype.hasOwnProperty.

The first usage could be replaced by an inline non-polymorphic property existence check, while the latter could be replaced by a [[GetOwnProperty]] call with a check for an undefined result.

The existence of both [[HasOwnProperty]] and [[HasProperty]] creates the possibility of objects that exhibit inconsistent results for the two operations. A reasonable expectation would seem to be:

If O.[[HasProperty]] is false, then O.[[HasOwnProperty]] should also be false.

This is in fact the case for all ordinary objects that have only ordinary objects in their inheritance chain. Now consider a proxy defined like so:

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' ? false : Reflect.hasOwn(target, key); },
      has(target, key)    { return key === 'foo' ? true  : Reflect.has(target, key); }
    });

    console.log(Reflect.hasOwn(p, "foo")); // will display: false
    console.log(Reflect.has(p, "foo"));    // will display: true

Well, this is only one such example; there are plenty of similar ways to screw up the object model with proxies. For example, there is no guarantee that the has, get, getOwnProperty{Descriptor,Names} traps behave consistently either. When you buy into proxies, you have to buy into that as well.

Nevertheless, I agree that [[HasOwnProperty]] and the hasOwn trap are dispensable, but do not feel strongly about it. Just looking at the traps from a programmer POV, it may feel a bit asymmetric to have hasOwn but not e.g. getOwn (although I know where the difference is coming from).

# Brandon Benvie (11 years ago)

Shouldn't it be the reverse, based on the removal of getPropertyDescriptor/Names? The proxy controls what its [[Prototype]] is, which indirectly directs how non-own lookups proceed. The functionality of has can (and usually should) be derived from proto-walking hasOwn until true or null.
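A sketch of that derivation (assuming the hasOwn trap/Reflect.hasOwn of the proposal under discussion; Object.getPrototypeOf is the ordinary ES5 operation):

    // Derive "has" by walking the prototype chain and asking hasOwn at each level.
    function derivedHas(obj, key) {
      for (let o = obj; o !== null; o = Object.getPrototypeOf(o)) {
        if (Reflect.hasOwn(o, key)) return true;  // own property found somewhere on the chain
      }
      return false;                               // reached null without a hit
    }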

# Allen Wirfs-Brock (11 years ago)

On Nov 12, 2012, at 1:52 AM, Tom Van Cutsem wrote:

2012/11/12 Allen Wirfs-Brock <allen at wirfs-brock.com>

It isn't clear to me why the [[HasOwnProperty]] internal method and the hasOwn Proxy trap need to exist as object behavior extension points.

[...]

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' ? false : Reflect.hasOwn(target, key); },
      has(target, key)    { return key === 'foo' ? true  : Reflect.has(target, key); }
    });

    console.log(Reflect.hasOwn(p, "foo")); // will display: false
    console.log(Reflect.has(p, "foo"));    // will display: true

I think you intended to swap the outcomes: the above is perfectly consistent with a normal object that inherits a "foo" property. The more awkward behavior would be an object that says it does have a "foo" own property, but no "foo" own or inherited property.

Right, I flipped the example... if hasOwn(k) is true, then has(k) should also be true.

Given that [[HasOwnProperty]] is not really essential, has good alternatives, and exposes the potential for such inconsistency, I think we should just eliminate it. Does anyone want to argue that we need to keep it?

I would argue to keep it, on the general principle that all "derived" traps (like "hasOwn") enable a more "efficient" virtualization (concretely: they avoid unnecessary object allocations). In this particular case, the alternative solution would allocate a property descriptor only to test whether it's not undefined.

Yes, but we are defining the MOP that is necessary to support the ES language; it doesn't have to do much else. At this point in time, there are no core features of the language that make independent use of [[HasOwnProperty]]/hasOwn. The only situation currently using [[HasOwnProperty]] where replacing it with [[GetOwnProperty]] would result in allocation of a property descriptor is in the implementation of O.p.hasOwnProperty. An engine's implementation of that built-in could easily short-circuit that allocation for ordinary objects.

Another approach would be to change the shape of the API and provide a single trap that is parameterized to determine whether the query was restricted to own properties:

    [[HasProperty]](P, onlyOwn)
    has: function(target, name, onlyOwn) -> boolean

If onlyOwn is true, return true if P/name is the key of an existing own property. If onlyOwn is false, return true if P/name is the key of an existing own or inherited property.
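A sketch of what a handler for such a combined trap might look like, assuming the signature above (this illustrates the proposal, not an existing API):

    let p = new Proxy({}, {
      has(target, name, onlyOwn) {
        if (name === 'foo') return true;               // virtual property: own, hence also inherited
        return onlyOwn ? Reflect.hasOwn(target, name)  // own-only query
                       : Reflect.has(target, name);    // own-or-inherited query
      }
    });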

This sort of API may seem less elegant, but it eliminates the footgun of forgetting to handle has when a proxy has overridden hasOwn (or vice versa).

In particular, I'm thinking about this in the context of the Proxy implementation, which automatically delegates to the target object if the handler does not have a method for the corresponding trap. Someone might reasonably expect that handling hasOwn is sufficient because they assume that the default implementation of has will call back to the hasOwn they provide. However, the target's [[HasProperty]] (assume for this discussion that the target is an ordinary object) will call the target's [[HasOwnProperty]], not the hasOwn handler method of the original proxy object.
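A sketch of that footgun, assuming the forwarding behavior just described (trap names as in the current proposal; the point is precisely that this behavior is surprising):

    let p = new Proxy({}, {
      hasOwn(target, key) { return key === 'foo' || Reflect.hasOwn(target, key); }
      // no "has" trap, so [[HasProperty]] is forwarded to the target
    });

    Reflect.hasOwn(p, 'foo'); // true: the handler's hasOwn runs
    'foo' in p;               // false: the forwarded [[HasProperty]] consults the target's
                              // own [[HasOwnProperty]], never the handler's hasOwn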

We could of course revisit this principle, but that would argue in favor of abandoning all the derived traps, as they can be dealt with similarly (see harmony:virtual_object_api).

Yes, I just started this discussion with [[HasProperty]]/[[HasOwnProperty]] because it is the simplest case. There are actually much worse possible inconsistencies (for example, [[GetOwnProperty]] and [[GetP]] exposing different values for the same own property).

I think the root issue relates to polymorphic dispatch of the internal methods and the transparent forwarding of unhandled traps to the Proxy target object.

Every time the spec calls an internal method it is implicitly doing a polymorphic dispatch using that object's "handler" as the "type". (We can think of ordinary objects as having an intrinsic handler that dispatches to the ordinary implementation of the internal methods.) The specification's implementation of the ordinary internal methods generally assumes that the "this object" that the internal method is operating upon is the object that did the original polymorphic dispatch (remember, no inheritance of internal methods). This means that within those internal method algorithms it can be assumed that a self-call to an internal method will dispatch to the same ordinary handler. [[HasProperty]] can self-call [[HasOwnProperty]], safely assuming that it will be using the same coordinated pair of implementations.

However, if the ordinary [[HasProperty]] is being evaluated on a Proxy target because it was forwarded by the Proxy [[HasProperty]], its "this object" is the target, not the proxy and so its call to [[HasOwnProperty]] dispatches via the target and not the proxy. We get an answer that is consistent, relative to the target object but not consistent relative to the proxy-based object that [[HasProperty]] was originally invoked upon. What we have is a situation that is very similar to that which requires the inclusion of a receiver argument in [[GetP]]/[[SetP]] calls, but along a different dimension.

Or we could make ad hoc exceptions on the general principle. For instance you could argue that "hasOwn" is a much less common operation than "has" (i.e. the in-operator), such that we don't need a "hasOwn" trap but we do still need a "has" trap. I'm not a fan of this option though.

Or, as described above, combine them into a single trap. It would solve this case but not necessarily others.

(of course, this isn't the only situation where a proxy can be defined that exhibits inconsistent responses for the internal methods. I'm thinking about some others too, but this one seems fairly straightforward to eliminate).

This is a general issue with all derived traps. Derived traps avoid allocations, but always open the way to inconsistencies with the fundamental traps from which they are normally derived.

Note that the "invariant enforcement" technique still doesn't allow a proxy with non-config/non-extensibility invariants to lie about its own properties. There are post-condition assertions on both "getOwnPropertyDescriptor" and "hasOwn" that ensure this. More generally: as long as a proxy p doesn't answer "true" to Object.isFrozen(p), its behavior can be arbitrary. Only when some sensible invariants are established on the proxy's target is the proxy's behavior restrained.

Yes, and the concern is that most Proxy uses will not be in the context of frozen objects, so in most cases the "invariant enforcement" will not help with this problem.

I've never been a big fan of dynamic invariant enforcement for proxies and I'm still not. I am concerned that the current interfaces and factoring of derived/fundamental, along with the target forwarding semantics, is a footgun that is going to lead to difficult-to-identify bugs. For now, I can spec the current factoring, but I think we should continue to look at it from this perspective and see if we can minimize the size of the footgun.

# Allen Wirfs-Brock (11 years ago)

On Nov 12, 2012, at 2:21 AM, Brandon Benvie wrote:

Shouldn't it be the reverse, based on the removal of getPropertyDescriptor/Names? The proxy controls what its [[Prototype]] is, which indirectly directs how non-own lookups proceed. The functionality of has can (and usually should) be derived from proto-walking hasOwn until true or null.

Yes, that would be a good fix for this particular problem. [[HasOwnProperty]] could remain as an internal method and HasProperty could be defined as an abstract operation that uses the [[Prototype]] internal accessor and [[HasOwnProperty]].

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

# Tom Van Cutsem (11 years ago)

2012/11/12 Allen Wirfs-Brock <allen at wirfs-brock.com>

I think the root issue relates to polymorphic dispatch of the internal methods and the transparent forwarding of unhandled traps to the Proxy target object.

Every time the spec calls an internal method it is implicitly doing a polymorphic dispatch using that object's "handler" as the "type". (We can think of ordinary objects as having an intrinsic handler that dispatches to the ordinary implementation of the internal methods.) The specification's implementation of the ordinary internal methods generally assumes that the "this object" that the internal method is operating upon is the object that did the original polymorphic dispatch (remember, no inheritance of internal methods). This means that within those internal method algorithms it can be assumed that a self-call to an internal method will dispatch to the same ordinary handler. [[HasProperty]] can self-call [[HasOwnProperty]], safely assuming that it will be using the same coordinated pair of implementations.

However, if the ordinary [[HasProperty]] is being evaluated on a Proxy target because it was forwarded by the Proxy [[HasProperty]], its "this object" is the target, not the proxy and so its call to [[HasOwnProperty]] dispatches via the target and not the proxy. We get an answer that is consistent, relative to the target object but not consistent relative to the proxy-based object that [[HasProperty]] was originally invoked upon.

Yes, this is a good summary of the issue, which I'll call the "forwarding-footgun". But let me make a further distinction as there are two ways in which an inconsistency between fundamental and derived traps can occur:

  1. a proxy handler implements a derived trap, but forgets to implement the associated fundamental trap.

  2. a proxy handler implements a fundamental trap, but forgets to implement the associated derived traps.

Case 1 is what you described above.

Case 2 can be solved by having the handler subclass the standard Handler (see harmony:virtual_object_api): by "subclassing" the Handler and overriding some fundamental traps, one is guaranteed that any derived traps will automatically call the overridden fundamental trap.
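A rough sketch of that remedy (the standard Handler and its derived-trap implementations come from the virtual_object_api strawman, so the exact names are illustrative):

    class MyHandler extends Handler {
      // Override only the fundamental trap...
      getOwnPropertyDescriptor(target, name) {
        return name === 'foo'
          ? { value: 42, writable: true, enumerable: true, configurable: true }
          : Reflect.getOwnPropertyDescriptor(target, name);
      }
      // ...and the derived hasOwn inherited from Handler calls
      // this.getOwnPropertyDescriptor, so the two answers stay consistent.
    }
    let p = new Proxy({}, new MyHandler());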

What we have is a situation that is very similar to that which requires the inclusion of a receiver argument in [[GetP]]/[[SetP]] calls, but along a different dimension.

I don't yet see how the "receiver" argument is similar to fusing derived and fundamental traps into a single trap.

Note that the "invariant enforcement" technique still doesn't allow a proxy with non-config/non-extensibility invariants to lie about its own properties. There are post-condition assertions on both "getOwnPropertyDescriptor" and "hasOwn" that ensure this. More generally: as long as a proxy p doesn't answer "true" to Object.isFrozen(p), its behavior can be arbitrary. Only when some sensible invariants are established on the proxy's target is the proxy's behavior restrained.

Yes, and the concern is that most Proxy uses will not be in the context of frozen objects, so in most cases the "invariant enforcement" will not help with this problem.

I may have been misunderstood: I didn't claim that invariant enforcement solves the forwarding-footgun, only that the inconsistent behavior caused by the forwarding-footgun is not in itself an issue. It doesn't invalidate code that relies on invariants.

I've never been a big fan of dynamic invariant enforcement for proxies and I'm still not. I am concerned that the current interfaces and factoring of derived/fundamental, along with the target forwarding semantics, is a footgun that is going to lead to difficult-to-identify bugs.

Hold on: the invariant enforcement mechanism to me feels entirely independent of the derived/fundamental trap distinction. We still need invariant enforcement even if we fuse all derived traps into the fundamentals (or get rid of the derived traps entirely).

For now, I can spec the current factoring, but I think we should continue to look at it from this perspective and see if we can minimize the size of the footgun.

I agree we should keep the options open. As I mentioned above, the 'Handler' standard object, when used appropriately, can remedy one half of the forwarding-footgun.

# Tom Van Cutsem (11 years ago)

2012/11/12 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 12, 2012, at 2:21 AM, Brandon Benvie wrote:

Shouldn't it be the reverse, based on the removal of getPropertyDescriptor/Names? The proxy controls what its [[Prototype]] is, which indirectly directs how non-own lookups proceed. The functionality of has can (and usually should) be derived from proto-walking hasOwn until true or null.

Yes, that would be a good fix for this particular problem. [[HasOwnProperty]] could remain as an internal method and HasProperty could be defined as an abstract operation that uses the [[Prototype]] internal accessor and [[HasOwnProperty]].

I also like it. So this implies that we keep "hasOwn", get rid of the "has" trap, and that |name in proxy| triggers the proxy's hasOwn() trap. If that trap returns false, the proxy's |getPrototypeOf| trap is called and the search continues.

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

I understand the goals, but I'm not a big fan of fusing traps like this. It trades complexity on one level (inconsistencies between traps) for complexity on another level (if-tests in fused trap bodies to determine what type of value to return).

Maybe I just need to get used to the fused trap signatures. I'll try to experiment by writing some code using such an API.

# Brendan Eich (11 years ago)

Tom Van Cutsem wrote:

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

I understand the goals, but I'm not a big fan of fusing traps like this. It trades complexity on one level (inconsistencies between traps) for complexity on another level (if-tests in fused trap bodies to determine what type of value to return).

Agreed. That kind of return value overloading has a smell.

# Allen Wirfs-Brock (11 years ago)

On Nov 12, 2012, at 1:20 PM, Brendan Eich wrote:

Tom Van Cutsem wrote:

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

I understand the goals, but I'm not a big fan of fusing traps like this. It trades complexity on one level (inconsistencies between traps) for complexity on another level (if-tests in fused trap bodies to determine what type of value to return).

Agreed. That kind of return value overloading has a smell.

Yes it does! But allowing separate inconsistent implementations of [[HasOwnProperty]] and [[GetOwnProperty]] enables the creation of a big mess that may spell even worse.

Particularly, in this case where [[HasOwnProperty]] adds nothing essential, as [[HasOwnProperty]] should always be equivalent to the ToBoolean of the result of [[GetOwnProperty]]. It is really just an optimization to avoid reifying a descriptor in cases where it isn't needed. From that perspective, a parameter that advises whether or not a descriptor is needed does seem too smelly. In C++ this might be expressed as two overloads for GetOwnProperty:

PropertyDescriptor* HasOwnProperty(name k) {...}

bool HasOwnProperty(name k, bool needDescriptor) {...}

# Brendan Eich (11 years ago)

Allen Wirfs-Brock wrote:

On Nov 12, 2012, at 1:20 PM, Brendan Eich wrote:

Tom Van Cutsem wrote:

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

I understand the goals, but I'm not a big fan of fusing traps like this. It trades complexity on one level (inconsistencies between traps) for complexity on another level (if-tests in fused trap bodies to determine what type of value to return).

Agreed. That kind of return value overloading has a smell.

Yes it does! But allowing separate inconsistent implementations of [[HasOwnProperty]] and [[GetOwnProperty]] enables the creation of a big mess that may spell even worse.

Spelling aside, it smells better to me.

Particularly, in this case where [[HasOwnProperty]] adds nothing essential, as [[HasOwnProperty]] should always be equivalent to the ToBoolean of the result of [[GetOwnProperty]]. It is really just an optimization to avoid reifying a descriptor in cases where it isn't needed. From that perspective, a parameter that advises whether or not a descriptor is needed does seem too smelly.

"does not"?

In C++ this might be expressed as two overloads for GetOwnProperty:

PropertyDescriptor* HasOwnProperty(name k) {...}

bool HasOwnProperty(name k, bool needDescriptor) {...}

Sorry, this is JS not C++ and even in C++ flag parameters are considered a smell in many style guides.

We now have over two years experience with derived and fundamental traps in Firefox's proxy implementation, and the hazard of inconsistent traps is minor from what I can see. Cc'ing jorendorff for his view.

# Allen Wirfs-Brock (11 years ago)

On Nov 12, 2012, at 1:00 PM, Tom Van Cutsem wrote:

2012/11/12 Allen Wirfs-Brock <allen at wirfs-brock.com>

...

Case 2 can be solved by having the handler subclass the standard Handler (see harmony:virtual_object_api): by "subclassing" the Handler and overriding some fundamental traps, one is guaranteed that any derived traps will automatically call the overridden fundamental trap.

What we have is a situation that is very similar to that which requires the inclusion of a receiver argument in [[GetP]]/[[SetP]] calls, but along a different dimension.

I don't yet see how the "receiver" argument is similar to fusing derived and fundamental traps into a single trap.

The similarity I meant was between the delegation along the [[Prototype]] chain and delegation along the [[Target]] chain.

For inheritance we need (for some operations) to know both the object from which we retrieved the method and the original this object (ie, self vs super binding) and we deal with it by passing an explicit receiver argument. What is showing up here is that an analogous situation occurs for [[Target]] delegation and we haven't accounted for that in the design.

I believe things work out better with the standard Handler because it uses the inheritance dimension to eliminate [[Target]] chain delegation.

Note that the "invariant enforcement" technique still doesn't allow a proxy with non-config/non-extensibility invariants to lie about its own properties. There are post-condition assertions on both "getOwnPropertyDescriptor" and "hasOwn" that ensure this. More generally: as long as a proxy p doesn't answer "true" to Object.isFrozen(p), its behavior can be arbitrary. Only when some sensible invariants are established on the proxy's target is the proxy's behavior restrained.

Yes, and the concern is that most Proxy uses will not be in the context of frozen objects, so in most cases the "invariant enforcement" will not help with this problem.

I may have been misunderstood: I didn't claim that invariant enforcement solves the forwarding-footgun, only that the inconsistent behavior caused by the forwarding-footgun is not in itself an issue. It doesn't invalidate code that relies on invariants.

I've never been a big fan of dynamic invariant enforcement for proxies and I'm still not. I am concerned that the current interfaces and factoring of derived/fundamental, along with the target forwarding semantics, is a footgun that is going to lead to difficult-to-identify bugs.

Hold on: the invariant enforcement mechanism to me feels entirely independent of the derived/fundamental trap distinction. We still need invariant enforcement even if we fuse all derived traps into the fundamentals (or get rid of the derived traps entirely).

I agree they are independent mechanisms. I just meant that I didn't think that trying to add dynamic enforcement of derived/fundamental consistency was an interesting path.

# Allen Wirfs-Brock (11 years ago)

On Nov 12, 2012, at 1:59 PM, Brendan Eich wrote:

Allen Wirfs-Brock wrote:

On Nov 12, 2012, at 1:20 PM, Brendan Eich wrote:

Tom Van Cutsem wrote:

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

I understand the goals, but I'm not a big fan of fusing traps like this. It trades complexity on one level (inconsistencies between traps) for complexity on another level (if-tests in fused trap bodies to determine what type of value to return).

Agreed. That kind of return value overloading has a smell.

Yes it does! But allowing separate inconsistent implementations of [[HasOwnProperty]] and [[GetOwnProperty]] enables the creation of a big mess that may spell even worse.

Spelling aside, it smells better to me.

Particularly, in this case where [[HasOwnProperty]] adds nothing essential, as [[HasOwnProperty]] should always be equivalent to the ToBoolean of the result of [[GetOwnProperty]]. It is really just an optimization to avoid reifying a descriptor in cases where it isn't needed. From that perspective, a parameter that advises whether or not a descriptor is needed does seem too smelly.

"does not"?

right "does not"

In C++ this might be expressed as two overloads for GetOwnProperty:

PropertyDescriptor* HasOwnProperty(name k) {...}

bool HasOwnProperty(name k, bool needDescriptor) {...}

Sorry, this is JS not C++ and even in C++ flag parameters are considered a smell in many style guides.

As I covered, this is really all about optimizing away the need to reify a descriptor. Often smelly things are done in the name of optimization.

We now have over two years experience with derived and fundamental traps in Firefox's proxy implementation, and the hazard of inconsistent traps is minor from what I can see. Cc'ing jorendorff for his view.

How many users of what skill levels, and how much of that code has been fully debugged and put into production use? How much of it has been with direct proxies (the issues arise because of forwarding to the target)? It isn't clear to me how much feedback we have on the current direct proxy API, particularly in regard to potential footguns.

# Brandon Benvie (11 years ago)

A better trap than getOwnPropertyDescriptor would be something like "query" which is used to look up the attributes of a property and its existence. JS shies away from constants generally, and especially numeric ones, but this is a place I have found really benefits from a bitfield.

    const ENUMERABLE = 0x01, CONFIGURABLE = 0x02, WRITABLE = 0x04, ACCESSOR = 0x08;

    const E__ = 1, _C_ = 2, EC_ = 3,
          __W = 4, E_W = 5, _CW = 6, ECW = 7,
          __A = 8, E_A = 9, _CA = 10, ECA = 11;

    function query(target, property) {
      // returns a value corresponding to the own property attributes,
      // or undefined if no own property
    }

"has" is removed in favor of hasOwn and proto walking "hasOwn" is then replaced with query, where undefined means false and any value corresponding to a flag is true "getOwnPropertyDescriptor" becomes a "query" followed by "get" or "set" if the query result isn't undefined

The one loss is this prevents custom descriptor attributes. That seems like a fringe use case to me though. The benefit is the removal of all the traffic back and forth allocating objects and internal descriptors. It also reduces the number of traps and separates the concerns of property values from property attributes.
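A hedged sketch of how such a handler might look (the query trap and its calling convention are hypothetical, as proposed above):

    const ENUMERABLE = 0x01, CONFIGURABLE = 0x02, WRITABLE = 0x04, ACCESSOR = 0x08;

    let store = new Map([['foo', 42]]);
    let p = new Proxy({}, {
      query(target, property) {
        if (!store.has(property)) return undefined;   // no own property
        return ENUMERABLE | CONFIGURABLE | WRITABLE;  // a plain data property
      },
      get(target, property, receiver) {
        return store.get(property);
      }
    });
    // An own-property check then needs no descriptor allocation:
    // "does p have an own 'foo'?" becomes query(target, 'foo') !== undefined.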

# Herby Vojčík (11 years ago)

Brandon Benvie wrote:

A better trap than getOwnPropertyDescriptor would be something like "query" which is used to look up the attributes of a property and its existence. JS shies away from constants generally, and especially numeric ones, but this is a place I have found really benefits from a bitfield.

    const ENUMERABLE = 0x01, CONFIGURABLE = 0x02, WRITABLE = 0x04, ACCESSOR = 0x08;

    const E__ = 1, _C_ = 2, EC_ = 3,
          __W = 4, E_W = 5, _CW = 6, ECW = 7,
          __A = 8, E_A = 9, _CA = 10, ECA = 11;

    function query(target, property) {
      // returns a value corresponding to the own property attributes,
      // or undefined if no own property
    }

problem is, there can be ___=0; which will evaluate to false. It would benefit from THEREISADESCRIPTOR = 0x10;

and all your enums from above being 16 higher.

And btw, no need for predefined consts, you can simply Reflect.flagsOf({enumerable:true, value:5}) returning 17, for example. This result you can (and should) const for your later use, of course.

"has" is removed in favor of hasOwn and proto walking "hasOwn" is then replaced with query, where undefined means false and any value corresponding to a flag is true "getOwnPropertyDescriptor" becomes a "query" followed by "get" or "set" if the query result isn't undefined

The one loss is this prevents custom descriptor attributes. That seems

It does not, why? If the descriptor is there, get returns them. You meant "custom queriable attributes"?

In the case of Reflect.flagsOf, it does not ;-)

# Brandon Benvie (11 years ago)

Woops I forgot ___ which is 0. Non-enumerable, non-configurable, non-writable. Since undefined is what indicates lack of a property, this isn't an issue. As for bounding, it's simple enough to specify the valid range to be 0-12. Or use strings with the names instead of numbers, though that loses the ability to combine flags via bitwise ops.

I also forgot to mention how accessors could work. In my experience using this model in place of descriptors, accessors live where the value would be. If a query indicates that a property is an accessor then the value returned from 'get' needs to conform to something like { get, set }. Another option would be to use another trap or another param to indicate 'get' is to return the getter/setter for a property instead of the calculated value. Data properties are far more common and it's worth optimizing for their use in terms of both API complexity and performance.

# David Bruant (11 years ago)

On 12/11/2012 19:17, Allen Wirfs-Brock wrote:

On Nov 12, 2012, at 2:21 AM, Brandon Benvie wrote:

Shouldn't it be the reverse, based on the removal of getPropertyDescriptor/Names? The proxy controls what its [[Prototype]] is, which indirectly directs how non-own lookups proceed. The functionality of has can (and usually should) be derived from proto-walking hasOwn until true or null.

Yes, that would be a good fix for this particular problem. [[HasOwnProperty]] could remain as an internal method and HasProperty could be defined as an abstract operation that uses the [[Prototype]] internal accessor and [[HasOwnProperty]].

Indeed. Otherwise, what happens with Object.prototype.hasOwnProperty.call(new Proxy({}, handler), 'a') would be unclear.

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

The boolean isn't needed. It's possible to get rid of the [[HasOwnProperty]] internal method and only keep [[GetOwnProperty]].

  • "Object.prototype.hasOwnProperty.call(o, name)" would be defined as ToBoolean(o.[GetOwnProperty])
  • "name in o" would be the same thing with proto-climbing Both hasOwnProperty.call and the in operator would call only the getOwnPropertyDescriptor trap, the rest would be left to implementors.

It may be weird to get rid of both the hasOwn and has traps and to have hasOwnProperty.call and the in operator call the getOwnPropertyDescriptor trap, but it's the cost of wanting consistency between [[GetOwnProperty]] and [[HasOwnProperty]]. I'm personally fine with this; I have no preference for one side or the other.

From the spec point of view, it's possible to remove [[HasOwnProperty]] as an internal method, but to define a helper function. The idea behind doing this (apparently cosmetic) change could be to reach 1-to-1 correspondence between object internals and proxy traps.

I'd like to point out that long discussions have already occurred about invariants and trap results consistency. By essence of allowing to provide functions for handler traps, it's possible to allow arbitrary inconsistencies. For instance:

 // "temporal inconsistency"
 var a = new WeirdObject();
 console.log('yo' in a); // true
 console.log('yo' in a); // false
 console.log('yo' in a); // false
 console.log('yo' in a); // true

 // "get/set" inconsistency
 var b = new WeirdObject();
 b.yo = 26;
 console.log(b.yo); // logs "what's up es-discuss!!"

These are other inconsistencies with the expectations we currently have from objects. I can't help wondering why this inconsistency (or any other inconsistency) would be considered more or less important than the one discussed in this thread. IIRC, previous discussions led to the conclusion that it's impossible to get rid of all inconsistencies. The minimum consistency required was defined for security purposes and led to the invariant enforcement checks. I'm fine if we're discussing removing some other inconsistencies, but I'd be more interested in seeing a principled way to decide which inconsistency we're getting rid of and which we still allow. In other words, what makes a given inconsistency a bad thing? The previous answer we had was in essence "if it prevents defensive programming" (Tom or Mark will correct me if I'm misinterpreting or miswording it) and with the current proxy design, it's possible to offer freedom to proxies as long as it's within the boundaries of respecting non-configurable properties and non-extensible invariants.

What would be the new answer to my question?

# Allen Wirfs-Brock (11 years ago)

On Nov 13, 2012, at 1:44 AM, David Bruant wrote:

On 12/11/2012 19:17, Allen Wirfs-Brock wrote:

On Nov 12, 2012, at 2:21 AM, Brandon Benvie wrote:

Shouldn't it be the reverse, based on the removal of getPropertyDescriptor/Names? The proxy controls what its [[Prototype]] is, which indirectly directs how non-own lookups proceed. The functionality of has can (and usually should) be derived from proto-walking hasOwn until true or null.

Yes, that would be a good fix for this particular problem. [[HasOwnProperty]] could remain as an internal method and HasProperty could be defined as an abstract operation that uses the [[Prototype]] internal accessor and [[HasOwnProperty]].

Indeed. Otherwise, what happens with Object.prototype.hasOwnProperty.call(new Proxy({}, handler), 'a') would be unclear.

O.p.hasOwnProperty can be respecified in terms of [[GetOwnProperty]]

Another potential consistency issue is between [[HasOwnProperty]] and [[GetOwnProperty]]. That could be eliminated by combining them into one operation:

[[GetOwnProperty]](name, descriptorNeeded) -> descriptor | boolean | undefined

If descriptorNeeded is true it acts as the current [[GetOwnProperty]]. If descriptorNeeded is false it acts as the current [[HasOwnProperty]].

At the handler level it makes it impossible to override "hasOwn" without also overriding "getOwnPropertyDescriptor".

The boolean isn't needed. It's possible to get rid of the [[HasOwnProperty]] internal method and only keep [[GetOwnProperty]].

  • "Object.prototype.hasOwnProperty.call(o, name)" would be defined as ToBoolean(o.[GetOwnProperty])
  • "name in o" would be the same thing with proto-climbing Both hasOwnProperty.call and the in operator would call only the getOwnPropertyDescriptor trap, the rest would be left to implementors.

There is only so much that an implementation can do. Each occurrence of an internal method call in the specification must dynamically dispatch through an object if there is any possibility that the object isn't an ordinary object. In general, internal method calls (or at least the test for an ordinary object) can't be optimized away.

It may be weird to get rid of both the hasOwn and has traps and to have hasOwnProperty.call and the in operator call the getOwnPropertyDescriptor trap, but it's the cost of wanting consistency between [[GetOwnProperty]] and [[HasOwnProperty]]. I'm personally fine with this; I have no preference for one side or the other.

I think there is agreement that [[HasOwnProperty]] is just an optimization of ToBoolean([[GetOwnProperty]]). Its only purpose is to avoid unnecessary reification of property descriptors. If that optimization isn't important we should just eliminate [[HasOwnProperty]].

From the spec point of view, it's possible to remove [[HasOwnProperty]] as an internal method, but to define a helper function. The idea behind doing this (apparently cosmetic) change could be to reach 1-to-1 correspondence between object internals and proxy traps.

and between other providers of exotic object implementations such as "native" DOM implementors.

Having a single MOP seems like an important goal to me.

I'd like to point out that long discussions have already occurred about invariants and trap results consistency. By essence of allowing to provide functions for handler traps, it's possible to allow arbitrary inconsistencies. For instance:

// "temporal inconsistency" var a = new WeirdObject(); console.log('yo' in a); // true console.log('yo' in a); // false console.log('yo' in a); // false console.log('yo' in a); // true

// "get/set" inconsistency var b = new WeirdObject(); b.yo = 26; console.log(b.yo); // logs "what's up es-discuss!!"

These are other inconsistencies with the expectations we currently have from objects. I can't help wondering why this inconsistency (or any other inconsistency) would be considered more or less important than the one discussed in this thread. IIRC, previous discussions led to the conclusion that it's impossible to get rid of all inconsistencies. The minimum consistency required was defined for security purposes and led to the invariant enforcement checks. I'm fine if we're discussing removing some other inconsistencies, but I'd be more interested in seeing a principled way to decide which inconsistency we're getting rid of and which we still allow. In other words, what makes a given inconsistency a bad thing? The previous answer we had was in essence "if it prevents defensive programming" (Tom or Mark will correct me if I'm misinterpreting or miswording it) and with the current proxy design, it's possible to offer freedom to proxies as long as it's within the boundaries of respecting non-configurable properties and non-extensible invariants.

What would be the new answer to my question?

footgun-ness

I totally agree with you. The possibility of inconsistency is inherent in this style of metaprogramming. There is no way we can practically prevent somebody from coding a proxy handler that intentionally produces inconsistent results. The lowest layers of the specification/implementations must be robust in the presence of such inconsistencies.

But we should also try to avoid making it too easy for developers to unintentionally create such inconsistencies. Proxies have an API model that (intentionally) permits handlers that implement only a subset of the traps. This is good, up to a point, but if the traps don't have orthogonal functionality it becomes easy to unintentionally introduce inconsistencies simply by leaving out a handler method. We can engineer the API to reduce the occurrence of unintended inconsistency. One thing we can do is minimize redundant traps that are expected to produce consistent results. We may also find it useful to merge traps in ways that centralize such semantics in one place. We can eliminate traps (e.g. [[Iterate]]) by promoting the functionality out of the meta layer. Overall, the surface area of the MOP should probably be as small as possible.

We've explored the security issues of proxies in depth. Now is the time to explore the usability issues. Proxies are not intended for novices, but there still will be thousands of developers trying to use proxies who are neither metaprogramming experts nor have a deep understanding of the ES MOP. We should try to minimize the number of footguns and other hazards that they have to face.

# Tom Van Cutsem (11 years ago)

2012/11/13 Allen Wirfs-Brock <allen at wirfs-brock.com>

I think there is agreement that [[HasOwnProperty]] is just an optimization of ToBoolean([[GetOwnProperty]]). Its only purpose is to avoid unnecessary reification of property descriptors. If that optimization isn't important we should just eliminate [[HasOwnProperty]].

I understand your point that on normal objects, implementations are free to avoid actually allocating a property descriptor for has or hasOwn checks. As far as the spec is concerned, the derived traps are completely unnecessary: we could hypothetically rewrite the entire spec to use only fundamental traps, and implementors would be free to cut as many corners as they want for non-proxy objects.

When I mentioned that derived traps avoid unnecessary allocations, I was really thinking about this from the point-of-view of the ES6 metaprogrammer. Say I'm creating my own virtual object abstraction whose properties reside in an external map, so I dutifully implement all the traps:

    var store = new Map();
    var p = new Proxy( { } /* target doesn't matter */ , {
      hasOwn: function(ignoreTarget, name) {
        return store.has(name);
      },
      getOwnPropertyDescriptor: function(ignoreTarget, name) {
        var e = store.get(name);
        return e === undefined ? undefined : {value: e, ...};
      },
      ...
    });

Say we remove the "hasOwn()" trap, calling the getOwnPropertyDescriptor trap instead. Now my virtual object abstraction does need to allocate a descriptor. Your proposal of adding a flag to the trap fixes that, but IMHO in a way that confuses the intent of the operation being intercepted:

    var store = new Map();
    var p = new Proxy( { } /* target doesn't matter */ , {
      getOwnPropertyDescriptor: function(ignoreTarget, name, needDescriptor) {
        if (!needDescriptor) { return store.has(name); }
        var e = store.get(name);
        return e === undefined ? undefined : {value: e, ...};
      },
      ...
    });

It's dawning on me that the main reason I dislike adding flags to the fundamental traps is that it breaks the 1-to-1 symmetry with the operations that they intercept:

Object.getOwnPropertyDescriptor(proxy, name) // traps as getOwnPropertyDescriptor(target, name)

This symmetry should make it relatively easy for an ES5 programmer to pick up the handler API. The signatures look familiar (at least for all operations that are expressed as function calls). Adding extra flags breaks the symmetry.

We've explored the security issues of proxies in depth. Now is the time to explore the usability issues. Proxies are not intended for novices, but there still will be thousands of developers trying to use proxies who are neither metaprogramming experts nor have a deep understanding of the ES MOP. We should try to minimize the number of footguns and other hazards that they have to face.

Allen, I'm completely on the same page here. I myself have previously expressed concerns about the inconsistencies introduced by derived traps. I think our disagreement lies only with how to deal with the inconsistencies.

So if I don't like adding extra flags, what's my counterproposal?

I knew I previously wrote about some of these issues, and I finally found out where: an old strawman for the old Proxy API: strawman:derived_traps_forwarding_handler.

That strawman defines two possible semantics for derived traps:

  • “forwarding” semantics: forward the derived operation to the target
  • “fallback” semantics: implement the “default” semantics in terms of the fundamental forwarding traps

"Forwarding" semantics is what you get 'out of the box' with direct proxies. To obtain "fallback" semantics for derived traps, you need to subclass Handler.

I thought this fixed the footgun, except it doesn't deal with the case where one overrides only the derived trap, but leaves the fundamental trap in "forwarding" mode.

I just realized that in a previous version of the Handler API, the fundamental traps were specced to be "abstract" methods (i.e. they would throw when called). Given that semantics, the Handler API would avoid the footgun entirely. For example:

  1. if you subclass Handler and override the derived "hasOwn" trap, but forget to implement the fundamental "getOwnPropertyDescriptor" trap, you'll get a noisy exception (instead of inconsistent behavior).
  2. if you subclass Handler and override the fundamental "getOwnPropertyDescriptor", but forget to override "hasOwn", things will just work (the subclass automatically inherits a compatible "hasOwn" trap).

So, my proposal: let's revert the fundamental traps of Handler to become abstract methods again. This forces subclasses of Handler to provide all fundamentals at once, avoiding the footgun.
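A minimal sketch of the idea (names follow the strawman; the real Handler would define many more derived traps):

    class Handler {
      // Fundamental trap: abstract, so forgetting to override it fails noisily.
      getOwnPropertyDescriptor(target, name) {
        throw new TypeError("handler must implement getOwnPropertyDescriptor");
      }
      // Derived trap: defined in terms of the fundamental, so overriding the
      // fundamental automatically keeps hasOwn consistent with it.
      hasOwn(target, name) {
        return this.getOwnPropertyDescriptor(target, name) !== undefined;
      }
    }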

I'm fully aware that this fixes the footgun only if the programmer decided to subclass Handler. I think we need to do a better job evangelizing the Handler API and calling out the benefits of using it (I now know the topic of my next blog post on proxies ;-)

# David Bruant (11 years ago)

On 13/11/2012 19:28, Allen Wirfs-Brock wrote:

On Nov 13, 2012, at 1:44 AM, David Bruant wrote:

I'm fine if we're discussing removing some other inconsistencies, but I'd be more interested in seeing a principled way to decide which inconsistency we're getting rid of and which we still allow. In other words, what makes a given inconsistency a bad thing? The previous answer we had was in essence "if it prevents defensive programming" (Tom or Mark will correct me if I'm misinterpreting or miswording it) and with the current proxy design, it's possible to offer freedom to proxies as long as it's within the boundaries of respecting non-configurable properties and non-extensible invariants.

What would be the new answer to my question?

footgun-ness

I totally agree with you. The possibility of inconsistency is inherent in this style of metaprogramming. There is no way we can practically prevent somebody from coding a proxy handler that intentionally produces inconsistent results. The lowest layers of the specification/implementations must be robust in the presence of such inconsistencies.

But we should also try to avoid making it too easy for developers to unintentionally create such inconsistencies. Proxies have an API model that (intentionally) permits handlers that implement only a subset of the traps. This is good, up to a point, but if the traps don't have orthogonal functionality it becomes easy to unintentionally introduce inconsistencies simply by leaving out a handler method. We can engineer the API to reduce the occurrence of unintended inconsistency. One thing we can do is minimize redundant traps that are expected to produce consistent results. We may also find it useful to merge traps in ways that centralize such semantics in one place. We can eliminate traps (e.g. [[Iterate]]) by promoting the functionality out of the meta layer. Overall, the surface area of the MOP should probably be as small as possible.

We've explored the security issues of proxies in depth. Now is the time to explore the usability issues.

Very good point.

Proxies are not intended for novices, but there still will be thousands of developers trying to use proxies who are neither metaprogramming experts nor have a deep understanding of the ES MOP. We should try to minimize the number of footguns and other hazards that they have to face.

Ok, this rationale sounds good to me. My original impression was that this thread was taking one inconsistency out of the blue, but I'm completely fine with what you just stated.

# David Bruant (11 years ago)

On 13/11/2012 21:25, Tom Van Cutsem wrote:

2012/11/13 Allen Wirfs-Brock <allen at wirfs-brock.com>

I think there is agreement that [[HasOwnProperty]] is just an optimization of ToBoolean([[GetOwnProperty]]). Its only purpose is to avoid unnecessary reification of property descriptors. If that optimization isn't important we should just eliminate [[HasOwnProperty]].

I understand your point that on normal objects, implementations are free to avoid actually allocating a property descriptor for has or hasOwn checks. As far as the spec is concerned, the derived traps are completely unnecessary: we could hypothetically rewrite the entire spec to use only fundamental traps, and implementors would be free to cut as many corners as they want for non-proxy objects.

When I mentioned that derived traps avoid unnecessary allocations, I was really thinking about this from the point-of-view of the ES6 metaprogrammer. Say I'm creating my own virtual object abstraction whose properties reside in an external map, so I dutifully implement all the traps:

    var store = new Map();
    var p = new Proxy( { } /* target doesn't matter */ , {
      hasOwn: function(ignoreTarget, name) {
        return store.has(name);
      },
      getOwnPropertyDescriptor: function(ignoreTarget, name) {
        var e = store.get(name);
        return e === undefined ? undefined : {value: e, ...};
      },
      ...
    });

Say we remove the "hasOwn()" trap, calling the getOwnPropertyDescriptor trap instead. Now my virtual object abstraction does need to allocate a descriptor.

For the particular case you've written, when going for hasOwnProperty.call or the in operator, the JS engine knows it needs to output a boolean, so it can "rewrite" (or contextually compile) your trap's last line as "e !== undefined" (since "undefined" is falsy and objects created by object literals are truthy). In that particular case, the allocation isn't necessary provided some simple static analysis. Maybe type inference can be of some help to prevent this allocation in more dynamic/complicated cases too. I would really love to have implementors' POV here.

Your proposal of adding a flag to the trap fixes that, but IMHO in a way that confuses the intent of the operation being intercepted:

    var store = new Map();
    var p = new Proxy( { } /* target doesn't matter */ , {
      getOwnPropertyDescriptor: function(ignoreTarget, name, needDescriptor) {
        if (!needDescriptor) { return store.has(name); }
        var e = store.get(name);
        return e === undefined ? undefined : {value: e, ...};
      },
      ...
    });

It's dawning on me that the main reason I dislike adding flags to the fundamental traps is that it breaks the 1-to-1 symmetry with the operations that they intercept:

Object.getOwnPropertyDescriptor(proxy, name) // traps as getOwnPropertyDescriptor(target, name)

This symmetry should make it relatively easy for an ES5 programmer to pick up the handler API. The signatures look familiar (at least for all operations that are expressed as function calls). Adding extra flags breaks the symmetry.

I agree. As I suggested in my other answer, the flag isn't necessary. Very much like it's not for the internal operation.

# Erik Arvidsson (11 years ago)

On Tue, Nov 13, 2012 at 3:25 PM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:

So, my proposal: let's revert the fundamental traps of Handler to become abstract methods again. This forces subclasses of Handler to provide all fundamentals at once, avoiding the footgun.

Maybe we should skip the derived traps for ES6 and see if the extra allocation really becomes an issue in reality? This way we are keeping the API smaller and the risk of errors lower. We can always add the derived traps later, can't we?

# Allen Wirfs-Brock (11 years ago)

On Nov 13, 2012, at 12:25 PM, Tom Van Cutsem wrote:

2012/11/13 Allen Wirfs-Brock <allen at wirfs-brock.com>

I think there is agreement that [[HasOwnProperty]] is just an optimization of ToBoolean([[GetOwnProperty]]). Its only purpose is to avoid unnecessary reification of property descriptors. If that optimization isn't important we should just eliminate [[HasOwnProperty]].

I understand your point that on normal objects, implementations are free to avoid actually allocating a property descriptor for has or hasOwn checks. As far as the spec is concerned, the derived traps are completely unnecessary: we could hypothetically rewrite the entire spec to use only fundamental traps, and implementors would be free to cut as many corners as they want for non-proxy objects.

When I mentioned that derived traps avoid unnecessary allocations, I was really thinking about this from the point-of-view of the ES6 metaprogrammer. Say I'm creating my own virtual object abstraction whose properties reside in an external map, so I dutifully implement all the traps:

var store = new Map();
var p = new Proxy({ } /* target doesn't matter */, {
  hasOwn: function(ignoreTarget, name) { return store.has(name); },
  getOwnPropertyDescriptor: function(ignoreTarget, name) {
    var e = store.get(name);
    return e === undefined ? undefined : {value: e, ...};
  },
  ...
});

Say we remove the "hasOwn()" trap, calling the getOwnPropertyDescriptor trap instead. Now my virtual object abstraction does need to allocate a descriptor. Your proposal of adding a flag to the trap fixes that, but IMHO in a way that confuses the intent of the operation being intercepted:

var store = new Map();
var p = new Proxy({ } /* target doesn't matter */, {
  getOwnPropertyDescriptor: function(ignoreTarget, name, needDescriptor) {
    if (!needDescriptor) { return store.has(name); }
    var e = store.get(name);
    return e === undefined ? undefined : {value: e, ...};
  },
  ...
});

It's dawning on me that the main reason I dislike adding flags to the fundamental traps is that it breaks the 1-to-1 symmetry with the operations that they intercept:

Object.getOwnPropertyDescriptor(proxy, name) // traps as getOwnPropertyDescriptor(target, name)

This symmetry should make it relatively easy for an ES5 programmer to pick up the handler API. The signatures look familiar (at least for all operations that are expressed as function calls). Adding extra flags breaks the symmetry.

Interesting, the 1-to-1 symmetry I've been looking for is between the internal methods and the proxy traps. To me, that is the MOP, and things like Object.getOwnPropertyDescriptor are a convenience layer a level above the MOP. From that perspective, it feels fine to me to have a MOP operation (a trap or internal method), called say, CheckOwnProperty, that has a flag argument that controls whether you want to get a property descriptor or a boolean result. That MOP operation can be used at the language level to implement the in operator and other language-level semantics, and at the library level to implement convenience functions such as O.p.hasOwnProperty and Object.getOwnPropertyDescriptor. The MOP programmer (whether implementing ordinary objects in the implementation, host objects using low-level extension points, or using Proxy to implement exotic objects) sits in the middle between those two layers. All they really need to know about is the MOP.
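A rough sketch of what such a combined operation could look like for a Map-backed exotic object (checkOwnProperty and its flag are hypothetical names taken from the paragraph above, not part of any draft):

// Hypothetical single MOP operation with a flag; an exotic object backed
// by a Map can answer the boolean question without allocating a descriptor.
var store = new Map();
var handler = {
  checkOwnProperty: function(ignoreTarget, key, wantDescriptor) {
    if (!wantDescriptor) return store.has(key);        // boolean result
    var e = store.get(key);
    return e === undefined ? undefined
                           : { value: e, writable: true,
                               enumerable: true, configurable: true };
  }
};

// One layer up, the conveniences would be defined in terms of it:
//   key in obj                                -> checkOwnProperty(..., false) plus a proto climb
//   Object.getOwnPropertyDescriptor(obj, key) -> checkOwnProperty(..., true)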

Looking at it another way: the existing reflection APIs (Object.*, O.p.hasOwnProperty, etc.) weren't really designed as a conceptual unit. They aren't a complete or self-consistent MOP. I'm not sure we can design a good MOP that doesn't differ in significant ways from those existing APIs. That doesn't worry me too much if we can get a consistent MOP that manifests itself in the spec, in the Proxy traps, and in the Reflect.* APIs. Then we would have a clean foundation that can also be used to define the semantics of the existing, less consistent APIs and to build new higher-level facilities in the future.

We've explored the security issues of proxies in depth. Now is the time to explore the usability issues. Proxies are not intended for novices, but there will still be thousands of developers trying to use proxies who are neither metaprogramming experts nor have a deep understanding of the ES MOP. We should try to minimize the number of footguns and other hazards that they have to face.

Allen, I'm completely on the same page here. I myself have previously expressed concerns about the inconsistencies introduced by derived traps. I think our disagreement lies only with how to deal with the inconsistencies.

And I'm not even so sure we disagree on how to deal with them. I've just been exploring alternatives. Condensing partially redundant traps into a single trap with more parameters is just one approach, and not something I'm particularly in love with. I'm mostly interested in finding a more usable, less footgun-ish alternative to parts of the current design.

So if I don't like adding extra flags, what's my counterproposal?

I knew I previously wrote about some of these issues, and I finally found out where: an old strawman for the old Proxy API: strawman:derived_traps_forwarding_handler.

That strawman defines two possible semantics for derived traps:

  • “forwarding” semantics: forward the derived operation to the target
  • “fallback” semantics: implement the “default” semantics in terms of the fundamental forwarding traps

"Forwarding" semantics is what you get 'out of the box' with direct proxies. To obtain "fallback" semantics for derived traps, you need to subclass Handler.

I thought this fixed the footgun, except it doesn't deal with the case where one overrides only the derived trap, but leaves the fundamental trap in "forwarding" mode.

I just realized that in a previous version of the Handler API, the fundamental traps were specced to be "abstract" methods (i.e. they would throw when called). Given that semantics, the Handler API would avoid the footgun entirely. For example:

  1. if you subclass Handler and override the derived "hasOwn" trap, but forget to implement the fundamental "getOwnPropertyDescriptor" trap, you'll get a noisy exception (instead of inconsistent behavior).
  2. if you subclass Handler and override the fundamental "getOwnPropertyDescriptor", but forget to override "hasOwn", things will just work (the subclass automatically inherits a compatible "hasOwn" trap).

So, my proposal: let's revert the fundamental traps of Handler to become abstract methods again. This forces subclasses of Handler to provide all fundamentals at once, avoiding the footgun.

I'm fully aware that this fixes the footgun only if the programmer decided to subclass Handler. I think we need to do a better job evangelizing the Handler API and calling out the benefits of using it (I now know the topic of my next blog post on proxies ;-)
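A minimal sketch of the behavior being described, assuming a Handler class whose fundamental traps throw when not overridden (Handler here is the class from the wiki proposal; the Map-backed example is made up):

class MapBackedHandler extends Handler {
  constructor(store) { super(); this.store = store; }

  // Fundamental trap.  If we forgot to provide it, Handler's abstract
  // version would throw (case 1), instead of silently forwarding to the
  // meaningless target and producing inconsistent answers.
  getOwnPropertyDescriptor(target, name) {
    var e = this.store.get(name);
    return e === undefined ? undefined
                           : { value: e, writable: true,
                               enumerable: true, configurable: true };
  }

  // Case 2: no need to override the derived hasOwn/has traps; Handler
  // derives them from getOwnPropertyDescriptor, so they stay consistent.
}

var p = new Proxy({}, new MapBackedHandler(new Map([['foo', 42]])));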

This sounds like a quite reasonable approach to explore. Note that subclassing isn't all that much more ugly to write:

var p = new Proxy({ } /* target doesn't matter */, new class extends Handler {
  getOwnPropertyDescriptor(ignoreTarget, name, needDescriptor) { ... }
  ...
});

Also, something I have been thinking about is moving some derived traps out of the handler interface and into Reflect. For example, in the spec I currently have [[HasOwnProperty]] as an internal method and HasProperty as an abstract operation that takes an object and a property key as arguments. It doesn't need to be handler-dispatched because the proto-chain walking algorithm is so generic. A simple function could be added to Reflect.
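Roughly along these lines (a sketch only: hasProperty is not an existing Reflect function, and the walk is written using the fundamental operations that do exist on the proposed Reflect module):

// A generic, handler-independent proto-chain walk.  Each step goes through
// the object's own fundamental behavior (its getOwnPropertyDescriptor /
// getPrototypeOf traps, if it happens to be a proxy).
function hasProperty(target, key) {
  var obj = target;
  while (obj !== null) {
    if (Reflect.getOwnPropertyDescriptor(obj, key) !== undefined) {
      return true;
    }
    obj = Reflect.getPrototypeOf(obj);
  }
  return false;
}
// This could then be exposed as Reflect.hasProperty rather than being a
// per-handler trap.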

# Allen Wirfs-Brock (11 years ago)

On Nov 13, 2012, at 12:39 PM, David Bruant wrote:

Le 13/11/2012 19:28, Allen Wirfs-Brock a écrit :

... We've explored the security issues of proxies in depth. Now is the time to explore the usability issues.

Very good point.

Proxies are not intended for novices, but there will still be thousands of developers trying to use proxies who are neither metaprogramming experts nor have a deep understanding of the ES MOP. We should try to minimize the number of footguns and other hazards that they have to face.

Ok, this rationale sounds good to me. My original impression was that this thread was taking one inconsistency out of the blue, but I'm completely fine with what you just stated.

I just started with [[HasOwnProperty]] because it was the simplest of the issues that I noticed. Figured I'd see how it plays out before tackling some of the others.

I think Tom in his discussion of fundamental and derived traps is getting closer to the core issues.

# Allen Wirfs-Brock (11 years ago)

On Nov 13, 2012, at 12:52 PM, David Bruant wrote:

Le 13/11/2012 21:25, Tom Van Cutsem a écrit :

2012/11/13 Allen Wirfs-Brock <allen at wirfs-brock.com>

I think there is agreement that [[HasOwnProperty]] is just an optimization of ToBoolean([[GetOwnProperty]]). Its only purpose is to avoid unnecessary reification of property descriptors. If that optimization isn't important, we should just eliminate [[HasOwnProperty]].

I understand your point that on normal objects, implementations are free to avoid actually allocating a property descriptor for has or hasOwn checks. As far as the spec is concerned, the derived traps are completely unnecessary: we could hypothetically rewrite the entire spec to use only fundamental traps, and implementors would be free to cut as many corners as they want for non-proxy objects.

When I mentioned that derived traps avoid unnecessary allocations, I was really thinking about this from the point-of-view of the ES6 metaprogrammer. Say I'm creating my own virtual object abstraction whose properties reside in an external map, so I dutifully implement all the traps:

var store = new Map();
var p = new Proxy({ } /* target doesn't matter */, {
  hasOwn: function(ignoreTarget, name) { return store.has(name); },
  getOwnPropertyDescriptor: function(ignoreTarget, name) {
    var e = store.get(name);
    return e === undefined ? undefined : {value: e, ...};
  },
  ...
});

Say we remove the "hasOwn()" trap, calling the getOwnPropertyDescriptor trap instead. Now my virtual object abstraction does need to allocate a descriptor.

For the particular case you've written, when going for hasOwnProperty.call or the in operator, the JS engine knows it needs to output a boolean, so it can "rewrite" (or contextually compile) your trap's last line as "e !== undefined" (since "undefined" is falsy and objects created by object literals are truthy). In that particular case, the allocation isn't necessary provided some simple static analysis. Maybe type inference can be of some help to prevent this allocation in more dynamic/complicated cases too. I would really love to have implementors' POV here.

If you can trace/inline across internal method calls and are smart enough in your analysis, you probably can eliminate the allocation. The internal method calls come into play because you may have Proxies or other exotic objects on a [[Prototype]] chain, so things that involve proto climbing will bounce back and forth between internal methods and trap handlers.

My general philosophy has always been that when programming in an object-based language, a programmer shouldn't be afraid of creating and using objects. It's the implementation's job to make sure that perspective is justified. Given everything else that is likely to be going on in a rich proxy, this particular point may be unnecessary early optimization.

Regardless, my main concern isn't this optimization issue, but instead a concern about it being too easy to inadvertently define an internally inconsistent Proxy.

# Brendan Eich (11 years ago)

Erik Arvidsson wrote:

On Tue, Nov 13, 2012 at 3:25 PM, Tom Van Cutsem <tomvc.be at gmail.com <mailto:tomvc.be at gmail.com>> wrote:

So, my proposal: let's revert the fundamental traps of Handler to
become abstract methods again. This forces subclasses of Handler
to provide all fundamentals at once, avoiding the footgun.

Maybe we should skip the derived traps for ES6 and see if the extra allocation really becomes an issue in reality? This way we are keeping the API smaller and the risk of errors lower. We can always add the derived traps later, can't we?

Allocations always matter, in my experience -- even with V8.

Also Tom's point about precision counts, independent of allocations. A "hasOwn" test in the MOP should not call the same thing that Object.getOwnPropertyDescriptor calls, if we can help it -- especially without a flag argument to hint the difference, but that flag argument is blecherous anyway.

# Tom Van Cutsem (11 years ago)

2012/11/13 David Bruant <bruant.d at gmail.com>

For the particular case you've written, when going for hasOwnProperty.call or the in operator, the JS engine knows it needs to output a boolean, so it can "rewrite" (or contextually compile) your trap's last line as "e !== undefined" (since "undefined" is falsy and objects created by object literals are truthy). In that particular case, the allocation isn't necessary provided some simple static analysis. Maybe type inference can be of some help to prevent this allocation in more dynamic/complicated cases too. I would really love to have implementors' POV here.

I'm very skeptical of this.

If I may summarize, you're arguing that we can get away with spurious allocations in handler traps because JS implementations can in theory optimize (e.g. partially evaluate trap method bodies). I think that's wishful thinking. Not that implementors can't do this, I just think the cost/benefit ratio is way too high.

# Tom Van Cutsem (11 years ago)

2012/11/13 Erik Arvidsson <erik.arvidsson at gmail.com>

On Tue, Nov 13, 2012 at 3:25 PM, Tom Van Cutsem <tomvc.be at gmail.com>wrote:

So, my proposal: let's revert the fundamental traps of Handler to become abstract methods again. This forces subclasses of Handler to provide all fundamentals at once, avoiding the footgun.

Maybe we should skip the derived traps for ES6 and see if the extra allocation really becomes an issue in reality? This way we are keeping the API smaller and the risk of errors lower. We can always add the derived traps later, can't we?

I would subscribe to this point of view if only all derived traps were as trivial to remove as the "has" or "hasOwn" traps that were brought up by Allen in this thread.

Looking at the full list of fundamental vs derived traps < harmony:virtual_object_api>, it quickly becomes apparent that there are far more derived than fundamental traps. It'll be a challenge to try and get rid of all derived traps.

Now scroll down to the "non-normative implementation" section of that wiki page and have a look at the default implementation of "hasOwn". It's only 3 lines, making it by far the shortest derived trap. Many of the other ones (e.g. keys() and enumerate()), perform spurious property descriptor allocations in a for-loop, making the allocation overhead linear instead of constant.
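For reference, the default implementations being contrasted look roughly like this (paraphrased from the wiki page's non-normative code; the exact code there may differ, and `this` is assumed to be a handler that also supplies the fundamental traps):

// Paraphrase of the default (derived) trap implementations:
var defaultDerivedTraps = {
  // hasOwn: three lines, one possibly-spurious descriptor allocation.
  hasOwn: function(target, name) {
    return this.getOwnPropertyDescriptor(target, name) !== undefined;
  },
  // keys: one descriptor allocation per own property, so the spurious
  // allocation overhead is linear rather than constant.
  keys: function(target) {
    var names = this.getOwnPropertyNames(target);
    var result = [];
    for (var i = 0; i < names.length; i++) {
      var desc = this.getOwnPropertyDescriptor(target, names[i]);
      if (desc !== undefined && desc.enumerable) { result.push(names[i]); }
    }
    return result;
  }
};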

As Allen mentioned himself, the "hasOwn" trap was a good case to bootstrap the discussion because of its simplicity: it's easy to understand the inconsistencies, it's easy to get rid of, and the extra allocation cost is minimal. The story is not so simple for the other derived traps.

All of this is just to push back on the assumption that getting rid of all derived traps would be an easy thing to do.

# Tom Van Cutsem (11 years ago)

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 13, 2012, at 12:25 PM, Tom Van Cutsem wrote:

It's dawning on me that the main reason I dislike adding flags to the fundamental traps is that it breaks the 1-to-1 symmetry with the operations that they intercept:

Object.getOwnPropertyDescriptor(proxy, name) // traps as getOwnPropertyDescriptor(target, name)

This symmetry should make it relatively easy for an ES5 programmer to pick up the handler API. The signatures look familiar (at least for all operations that are expressed as function calls). Adding extra flags breaks the symmetry.

Interesting, the 1-to-1 symmetry I've been looking for is between the internal methods and the proxy traps. To me, that is the MOP, and things like Object.getOwnPropertyDescriptor are a convenience layer a level above the MOP. From that perspective, it feels fine to me to have a MOP operation (a trap or internal method), called say, CheckOwnProperty, that has a flag argument that controls whether you want to get a property descriptor or a boolean result. That MOP operation can be used at the language level to implement the in operator and other language-level semantics, and at the library level to implement convenience functions such as O.p.hasOwnProperty and Object.getOwnPropertyDescriptor. The MOP programmer (whether implementing ordinary objects in the implementation, host objects using low-level extension points, or using Proxy to implement exotic objects) sits in the middle between those two layers. All they really need to know about is the MOP.

I think I understand your point of view. I'll try to restate it differently:

You are arguing for 2 levels of abstraction ("layers"):

  • Most JS developers are "base-level" programmers: they use ES5 APIs like Object.getOwnPropertyDescriptor to reflect on their objects.
  • A small fraction of JS developers will be "meta-level" programmers: these include at least proxy authors and host object authors. These programmers would use a different API, i.e. the MOP, which consists of methods such as a "checkOwnProperty" trap and presumably a corresponding Reflect.checkOwnProperty method. There would be a correspondence between the base-level API and the internal MOP methods, but this correspondence is not a straightforward 1-to-1 correspondence.

Now, there are two routes to optimize for, with opposite pay-offs:

  1. Make the MOP API correspond as much as possible to the base-level ES5 API. This benefits ES5 programmers, who can easily pick up the meta-level API as it's recognizable, at the expense of ES6 meta-level programmers, who must deal with a larger API surface (incl. the burden to keep fundamental and derived traps in sync).

  2. Make the MOP API as minimal as possible. This benefits ES6 meta-programmers, but at the expense of increasing the learning curve for ES5 programmers to become ES6 meta-level programmers (because the difference between base-level and meta-level API will necessarily be bigger)

Approach 1) corresponds roughly to the current setup with fundamental + derived traps. Approach 2) corresponds roughly to "provide only fundamental traps".

Now consider an ES6 meta-programmer interested in intercepting the "in"-operator.

Given approach 1) she can look at the full list of traps (e.g. the list at the top of harmony:direct_proxies) and quickly figure out that she must implement the "has" trap.

Given approach 2), our meta-programmer must figure out that the "in"-operator in fact calls the "getOwnPropertyDescriptor" trap instead, and tests for its result to be undefined or not.

In effect, approach 2) requires that meta-programmers understand the default implementation of the derived traps (i.e. this code: < harmony:virtual_object_api#non-normative_implementation>) before being able to intercept a derived operation.
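To make the contrast concrete, a sketch of intercepting the in-operator under both approaches (the handlers below are illustrative; approach 2 assumes the derived "has" trap no longer exists and that 'in' is specified in terms of the fundamental trap):

// Approach 1: a dedicated derived trap maps directly onto the operator.
var p1 = new Proxy({}, {
  has: function(target, name) { return name === 'foo'; }
});
// 'foo' in p1   // true

// Approach 2: the meta-programmer must know that 'in' is derived from
// getOwnPropertyDescriptor (plus a proto climb) and implement that instead.
var p2 = new Proxy({}, {
  getOwnPropertyDescriptor: function(target, name) {
    return name === 'foo'
      ? { value: 42, writable: true, enumerable: true, configurable: true }
      : undefined;
  }
});
// 'foo' in p2   // true under this approach, via the derived default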

To be fair, if our meta-programmer in approach 1) implements the "has" trap, and that trap is not a simple before/after/around method that forwards to the target, then she must also override "getOwnPropertyDescriptor" to return consistent results. If she subclassed Handler (and we make the fundamental traps abstract methods again), she'll get an exception prompting her to investigate further. If she did not subclass Handler, she just shot herself in the foot. So eventually, she will have to learn about the distinction between fundamental and derived methods anyway.

The difference between approach 1) and 2) is at what point you must learn about the differences.

Approach 1) favors ES5 programmers. Approach 2) favors ES6 meta-programmers. I think we should optimize for ES5 programmers.

# Tom Van Cutsem (11 years ago)

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

Regardless, my main concern isn't this optimization issue, but instead a concern about it being too easy to inadvertently define an internally inconsistent Proxy.

Indeed. So let's get back to alternative solutions to avoid the inconsistent Proxy issue.

As I mentioned previously, I think knowing about the Handler API, and when to use it, is key:

  • You don't need to subclass Handler if you are building proxy abstractions that only do before/after/around-style wrapping of a target object. Even if you implement only some of the traps and leave the others in "forwarding mode", as even your own traps eventually "forward", the resulting proxy is still consistent (see the sketch after this list).

  • You want to subclass Handler if you are building proxies that are either not backed by a wrapped target object, or you do want to wrap an object but do not want to forward all operations to it. At that point, you want to make sure that no trap lingers accidentally in the "forwarding" state. The Handler API (with abstract fundamental traps) ensures this.

  • We don't want meta-programmers to have to know the intricate details of the default trap implementations. Instead, all a meta-programmer subclassing Handler should care about are the dependencies between fundamental and derived traps, i.e. "for each derived trap, what fundamentals must I provide?". At this point we don't yet have good docs that document this relationship.
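As an illustration of the first bullet, a pure wrapping handler that never subclasses Handler (the logging wrapper is made up; untouched traps stay in forwarding mode):

// Only the traps we care about are supplied; everything else forwards to
// the target by default, so the proxy stays internally consistent.
function makeLoggingProxy(target) {
  return new Proxy(target, {
    get: function(target, name, receiver) {
      console.log('get', name);
      return Reflect.get(target, name, receiver);   // forward
    },
    set: function(target, name, value, receiver) {
      console.log('set', name);
      return Reflect.set(target, name, value, receiver);   // forward
    }
    // has, hasOwn, getOwnPropertyDescriptor, ...: default forwarding
  });
}

var wrapped = makeLoggingProxy({x: 1});
wrapped.x;   // logs: get x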

At its core, this "footgun" is not specific to Javascript proxies: it's an old problem in OO frameworks in general, whose documentation may state: "if you subclass class C, and you override method "foo", make sure you also override method "bar" consistently!". I'm sure you've had way more practical experience with exactly this issue in developing OO frameworks than I ever have :-)

# Andreas Rossberg (11 years ago)

On 14 November 2012 09:30, Tom Van Cutsem <tomvc.be at gmail.com> wrote:

2012/11/13 David Bruant <bruant.d at gmail.com>

For the particular case you've written, when going for hasOwnProperty.call or the in operator, the JS engine knows it needs to output a boolean, so it can "rewrite" (or contextually compile) your trap's last line as "e !== undefined" (since "undefined" is falsy and objects created by object literals are truthy). In that particular case, the allocation isn't necessary provided some simple static analysis. Maybe type inference can be of some help to prevent this allocation in more dynamic/complicated cases too. I would really love to have implementors' POV here.

I'm very skeptical of this.

If I may summarize, you're arguing that we can get away with spurious allocations in handler traps because JS implementations can in theory optimize (e.g. partially evaluate trap method bodies). I think that's wishful thinking. Not that implementors can't do this, I just think the cost/benefit ratio is way too high.

I agree, this is a particularly daring instance of the "sufficiently smart compiler" argument.

# Brendan Eich (11 years ago)

Tom Van Cutsem wrote:

At its core, this "footgun" is not specific to Javascript proxies: it's an old problem in OO frameworks in general, whose documentation may state: "if you subclass class C, and you override method "foo", make sure you also override method "bar" consistently!". I'm sure you've had way more practical experience with exactly this issue in developing OO frameworks than I ever have :-)

Agreed. Some OO languages preclude super calls except to the same-named method as the one making the call, because they want all peer method calls to be virtual in the C++ sense. This goes too far in my view, though. Sometimes when calling a super-method you need to call a related super-peer-method. ES6 classes with super allow this.

There's no silver bullet for this footgun ;-).

# David Bruant (11 years ago)

Le 14/11/2012 07:35, Brendan Eich a écrit :

Also Tom's point about precision counts, independent of allocations. A "hasOwn" test in the MOP should not call the same thing that Object.getOwnPropertyDescriptor calls, if we can help it

I don't think it's possible. This thread started with the intention of making has/hasOwn consistent with [[GetOwnProperty]]. If has or hasOwn is distinguishable from gOPD in any way (different trap, flag argument, etc.), then it reopens the door to has/hasOwn being inconsistent with gOPD. If we choose to enforce that has/hasOwn be consistent with gOPD, then I think gOPD should remain as it currently is. In my experience, I've never needed to make the distinction or had a use case where it was relevant to have a specific has/hasOwn behavior.

hOP and the "in" operator can be seen as conveniences on top of Object.gOPD (and Object.getPrototypeOf). "in" would just be syntactic sugar. What I'm saying is historically untrue (since both hOP and 'in' predate Object.gOPD), but that's not an absurd way of thinking of the ES5 object model.

# Allen Wirfs-Brock (11 years ago)

On Nov 14, 2012, at 1:22 AM, Tom Van Cutsem wrote:

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 13, 2012, at 12:25 PM, Tom Van Cutsem wrote:

It's dawning on me that the main reason I dislike adding flags to the fundamental traps is that it breaks the 1-to-1 symmetry with the operations that they intercept:

Object.getOwnPropertyDescriptor(proxy, name) // traps as getOwnPropertyDescriptor(target, name)

This symmetry should make it relatively easy for an ES5 programmer to pick up the handler API. The signatures look familiar (at least for all operations that are expressed as function calls). Adding extra flags breaks the symmetry.

Interesting, the 1-to-1 symmetry I've been looking for is between the internal methods and the proxy traps. To me, that is the MOP, and things like Object.getOwnPropertyDescriptor are a convenience layer a level above the MOP. From that perspective, it feels fine to me to have a MOP operation (a trap or internal method), called say, CheckOwnProperty, that has a flag argument that controls whether you want to get a property descriptor or a boolean result. That MOP operation can be used at the language level to implement the in operator and other language-level semantics, and at the library level to implement convenience functions such as O.p.hasOwnProperty and Object.getOwnPropertyDescriptor. The MOP programmer (whether implementing ordinary objects in the implementation, host objects using low-level extension points, or using Proxy to implement exotic objects) sits in the middle between those two layers. All they really need to know about is the MOP.

I think I understand your point of view. I'll try to restate it differently:

You are arguing for 2 levels of abstraction ("layers"):

  • Most JS developers are "base-level" programmers: they use ES5 APIs like Object.getOwnPropertyDescriptor to reflect on their objects.
  • A small fraction of JS developers will be "meta-level" programmers: these include at least proxy authors and host object authors. These programmers would use a different API, i.e. the MOP, which consists of methods such as a "checkOwnProperty" trap and presumably a corresponding Reflect.checkOwnProperty method. There would be a correspondence between the base-level API and the internal MOP methods, but this correspondence is not a straightforward 1-to-1 correspondence.

Now, there are two routes to optimize for, with opposite pay-offs:

  1. Make the MOP API correspond as much as possible to the base-level ES5 API. This benefits ES5 programmers, who can easily pick up the meta-level API as it's recognizable, at the expense of ES6 meta-level programmers, who must deal with a larger API surface (incl. the burden to keep fundamental and derived traps in sync).

  2. Make the MOP API as minimal as possible. This benefits ES6 meta-programmers, but at the expense of increasing the learning curve for ES5 programmers to become ES6 meta-level programmers (because the difference between base-level and meta-level API will necessarily be bigger)

Approach 1) corresponds roughly to the current setup with fundamental + derived traps. Approach 2) corresponds roughly to "provide only fundamental traps".

Yes, as far as you've gone here. But the MOP isn't the whole metaprogramming story. The MOP only defines the essential behaviors of objects. It really doesn't say anything about the semantics of language constructs including how they make use of the MOP.

Now consider an ES6 meta-programmer interested in intercepting the "in"-operator.

This is an attempt to extend the language at the operator level, not an appropriate job for the MOP (ie, a Proxy). If we want to support operator overloading/extension that should be via a different mechanism.

Given approach 1) she can look at the full list of traps (e.g. the list at the top of harmony:direct_proxies) and quickly figure out that she must implement the "has" trap.

But this might have effects far beyond just the in operator, and it still might not work. Unless it is explicitly specified (in an accessible place), how does she know that "in" uses "has" instead of "hasOwn" or "getOwnPropertyDescriptor" plus an explicit proto climb? Also, what other language features depend upon "has"? You can't just implement "has" for the sole effect of intercepting the "in"-operator.

Given approach 2), our meta-programmer must figure out that the "in"-operator in fact calls the "getOwnPropertyDescriptor" trap instead, and tests for its result to be undefined or not.

In effect, approach 2) requires that meta-programmers understand the default implementation of the derived traps (i.e. this code: harmony:virtual_object_api#non-normative_implementation) before being able to intercept a derived operation.

In either case, successfully accomplishing the goal of intercepting "in" (and nothing else) for a particular object requires understanding of how the semantics of "in" (and potentially every other language feature) map to the Proxy MOP.

The MOP needs to be sufficient for defining the essential semantics of an ECMAScript object. But a MOP programmer (an exotic object designer) can't anticipate all the ways in which those semantics are going to be exercised. The ways in which the MOP strings of an object can be pulled are open-ended. This is particularly true given Reflect.* functions that essentially provide direct ES-level access for making individual MOP calls on an arbitrary object.

The MOP/Proxies are about extending the kinds of available objects, not about extending the semantics of the ES language. Exotic object implementors need to focus on fulfilling the full object semantics contract defined by the MOP. If they do that, the language should take care of itself.

To be fair, if our meta-programmer in approach 1) implements the "has" trap, and that trap is not a simple before/after/around method that forwards to the target, then she must also override "getOwnPropertyDescriptor" to return consistent results. If she subclassed Handler (and we make the fundamental traps abstract methods again), she'll get an exception prompting her to investigate further. If she did not subclass Handler, she just shot herself in the foot. So eventually, she will have to learn about the distinction between fundamental and derived methods anyway.

I think I've already said, that Handler subclassing looks like a good approach.

The difference between approach 1) and 2) is at what point you must learn about the differences.

Approach 1) favors ES5 programmers. Approach 2) favors ES6 meta-programmers. I think we should optimize for ES5 programmers.

I think that if you are defining a Proxy you are an ES6 meta-programmer. No way around it. That's why it is important to make the task of defining Proxy handlers as footgun-free as possible.

# Allen Wirfs-Brock (11 years ago)

On Nov 14, 2012, at 1:25 AM, Tom Van Cutsem wrote:

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

Regardless, my main concern isn't this optimization issue, but instead a concern about it being too easy to inadvertently define an internally inconsistent Proxy.

Indeed. So let's get back to alternative solutions to avoid the inconsistent Proxy issue.

As I mentioned previously, I think knowing about the Handler API, and when to use it, is key:

  • You don't need to subclass Handler if you are building proxy abstractions that only do before/after/around-style wrapping of a target object. Even if you implement only some of the traps and leave the others in "forwarding mode", as even your own traps eventually "forward", the resulting proxy is still consistent.

Yes, a very good point, and one that isn't necessarily obvious in the present APIs and documentation. I can imagine ways to make this more explicit in the APIs, for example by providing before_has, after_has, etc. entry points in the handler interface, or by having an additional WrappingProxy constructor that takes both a before handler and an after handler (and other embellishments to tie them together).
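One possible shape for such a constructor (entirely hypothetical; the name WrappingProxy and the before/after hook convention are just the idea from the paragraph above, not an existing API):

// Build an ordinary forwarding handler whose traps call an optional
// "before" hook and an optional "after" hook around the forwarded operation.
function WrappingProxy(target, before, after) {
  function wrap(trapName) {
    return function(target, ...args) {
      if (before && before[trapName]) before[trapName](target, ...args);
      var result = Reflect[trapName](target, ...args);
      if (after && after[trapName]) after[trapName](target, result, ...args);
      return result;
    };
  }
  var handler = {};
  ['get', 'set', 'has', 'deleteProperty',
   'getOwnPropertyDescriptor', 'defineProperty'].forEach(function(name) {
    handler[name] = wrap(name);
  });
  return new Proxy(target, handler);
}

// Usage: log every [[Get]] before it is forwarded to the target.
var p = WrappingProxy({x: 1},
                      { get: function(t, name) { console.log('before get', name); } },
                      {});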

  • You want to subclass Handler if you are building proxies that are either not backed by a wrapped target object, or you do want to wrap an object but do not want to forward all operations to it. At that point, you want to make sure that no trap lingers accidentally in the "forwarding" state. The Handler API (with abstract fundamental traps) ensures this.

Can we ensure that programmers do this? Perhaps by requiring that all handler objects are branded as such. This would most directly be accomplished via inheriting the brand.

  • We don't want meta-programmers to have to know the intricate details of the default trap implementations. Instead, all a meta-programmer subclassing Handler should care about are the dependencies between fundamental and derived traps, i.e. "for each derived trap, what fundamentals must I provide?". At this point we don't yet have good docs that document this relationship.

More generally stated, what are the invariants and post-conditions of each abstract trap.

At its core, this "footgun" is not specific to Javascript proxies: it's an old problem in OO frameworks in general, whose documentation may state: "if you subclass class C, and you override method "foo", make sure you also override method "bar" consistently!". I'm sure you've had way more practical experience with exactly this issue in developing OO frameworks than I ever have :-)

Yup, this is a fine example of why it is important to carefully distinguish between the external client contract (in this case, what a client can expect from each trap call) and the subclass/extension contract (how the implementation of the traps is factored and interdependent, and which invariants must be maintained when extending the implementation).

# Allen Wirfs-Brock (11 years ago)

On Nov 13, 2012, at 10:35 PM, Brendan Eich wrote:

Erik Arvidsson wrote:

On Tue, Nov 13, 2012 at 3:25 PM, Tom Van Cutsem <tomvc.be at gmail.com <mailto:tomvc.be at gmail.com>> wrote:

So, my proposal: let's revert the fundamental traps of Handler to become abstract methods again. This forces subclasses of Handler to provide all fundamentals at once, avoiding the footgun.

Maybe we should skip the derived traps for ES6 and see if the extra allocation really becomes an issue in reality? This way we are keeping the API smaller and the risk of errors lower. We can always add the derived traps later, can't we?

Allocations always matter, in my experience -- even with V8.

Also Tom's point about precision counts, independent of allocations. A "hasOwn" test in the MOP should not call the same thing that Object.getOwnPropertyDescriptor calls, if we can help it -- especially without a flag argument to hint the difference, but that flag argument is blecherous anyway.

A less blecherous flag might be as follows

getOwnPropertyDescriptor: function(target, name, populate=true) -> desc | undefined

If populate is false, the returned property descriptor doesn't need to be populated with the properties that describe attributes. In that case it must be a frozen object.

This allows a preallocated descriptor to be used to indicate the existence of a property. It also removes the blechiness of using an argument to steer the return type of the function.
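A sketch of how a handler could take advantage of this (the populate flag and the frozen-descriptor convention come from the proposal above, not from any accepted API):

// One preallocated, frozen object reused for every existence-only check.
var EXISTS = Object.freeze({});
var store = new Map();
var handler = {
  getOwnPropertyDescriptor: function(ignoreTarget, name, populate) {
    if (!store.has(name)) return undefined;
    if (!populate) return EXISTS;          // no per-call allocation
    var e = store.get(name);
    return { value: e, writable: true, enumerable: true, configurable: true };
  }
};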

# Tom Van Cutsem (11 years ago)

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 14, 2012, at 1:22 AM, Tom Van Cutsem wrote:

Given approach 1) she can look at the full list of traps (e.g. the list at the top of harmony:direct_proxies) and quickly figure out that she must implement the "has" trap.

But this might have effects far beyond just the in operator, and it still might not work. Unless it is explicitly specified (in an accessible place), how does she know that "in" uses "has" instead of "hasOwn" or "getOwnPropertyDescriptor" plus an explicit proto climb? Also, what other language features depend upon "has"? You can't just implement "has" for the sole effect of intercepting the "in"-operator.

Agreed, the MOP methods serve multiple purposes, with many different operators potentially triggering the same MOP method. What I was getting at was that the derived traps enable more direct mappings for some operations that can be readily understood by ES5 programmers.

# Tom Van Cutsem (11 years ago)

2012/11/14 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 14, 2012, at 1:25 AM, Tom Van Cutsem wrote:

  • You don't need to subclass Handler if you are building proxy abstractions that only do before/after/around-style wrapping of a target object. Even if you implement only some of the traps and leave the others in "forwarding mode", as even your own traps eventually "forward", the resulting proxy is still consistent.

Yes, a very good point, and one that isn't necessarily obvious in the present APIs and documentation.

I guess that depends on how you look at it.

Granted, the current documentation is lacking in this respect. Of course, most Proxy docs were written with specification in mind, and were not written from the point of view of how to use the API effectively.

On the other hand, there are many aspects of the direct proxies API that clearly signal the wrapping use case: the fact that traps default to forwarding, the fact that all traps receive the wrapped "target" object as first argument, the fact that the Reflect object makes it particularly easy to forward intercepted operations, etc.

I always like to categorize proxy abstractions in two groups:

  1. proxies that wrap other objects (revokable references, membranes, tracers, contracts, ...)
  2. proxies that don't wrap other objects directly ("virtual" objects) (e.g. test mock ups, unresolved promises, remote objects, ...)

The old proxy API clearly favored use case (2) over (1). One can view the entire shift from old proxies to direct proxies as a way to better support (1) over (2). And this shows: the footgun that was raised in this thread essentially derives from the difficulty of building virtual objects with direct proxies, which favor forwarding. With the old API, things were exactly the other way around.

With the old API, we had the "ForwardingHandler" to more easily support use case (1). With the direct proxies API, we have the "Handler" (previously called "VirtualHandler") to more easily support use case (2).

I think that's striking because both the ForwardingHandler and VirtualHandler APIs really grew organically from their respective Proxy APIs. When Mark and I started designing direct proxies, it wasn't clear to us (at least not to me) at that time that we needed to replace the old ForwardingHandler (a handler that facilitates forwarding) with its exact dual (a handler that prevents forwarding).

# David Bruant (11 years ago)

Le 14/11/2012 10:34, Andreas Rossberg a écrit :

On 14 November 2012 09:30, Tom Van Cutsem <tomvc.be at gmail.com> wrote:

2012/11/13 David Bruant <bruant.d at gmail.com>

For the particular case you've written, when going for hasOwnProperty.call or the in operator, the JS engine knows it needs to output a boolean, so it can "rewrite" (or contextually compile) your trap's last line as "e !== undefined" (since "undefined" is falsy and objects created by object literals are truthy). In that particular case, the allocation isn't necessary provided some simple static analysis. Maybe type inference can be of some help to prevent this allocation in more dynamic/complicated cases too. I would really love to have implementors' POV here.

I'm very skeptical of this.

If I may summarize, you're arguing that we can get away with spurious allocations in handler traps because JS implementations can in theory optimize (e.g. partially evaluate trap method bodies). I think that's wishful thinking. Not that implementors can't do this, I just think the cost/benefit ratio is way too high.

I agree, this is a particularly daring instance of the "sufficiently smart compiler" argument.

I've been thinking about this message for a few days and I think I've made up my mind about it. In a lot of our discussions, we mention performance. As far as implementors are concerned, the incentive is removing the biggest bottlenecks as they are noticed. That's probably why a lot of optimizations related to strict mode haven't been leveraged.

I think that what this means for us (people who are discussing the evolution of the language) is that we can't know which optimizations will actually be implemented. We can think about the evolution of the language in a way that prevents design-induced performance issues.

The cost/benefit ratio of a "sufficiently smart compiler" is not really up to people on this list, and especially not today. It will have to be decided by implementors in due time, if they ever notice something to be a bottleneck. Maybe, even without the static analysis I mentioned, the extra allocation Tom described would never be considered a bottleneck worth optimizing, and that could be fine.

So my position is let's not bake performance bottlenecks in the language (which we tend to do naturally anyway) and for things that can be optimized, let's say that the implementations will pay the cost of the static/runtime analysis if it's really felt by them like a worthwhile idea.

# Brendan Eich (11 years ago)

David Bruant wrote:

So my position is let's not bake performance bottlenecks in the language (which we tend to do naturally anyway) and for things that can be optimized, let's say that the implementations will pay the cost of the static/runtime analysis if it's really felt by them like a worthwhile idea.

No, sorry. Allocations will always cost. It's unphysical to argue otherwise. I've been burned too many times by this, even if it's a small constant overhead. Those add up, as David Bacon used to point out to good effect.

# David Bruant (11 years ago)

Le 19/11/2012 12:37, Brendan Eich a écrit :

David Bruant wrote:

So my position is let's not bake performance bottlenecks in the language (which we tend to do naturally anyway) and for things that can be optimized, let's say that the implementations will pay the cost of the static/runtime analysis if it's really felt by them like a worthwhile idea.

No, sorry. Allocations will always cost. It's unphysical to argue otherwise. I've been burned too many times by this, even if it's a small constant overhead. Those add up, as David Bacon used to point out to good effect.

I made a point where I suggested that with type inference (whose cost has already been paid), the extra allocation could be avoided. My point is that it would be up to implementors to decide whether using this type information is worthwhile compared to the extra allocation. This extra allocation burns only if it actually occurs.

I wish to point out a little thought on the topic of memory management. As far as I know, all GC algorithms are runtime algorithms, meaning that the primitives of these algorithms are objects and references between objects. I have never heard of a memory management system that would take advantage of source code information to not allocate memory if it's proven to be unused after allocation (or to allocate less if it's proven only part will be used). Is it a stupid idea? Too much effort? The conjunction of two research areas whose people usually don't talk to one another?

# Brendan Eich (11 years ago)

David Bruant wrote:

I have never heard of a memory management system that would take advantage of source code information to not allocate memory if it's proven to be unused after allocation (or to allocate less if it's proven only part will be used). Is it a stupid idea? Too much effort? The conjunction of two research areas whose people usually don't talk to one another?

If you have to ask, the answer is "yes".

Monica Lam had a student, whose name escapes me, who did a static memory analysis thesis, "Clouseau", which could analyze for such cases and optimize away allocations. This is along the lines of the "sufficiently smart compiler".

# Andreas Rossberg (11 years ago)

On 19 November 2012 13:04, David Bruant <bruant.d at gmail.com> wrote:

I wish to point out a little thought on the topic of memory management. As far as I know, all GC algorithms are runtime algorithms, meaning that the primitives of these algorithms are objects and references between objects. I have never heard of a memory management system that would take advantage of source code information to not allocate memory if it's proven to be unused after allocation (or to allocate less if it's proven only part will be used). Is it a stupid idea? Too much effort? The conjunction of two research areas whose people usually don't talk to one another?

Search for "region inference" or "region-based memory management". Was a hot topic in the late 90s, but ultimately the cost/benefit ratio turned out to be not so clear.

# Claus Reinke (11 years ago)

On 19 November 2012 13:04, David Bruant <bruant.d at gmail.com> wrote:

I wish to point out a little thought on the topic of memory management. As far as I know, all GC algorithms are runtime algorithms, meaning that the primitives of these algorithms are objects and references between objects. I have never heard of a memory management system that would take advantage of source code information to not allocate memory if it's proven to be unused after allocation (or to allocate less if it's proven only part will be used). Is it a stupid idea? Too much effort? The conjunction of two research areas whose people usually don't talk to one another?

Search for "region inference" or "region-based memory management". Was a hot topic in the late 90s, but ultimately the cost/benefit ratio turned out to be not so clear.

Also "compile-time garbage collection" or "compile-time memory management". Then there is the whole area of "linear types" or "uniqueness types", which allow for in-place updating (reusing memory) without observable side-effects when absence of other references can be proven statically. Perhaps also "fusion", which avoids the allocation of intermediate structures, when it can be proven statically that construction will be followed immediately by deconstruction.

Claus

# Brendan Eich (11 years ago)

Claus Reinke wrote:

Also "compile-time garbage collection" or "compile-time memory management". Then there is the whole area of "linear types" or "uniqueness types",

"affine types"

which allow for in-place updating (reusing memory) without observable side-effects when absence of other references can be proven statically. Perhaps also "fusion", which avoids the allocation of intermediate structures, when it can be proven statically that construction will be followed immediately by deconstruction.

This is all great fun, but really off-target for JS. JS has no type system, lots of aliasing, and a never-ending need for speed.

# Claus Reinke (11 years ago)

Also "compile-time garbage collection" or "compile-time memory management". Then there is the whole area of "linear types" or "uniqueness types",

"affine types"

which allow for in-place updating (reusing memory) without observable side-effects when absence of other references can be proven statically. Perhaps also "fusion", which avoids the allocation of intermediate structures, when it can be proven statically that construction will be followed immediately by deconstruction.

This is all great fun, but really off-target for JS. JS has no type system, lots of aliasing, and a never-ending need for speed.

Seems I haven't replied yet. You do not necessarily need a language level type system to profit from compile-time memory management.

I don't have any references but -at the time when soft typing had made the rounds but static typing seemed to rule the functional programming world in terms of speed- there were some approaches for working on internal byte code generated from dynamically typed source, to approximate some of the performance gains of source level static analysis.

One way of looking at it was to make information providers and consumers explicit as instructions in the byte code, then to try to move related instructions closer to each other by byte code transformations.

If you managed to bring together a statement that made an abstract register a string and a test that checked for string-ness, then you could drop the test. If you managed to bring together statements for boxing and unboxing, you could drop the pair, saving memory traffic. If you managed to limit the scope of a set of reference count manipulations, you could replace them by bulk updates. No matter how many references there are to an unknown object coming in, after you make a copy, and until you pass out a reference to that copy, you know you have the only copy and might be able to update in place.

Most of these weren't quite as effective as doing proper static analysis in a language designed for it, but such tricks allowed implementations of dynamically typed functional languages to come close to the performance of statically typed language implementations.

Claus