possible excessive proxy invariants for Object.keys/etc??

# Tom Van Cutsem (13 years ago)

2012/11/18 Allen Wirfs-Brock <allen at wirfs-brock.com>

The proxy spec for Object.getOwnPropertyNames/keys/etc. seems to be doing quite a bit more than this. They:

  1. always copy the array returned from the trap. Why is this necessary? Sure, the author of a trap should probably always return a fresh object, but not doing so doesn't violate the integrity of the frozen/sealed invariants. In most cases they will provide a fresh object, and copying adds unnecessary work, proportional to the number of names, to every such call.

The copying is to ensure: a) that the result is an Array, b) that all the elements of the result are Strings, and c) the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

If we don't care about any of a, b and c, then the result array wouldn't need to be copied.
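For illustration, the freshness guarantee (c) can be observed with the trap API as it eventually shipped, where the trap is named `ownKeys` (a modern name, not the draft terminology used in this thread). The engine copies the trap result into a fresh array, so a handler that retains a reference to its own array cannot mutate a result already handed to a client:

```javascript
// The handler keeps a reference to the array it returns from the trap.
const retained = ['foo'];
const proxy = new Proxy({}, {
  ownKeys() { return retained; }
});

// The engine coerces/copies the trap result into a fresh Array of keys.
const first = Object.getOwnPropertyNames(proxy);

// Mutating the handler's retained array afterwards...
retained.push('bar');

// ...does not affect the result the client already received.
console.log(first);               // ['foo']
console.log(first === retained);  // false — a fresh copy was made

// A later call does observe the mutation, of course.
const second = Object.getOwnPropertyNames(proxy);
console.log(second);              // ['foo', 'bar']
```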

  2. ensuring that the list of property keys contains no duplicates. Why is this essential? Again, I don't see what it has to do with the integrity of the frozen/sealed invariants. It is extra and probably unnecessary work that is at least proportional to the number of names.

We've been going back and forth over whether or not we wanted to prevent duplicates. I remember Andreas Gal being concerned about these kinds of issues (that was when he was doing the first Firefox prototype for old proxies, in the context of the enumerate() trap, which was called during a live for-in loop). IIRC, Firefox already did de-dupe checks, as properties already enumerated in a child object should not be re-visited when visiting a parent object.

More recently, at the last TC39 meeting in Boston, we decided to change the return type of the enumerate() trap from Array[String] to Iterator, and in doing so waiving the duplicate properties check. Quoting from the "Open issues" section of harmony:direct_proxies#open_issues:

"Enumerate trap signature: consider making the enumerate() trap return an iterator rather than an array of strings. To retain the benefits of an iterator (no need to store collection in memory), we might need to waive the duplicate properties check. Resolution: accepted (duplicate properties check is waived in favor of iterator return type)"

I guess if duplicate properties are not crucial for enumeration, they're also not crucial for Object.getOwnPropertyNames and Object.keys and can be dropped. Mark, can you comment?
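For what it's worth, the duplicate check was eventually reinstated for the key-listing trap in the standard as shipped. This can be observed with the modern `ownKeys` trap name (today's API, not the draft semantics debated here):

```javascript
// A handler whose key-listing trap returns duplicate entries.
const dup = new Proxy({}, {
  ownKeys() { return ['a', 'a']; }
});

let error = null;
try {
  Object.getOwnPropertyNames(dup);
} catch (e) {
  error = e;
}
// Modern engines reject duplicate entries in the ownKeys trap result.
console.log(error instanceof TypeError);  // true
```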

  3. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so, according to the algorithm, they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted to reliably introspect on a frozen object's list of property names:

```js
Object.isFrozen(proxy);             // true
Object.getOwnPropertyNames(proxy);  // ['foo']
Object.getOwnPropertyNames(proxy);  // []
```

Here, the 'foo' property apparently disappeared on a frozen object.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.
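The invariant check that prevents the disappearing-property scenario can be seen concretely with the modern `ownKeys` trap name (an assumption relative to the draft under discussion): on a frozen target, a trap that omits a non-configurable property is rejected outright rather than being allowed to make 'foo' vanish:

```javascript
const target = Object.freeze({ foo: 1 });
const lying = new Proxy(target, {
  ownKeys() { return []; }  // tries to hide the non-configurable 'foo'
});

let error = null;
try {
  Object.getOwnPropertyNames(lying);
} catch (e) {
  error = e;
}
// The trap result must include all non-configurable own keys of the target.
console.log(error instanceof TypeError);  // true
```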

  4. Every own property of the target is observably looked up (possibly a second time) even if the object is extensible and has no non-configurable properties.

We may be able to remove the redundancy of two lookups by restructuring the algorithm. There previously was some redundancy in other checks as well.

It isn't clear to me if any of this work is really necessary to ensure integrity. After all, what can you do with any of these names other than use them as the property key argument to some other trap/internal method such as [[SetP]], [[DefineOwnProperty]], etc.? Called on a proxy, those fundamental operations are going to enforce the integrity invariants of the actual properties involved, so the name checks don't really seem to be adding anything essential.

Perhaps we can just get rid of all the above checking. It seems like a good idea to me.

Alternatively, it suggests that a [[GetNonConfigurablePropertyNames]] internal method/trap would be a useful call to have as the integrity invariants only care about non-configurable properties. That would significantly limit the work in the case where there are none and limit the observable trap calls to only the non-configurable properties.

That would be one way to speed things up.
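In today's terms, the information the hypothetical [[GetNonConfigurablePropertyNames]] operation would report can be approximated in userland with the Reflect API (a sketch; the internal-method name is Allen's proposal, not a real API):

```javascript
// Hypothetical approximation of [[GetNonConfigurablePropertyNames]]:
// only the non-configurable own keys matter for the integrity invariants.
function getNonConfigurableOwnKeys(obj) {
  return Reflect.ownKeys(obj).filter(key => {
    const desc = Reflect.getOwnPropertyDescriptor(obj, key);
    return desc !== undefined && !desc.configurable;
  });
}

const o = {};
Object.defineProperty(o, 'fixed', { value: 1, configurable: false });
Object.defineProperty(o, 'loose', { value: 2, configurable: true });
console.log(getNonConfigurableOwnKeys(o));  // ['fixed']
```

In the common case where an object has no non-configurable properties, such an operation would return an empty list immediately, which is the work-limiting property Allen is after.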

I share your concern that the current spec algorithm may be a bit too restrictive for implementations to allow optimizations. It's very imperative. We should ask the spec implementors to have a look at the draft spec and share their concerns.

# Allen Wirfs-Brock (13 years ago)

On Nov 19, 2012, at 10:04 AM, Tom Van Cutsem wrote:

Hi Allen,

2012/11/18 Allen Wirfs-Brock <allen at wirfs-brock.com> The proxy spec for Object.getOwnPropertyNames/keys/etc. seems to be doing quite a bit more than this. They:

  1. always copy the array returned from the trap. Why is this necessary? Sure, the author of a trap should probably always return a fresh object, but not doing so doesn't violate the integrity of the frozen/sealed invariants. In most cases they will provide a fresh object, and copying adds unnecessary work, proportional to the number of names, to every such call.

The copying is to ensure: a) that the result is an Array

Why is Array-ness essential? It was what ES5 specified, but ES5 had a fixed implementation of Object.getOwnPropertyNames/keys, so that requirement was simply a reflection of what the spec's algorithms actually did. Most places in the specification that consume an "array" only require array-like-ness, not an actual array. Now that we're making the implementation of getOPN/keys extensible, it would be natural to relax the requirement that they produce a [[Class]]==="Array" object.

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the returned elements as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

getOwnPropertyDescriptor is also on my list to talk to you about, for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be an object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property descriptor object properties as is. This seems like exactly the right thing. Such exotic property descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient) with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec draft.

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

If we don't care about any of a, b and c, then the result array wouldn't need to be copied.

yes, I believe that should be the case.

  2. ensuring that the list of property keys contains no duplicates. Why is this essential? Again, I don't see what it has to do with the integrity of the frozen/sealed invariants. It is extra and probably unnecessary work that is at least proportional to the number of names.

We've been going back and forth over whether or not we wanted to prevent duplicates. I remember Andreas Gal being concerned about these kinds of issues (that was when he was doing the first Firefox prototype for old proxies, in the context of the enumerate() trap, which was called during a live for-in loop). IIRC, Firefox already did de-dupe checks, as properties already enumerated in a child object should not be re-visited when visiting a parent object.

More recently, at the last TC39 meeting in Boston, we decided to change the return type of the enumerate() trap from Array[String] to Iterator, and in doing so waiving the duplicate properties check. Quoting from the "Open issues" section of harmony:direct_proxies#open_issues:

"Enumerate trap signature: consider making the enumerate() trap return an iterator rather than an array of strings. To retain the benefits of an iterator (no need to store collection in memory), we might need to waive the duplicate properties check. Resolution: accepted (duplicate properties check is waived in favor of iterator return type)"

I guess if duplicate properties are not crucial for enumeration, they're also not crucial for Object.getOwnPropertyNames and Object.keys and can be dropped. Mark, can you comment?

good

  3. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so, according to the algorithm, they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks; the constraints imposed upon implementations by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps; and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted to reliably introspect on a frozen object's list of property names:

```js
Object.isFrozen(proxy);             // true
Object.getOwnPropertyNames(proxy);  // ['foo']
Object.getOwnPropertyNames(proxy);  // []
```

Here, the 'foo' property apparently disappeared on a frozen object.

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, but then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

  4. Every own property of the target is observably looked up (possibly a second time) even if the object is extensible and has no non-configurable properties.

We may be able to remove the redundancy of two lookups by restructuring the algorithm. There previously was some redundancy in other checks as well.

It isn't clear to me if any of this work is really necessary to ensure integrity. After all, what can you do with any of these names other than use them as the property key argument to some other trap/internal method such as [[SetP]], [[DefineOwnProperty]], etc.? Called on a proxy, those fundamental operations are going to enforce the integrity invariants of the actual properties involved, so the name checks don't really seem to be adding anything essential.

Perhaps we can just get rid of all the above checking. It seems like a good idea to me.

Alternatively, it suggests that a [[GetNonConfigurablePropertyNames]] internal method/trap would be a useful call to have as the integrity invariants only care about non-configurable properties. That would significantly limit the work in the case where there are none and limit the observable trap calls to only the non-configurable properties.

That would be one way to speed things up.

I share your concern that the current spec algorithm may be a bit too restrictive for implementations to allow optimizations. It's very imperative. We should ask the spec implementors to have a look at the draft spec and share their concerns.

Maybe we should move this discussion to es-discuss. What do you think?

# Tom Van Cutsem (13 years ago)

2012/11/19 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 19, 2012, at 10:04 AM, Tom Van Cutsem wrote:

The copying is to ensure: a) that the result is an Array

Why is Array-ness essential? It was what ES5 specified, but ES5 had a fixed implementation of Object.getOwnPropertyNames/keys, so that requirement was simply a reflection of what the spec's algorithms actually did. Most places in the specification that consume an "array" only require array-like-ness, not an actual array. Now that we're making the implementation of getOPN/keys extensible, it would be natural to relax the requirement that they produce a [[Class]]==="Array" object.

It's all a matter of developer expectations and how much leeway we have in changing the "return type" of existing built-ins. In theory, it's a backwards-incompatible change. In practice, it may not matter.

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the returned elements as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

The "early" type coercions at the exit of the traps can be thought of in these terms: the invariant checks will always blame the proxy handler for producing wrong data, at the point where the wrong data is produced and before it's exposed to client code. Clients are expecting ES5/6 objects to obey an implicit "contract".

getOwnPropertyDescriptor is also on my list to talk to you about, for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be an object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property descriptor object properties as is. This seems like exactly the right thing. Such exotic property descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient) with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec draft.

It's all a matter of how much invariants we care to give up.

I think it's risky to allow descriptors to be returned whose attributes may be accessors, thus potentially fooling much existing ES5 code that often does simple tests like:

```js
if (!desc.writable && !desc.configurable) {
  // we can assume the property is immutable, so we can cache its value
  var val = desc.value;
  // ...
}
```

You can no longer rely on the correctness of such code if desc can't be guaranteed to be a plain old object containing only data properties.
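To make the hazard concrete, here is a descriptor-like object whose value attribute is secretly an accessor. (A plain object stands in for an unnormalized trap result here, since the real Object.getOwnPropertyDescriptor does normalize.) Code like the snippet above would cache a value that is not actually stable:

```javascript
// A descriptor-like object whose 'value' is secretly an accessor.
let reads = 0;
const sneakyDesc = {
  writable: false,
  configurable: false,
  get value() { return reads++; }
};

// An ES5-style caller caches the supposedly immutable value...
const cached = sneakyDesc.value;  // 0
// ...but a later read observes a different value.
const again = sneakyDesc.value;   // 1
console.log(cached === again);    // false — the "immutable" value changed
```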

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

The returned copy, even if it is mutable, is fresh, meaning that no one else yet has a pointer to it. Hence it can't be modified by the proxy handler.

If we don't make a copy, but we do check the result for invariants, here's what may happen:

  1. proxy checks whether the returned array contains all non-configurable properties of the target
  2. let's assume this is the case, so the proxy returns this array to the client
  3. the proxy handler kept a reference to the array, and mutates it, e.g. by removing a non-configurable property
  4. client is now wrongly under the assumption that the returned array contains all non-configurable properties
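The four steps above can be sketched as follows (naiveCheck is a stand-in for the spec's invariant check, performed here without a defensive copy):

```javascript
// Stand-in for the invariant check: verify all required (non-configurable)
// keys are present, then return the *same* array — no defensive copy.
function naiveCheck(trapResult, requiredKeys) {
  for (const key of requiredKeys) {
    if (!trapResult.includes(key)) throw new TypeError(`missing ${key}`);
  }
  return trapResult;  // the handler still holds a reference to this array
}

const handlerArray = ['foo'];                      // step 1: check passes
const result = naiveCheck(handlerArray, ['foo']);  // step 2: returned to client
handlerArray.length = 0;                           // step 3: handler mutates it
console.log(result.includes('foo'));               // step 4: false — subverted
```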
  3. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so, according to the algorithm, they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks; the constraints imposed upon implementations by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps; and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

I agree about the cost. The question is whether we regard these invariants as essential or not. To me, they are the common-sense invariants an ES5 programmer would expect.

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted to reliably introspect on a frozen object's list of property names:

```js
Object.isFrozen(proxy);             // true
Object.getOwnPropertyNames(proxy);  // ['foo']
Object.getOwnPropertyNames(proxy);  // []
```

Here, the 'foo' property apparently disappeared on a frozen object.

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

It's not the copying alone that guarantees it, it's the invariant check that 'foo' is in the result + the copying that guarantees it.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, but then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

The problem with, say, a non-trappable [[GetNonConfigurableOwnProperties]] internal method, is that membranes can't adapt. We were previously in this situation with the [[Extensible]] internal property. Eventually we ended up trapping Object.isExtensible and Object.preventExtensions, even though these traps can't actually return a result that is different from applying the operation directly to the target. They're still useful to trap, as the proxy may then respond to these operations and change the state of its own target before returning a result.

I'm pretty sure introducing a new non-trappable method is going to lead to the same problems for membranes.
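The isExtensible/preventExtensions precedent Tom mentions looks like this in the API as eventually shipped (modern trap names, used here for illustration): the trap cannot report an answer that differs from the target's state, but it does get a chance to act on its target before answering:

```javascript
const target = {};
const proxy = new Proxy(target, {
  preventExtensions(t) {
    // The trap may only return true if the target really is made
    // non-extensible — this is its chance to react to the operation.
    Reflect.preventExtensions(t);
    return true;
  }
});

Object.preventExtensions(proxy);
console.log(Reflect.isExtensible(target));  // false — the trap acted on it
console.log(Object.isExtensible(proxy));    // false — consistent with target
```

Had the trap returned true without actually making the target non-extensible, the operation would throw a TypeError, which is exactly the "can't lie, but can react" shape described above.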

  4. Every own property of the target is observably looked up (possibly a second time) even if the object is extensible and has no non-configurable properties.

We may be able to remove the redundancy of two lookups by restructuring the algorithm. There previously was some redundancy in other checks as well.

It isn't clear to me if any of this work is really necessary to ensure integrity. After all, what can you do with any of these names other than use them as the property key argument to some other trap/internal method such as [[SetP]], [[DefineOwnProperty]], etc.? Called on a proxy, those fundamental operations are going to enforce the integrity invariants of the actual properties involved, so the name checks don't really seem to be adding anything essential.

Perhaps we can just get rid of all the above checking. It seems like a good idea to me.

Alternatively, it suggests that a [[GetNonConfigurablePropertyNames]] internal method/trap would be a useful call to have as the integrity invariants only care about non-configurable properties. That would significantly limit the work in the case where there are none and limit the observable trap calls to only the non-configurable properties.

That would be one way to speed things up.

I share your concern that the current spec algorithm may be a bit too restrictive for implementations to allow optimizations. It's very imperative. We should ask the spec implementors to have a look at the draft spec and share their concerns.

Maybe we should move this discussion to es-discuss. What do you think?

Yes, I would be happy to continue the discussion in public.

# Allen Wirfs-Brock (13 years ago)

Tom Van Cutsem and I have been having some email discussion while I work on integrating Proxies into the ES6 spec. He and I agree that some broader input would be useful, so I'm going to forward some of the messages to es-discuss and carry the discussion forward here. Here is the first message, with others to follow:

Begin forwarded message:

# Allen Wirfs-Brock (13 years ago)

(for some reason the follow-up messages didn't seem to make it to es-discuss the first time I redirected them, so here goes using an alternative technique. Sorry in advance if we end up with duplicate messages)

Begin forwarded message:

# Allen Wirfs-Brock (13 years ago)

Begin forwarded message:

# Allen Wirfs-Brock (13 years ago)

Begin forwarded message:

# Brandon Benvie (13 years ago)

As to the non-configurable/non-extensible issue: the proxy still needs to be notified of what's happening, but it's not really allowed to trap because the result is predetermined. Currently this is handled by treating it like a normal trap activation with extra limitations on the return result. Skipping the trap entirely and returning the result is undesirable because the notification is still important even when the result can't be influenced.

The ideal result would be for the trap to be called normally, but as a notification rather than as a request for something to happen/a return value. That way a handler that needs side effects to happen can still make sure they do, but there's no need to conjure up descriptors or reflect actions where the result is predetermined.

# Allen Wirfs-Brock (13 years ago)

and my first reply, direct to es-discuss...

On Nov 20, 2012, at 2:32 PM, Allen Wirfs-Brock wrote:

Begin forwarded message:

From: Tom Van Cutsem <tomvc.be at gmail.com> Date: November 20, 2012 11:36:24 AM PST To: Allen Wirfs-Brock <allen at wirfs-brock.com> Cc: "Mark S. Miller" <erights at google.com>, Jason Orendorff <jorendorff at mozilla.com> Subject: Re: possible excessive proxy invariants for Object.keys/etc??

2012/11/19 Allen Wirfs-Brock <allen at wirfs-brock.com> On Nov 19, 2012, at 10:04 AM, Tom Van Cutsem wrote:

The copying is to ensure: a) that the result is an Array

Why is Array-ness essential? It was what ES5 specified, but ES5 had a fixed implementation of Object.getOwnPropertyNames/keys, so that requirement was simply a reflection of what the spec's algorithms actually did. Most places in the specification that consume an "array" only require array-like-ness, not an actual array. Now that we're making the implementation of getOPN/keys extensible, it would be natural to relax the requirement that they produce a [[Class]]==="Array" object.

It's all a matter of developer expectations and how much leeway we have in changing the "return type" of existing built-ins. In theory, it's a backwards-incompatible change. In practice, it may not matter.

It can't break existing code that doesn't use Proxies. Plus, the default behavior is still to return a built-in Array instance. For existing code to break it would have to:

  1. be used in an environment where proxies existed and were being used
  2. it would have to actually be reflecting upon such proxies
  3. The proxy would have to return something other than a built-in array instance from the getOwnPropertyNames/keys trap.
  4. The code would actually have to be dependent upon the built-in array-ness (Array.isArray check, dependency upon the length invariant, etc.) of such a returned object.

It all seems quite unlikely to cause any compat issues for existing code, and probably worth the risk. Particularly if the alternative is to add an otherwise unnecessary array copy to every such proxy trap call.

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]] so it is only [[GetOwnPropertyNames]] that have this concern.

One way or another we have to be able to reflectively get a list of symbol keyed properties.

We can either:

  1. include them in the Object.getOwnPropertyNames result.

  2. Consider getOwnPropertyNames deprecated (it actually lives forever) and replace it with a new Object.getOwnPropertyKeys that includes non-private Symbol keys.

  3. Keep getOwnPropertyNames as is and add Object.getOwnPropertySymbols that only returns the non-private-Symbol keys.

Option 1 has the greatest backward-compat concerns, but is the simplest to provide and for ES programmers to deal with going forward.

  5. Eliminates the compat risk but creates perpetual potential for a: used gOPNames when I should have used gOPK hazard.

  6. Eliminates the compat risk but means everybody who wants to deal with all own properties have to deal with two lists of keys. They also loose relative insertion order info relating key/symbol property keys.

Either 2 or 3 could require widening the MOP, which increases the possibility for incomplete, inconsistent, or otherwise buggy proxies. Even if we expose multiple public functions like either 2 or 3, I would still prefer to only have a single getOwnPropertyKeys internal method/trap that can be filtered to provide just strings and/or symbols. Overall, I'm more concerned about keeping the internal method/trap MOP width as narrow as possible than I am about the size of the public Object.* API.
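The single-trap design Allen prefers can be sketched with the names ES6 eventually shipped: one "own keys" list, with the public views defined as filters over it (a sketch, not the spec mechanism):

```javascript
const sym = Symbol('s');
const obj = { x: 1, [sym]: 2 };

// One underlying key list: strings first, then symbols.
const allKeys = Reflect.ownKeys(obj);
const stringKeys = allKeys.filter(k => typeof k === 'string');
const symbolKeys = allKeys.filter(k => typeof k === 'symbol');

console.log(stringKeys);            // ['x']
console.log(symbolKeys[0] === sym); // true
// The filtered views agree with the public API as shipped:
console.log(Object.getOwnPropertyNames(obj));              // ['x']
console.log(Object.getOwnPropertySymbols(obj)[0] === sym); // true
```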

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the elements that are returned as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than to impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property key access traps.

The "early" type coercions at the exit of the traps can be thought of in these terms: the invariant checks will always blame the proxy handler for producing wrong data, at the point where the wrong data is produced and before it's exposed to client code. Clients are expecting ES5/6 objects to obey an implicit "contract".

We are defining that contract right now. It would be a mistake to over specify it in ways that may unnecessarily impact performance.

getOwnPropertyDescriptor is also on my list to talk to you about for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be Object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property descriptor object properties as-is. That seems like exactly the right thing. Such exotic property descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient), with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec. draft.

It's all a matter of how many invariants we care to give up.

The only invariants we have general agreement to enforce are those with respect to the visible properties of frozen/sealed target objects.

I think enforcement of additional invariants that don't directly relate to those specific integrity constraints shouldn't be added.

Even without them, the core of the language is still memory safe and interoperably implementable. From the spec I can tell you exactly what is supposed to happen in any of these situations.

I think it's risky to allow descriptors to be returned whose attributes may be accessors, thus potentially fooling much existing ES5 code that often does simple tests like:

    if (!desc.writable && !desc.configurable) {
      var val = desc.value; // we can assume the property is immutable, so we can cache its value
      ...
    }

You can no longer rely on the correctness of such code if desc can't be guaranteed to be a plain old object containing only data properties.

If you want to enforce this as an invariant (after you've already done the fast check to make sure that the target is a high-integrity object), I'm fine with that. But please don't put more checks in the fast path where we don't have these integrity requirements.

Also, in any situation like the above, if you really care, you can always do an Object.getOwnPropertyDescriptor and verify that you really are dealing with an own data property. Make the client that has the requirement pay the most, rather than everybody else who doesn't.
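The "client pays" pattern can be sketched with a hypothetical helper (`stableValue` is an illustrative name, not a proposed API): the caller re-verifies with a fresh descriptor lookup before trusting a value.

```javascript
// Only trust a cached value after a fresh descriptor lookup confirms
// an own, non-writable, non-configurable data property.
function stableValue(obj, key) {
  const desc = Object.getOwnPropertyDescriptor(obj, key);
  if (desc && 'value' in desc && !desc.writable && !desc.configurable) {
    return { stable: true, value: desc.value };
  }
  return { stable: false };
}

const o = {};
Object.defineProperty(o, 'k', { value: 42 }); // defaults: non-writable, non-configurable
console.log(stableValue(o, 'k'));             // { stable: true, value: 42 }
console.log(stableValue(o, 'missing'));       // { stable: false }
```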

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

The returned copy, even if it is mutable, is fresh, meaning that no one else yet has a pointer to it. Hence it can't be modified by the proxy handler.

If we don't make a copy, but we do check the result for invariants, here's what may happen:

  1. proxy checks whether the returned array contains all non-configurable properties of the target
  2. let's assume this is the case, so the proxy returns this array to the client
  3. the proxy handler kept a reference to the array, and mutates it, e.g. by removing a non-configurable property
  4. client is now wrongly under the assumption that the returned array contains all non-configurable properties
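The four steps can be simulated with a plain function standing in for a hypothetical no-copy trap pipeline (a real engine copies the result, so this cannot happen through an actual Proxy):

```javascript
function makeHandler() {
  const result = ['nonConfigProp'];
  return {
    trap: () => result,                       // step 2: trap hands out the array
    mutateLater: () => { result.length = 0; } // step 3: handler mutates it later
  };
}

const h = makeHandler();
const reported = h.trap();
// Step 1's invariant check would pass at this point:
console.log(reported.includes('nonConfigProp')); // true
h.mutateLater();
// Step 4: the client's view of "the truth" has silently changed.
console.log(reported.includes('nonConfigProp')); // false
```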

and then... How is this likely to be exploited? But if it is exploitable, then I'd be content if you only enforce it for the high-integrity object situations.

  1. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so according to the algorithm they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks, the constraints upon implementations imposed by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps, and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

I agree about the cost. The question is whether we regard these invariants as essential or not. To me, they are the common-sense invariants an ES5 programmer would expect.

They may be common sense but not necessarily essential. There is a cost to all such enforcement.

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted to reliably introspect on a frozen object's list of property names:

    Object.isFrozen(proxy)            // true
    Object.getOwnPropertyNames(proxy) // ['foo']
    Object.getOwnPropertyNames(proxy) // [ ]

Here, the 'foo' property apparently disappeared on a frozen object.

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

It's not the copying alone that guarantees it, it's the invariant check that 'foo' is in the result + the copying that guarantees it.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.
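A typical deep-frozen-graph traversal of the kind Tom means can be sketched with a common `deepFreeze` idiom (illustrative, not from the thread); its correctness depends entirely on Object.getOwnPropertyNames reliably reporting each object's own keys:

```javascript
function deepFreeze(obj, seen = new Set()) {
  if (Object(obj) !== obj || seen.has(obj)) return obj; // skip primitives and cycles
  seen.add(obj);
  Object.freeze(obj);
  // If this listing can be spoofed by a proxy, properties escape freezing.
  for (const name of Object.getOwnPropertyNames(obj)) {
    deepFreeze(obj[name], seen);
  }
  return obj;
}

const graph = deepFreeze({ a: { b: { c: 1 } } });
console.log(Object.isFrozen(graph.a.b)); // true
```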

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, but then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

The problem with, say, a non-trappable [[GetNonConfigurableOwnProperties]] internal method, is that membranes can't adapt. We were previously in this situation with the [[Extensible]] internal property. Eventually we ended up trapping Object.isExtensible and Object.preventExtensions, even though these traps can't actually return a result that is different from applying the operation directly to the target. They're still useful to trap, as the proxy may then respond to these operations and change the state of its own target before returning a result.

I'm pretty sure introducing a new non-trappable method is going to lead to the same problems for membranes.
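The adaptation Tom describes can be sketched with the traps as they eventually shipped: the trap cannot report a different outcome, but it is still notified and can update its own target before returning.

```javascript
const target = {};
let notified = false;
const proxy = new Proxy(target, {
  preventExtensions(t) {
    notified = true;              // side effect: the handler is informed
    Reflect.preventExtensions(t); // must really do it, or the trap throws
    return true;
  }
});

Object.preventExtensions(proxy);
console.log(notified);                    // true
console.log(Object.isExtensible(target)); // false
console.log(Object.isExtensible(proxy));  // false: must agree with the target
```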

Then have the trap but always ignore its results, and call a non-trapping operation on the target object to get the actual list of names. Your integrity checks are all about non-configurable properties. Normal, non-high-integrity proxies want integrity-checking algorithms that are proportional to the number of non-configurable properties (usually 0) rather than the total number of properties.

# Allen Wirfs-Brock (13 years ago)

On Nov 20, 2012, at 3:16 PM, Brandon Benvie wrote:

As to the non-configurable/extensible issue: the proxy still needs to be notified of what's happening, but it's not really allowed to trap because the result is predetermined. Currently this is handled by treating it like a normal trap activation with extra limitations on the return result. Skipping the trap entirely and returning the result is undesirable because the notification is still important even when the result can't be influenced.

The ideal result would be for the trap to be called normally, but as a notification rather than as a request for something to happen/a return value. That way a handler that needs side effects to happen can still make sure they do, but there's no need to conjure up descriptors or reflect actions where the result is predetermined.

right, I just made what I think is an equivalent suggestion in a new reply to Tom's last message.

# David Bruant (13 years ago)

Le 21/11/2012 01:06, Allen Wirfs-Brock a écrit :

b) that all the elements of the result are Strings
And presumably Symbols.  We have to also accommodate Symbols, at
least for getOwnPropertyNames.  Regardless, why is this
important?  More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]]

Why so? I would assume unique symbols to show up for these.

so it is only [[GetOwnPropertyNames]] that has this concern.

and of course non-enumerable unique symbols for this operation.

For both cases, I think returning unique symbols works as long as ES6 doesn't consider that objects have built-in unique-symboled properties (because that could probably break existing code).

One way or another we have to be able to reflectively get a list of symbol keyed properties.

We can either:

  1. include them in the Object.getOwnPropertyNames result.

1bis) Make all built-in symbols (like @iterable) private

  2. Consider getOwnPropertyNames deprecated (it actually lives forever) and replace it with a new Object.getOwnPropertyKeys that includes non-private Symbol keys

  3. Keep getOwnPropertyNames as is and add Object.getOwnPropertySymbols that only returns the non-private-Symbol keys.

Option 1 has the greatest backward compat concerns, but is the simplest to provide and for ES programmers to deal with going forward.

Since the use of Object.getOwnPropertyNames may not be widespread, maybe making built-in symbol properties non-enumerable could do the trick (as it has with new {Object, Array, etc.}.prototype additions)

1bis) If all standard built-in names are private, they're not enumerated in for-in, Object.keys, or Object.getOwnPropertyNames, and you can have a unique name in any of these enumeration operations if you've added them manually; so, no backward compat issue (or close enough that it's acceptable in my opinion)

Option 2 eliminates the compat risk but creates perpetual potential for a "used gOPNames when I should have used gOPK" hazard.
Option 3 eliminates the compat risk but means everybody who wants to deal with all own properties has to deal with two lists of keys. They also lose the relative insertion-order info relating string/symbol property keys.

It's a bit unfortunate to make all built-in symbols private (because they ought to be unique), but it feels like something acceptable. It may induce a bit of boilerplate for proxy whitelists (because all built-in names like @iterable would need to be added), but a built-in constructor helper that would generate sets with all built-in names in it could solve that issue easily.

David

[1] strawman:enumeration

# Andreas Rossberg (13 years ago)

On 21 November 2012 01:06, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Tom Van Cutsem <tomvc.be at gmail.com> wrote:

Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Tom Van Cutsem <tomvc.be at gmail.com> wrote:

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the elements that are returned as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than to impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property key access traps.

I'm strongly in favour of guaranteeing the contract Tom is mentioning. However, there is an alternative to copying: we could require the array (or array-like object) returned by the trap to be frozen. (We could also freeze it ourselves, but that might be more problematic.)

(The fact that ES is already full of mistakes should not be an excuse for reiterating them for new features.)

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 1:55 AM, David Bruant wrote:

Le 21/11/2012 01:06, Allen Wirfs-Brock a écrit :

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]]

Why so? I would assume unique symbols to show up for these.

I believe that the POR is that Symbol keys are never enumerated (even if associated with a property whose attributes have [[Enumerable]]: true). I suppose a proxy could violate that invariant (I wouldn't want to actively enforce it), but it would be a buggy proxy.
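The plan of record Allen refers to is what ES6 ultimately shipped; a quick check in a modern engine shows a symbol-keyed property staying out of enumeration even when marked enumerable:

```javascript
const s = Symbol('s');
const o = {};
Object.defineProperty(o, s, { value: 1, enumerable: true });

console.log(Object.keys(o));                // []
console.log(Object.getOwnPropertyNames(o)); // []
const seen = [];
for (const k in o) seen.push(k);            // for-in skips it too
console.log(seen);                          // []
// The symbol key is still reachable through the dedicated reflection API:
console.log(Object.getOwnPropertySymbols(o).length); // 1
```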

so it is only [[GetOwnPropertyNames]] that has this concern.

and of course non-enumerable unique symbols for this operation.

i.e., private Symbols

(private Symbols are not only always non-enumerable, they are never reflected by primitive operations unless the symbol value is explicitly presented as an argument. You can getOwnPropertyDescriptor using a private symbol, but getOwnPropertyNames never includes them.)

For both cases, I think returning unique symbols works as long as ES6 doesn't consider that objects have built-in unique-symboled properties (because that could probably break existing code).

I assume by unique-symbol you mean the same thing as private symbol. The above will hold, assuming we make @@iterator, @@toStringTag, @@hasInstance, etc. private symbols. While they need to be well-known, I don't see any reason why they shouldn't also be private.

One way or another we have to be able to reflectively get a list of symbol keyed properties.

We can either:

  1. include them in the Object.getOwnPropertyNames result.

1bis) Make all built-in symbols (like @iterable) private

  2. Consider getOwnPropertyNames deprecated (it actually lives forever) and replace it with a new Object.getOwnPropertyKeys that includes non-private Symbol keys

  3. Keep getOwnPropertyNames as is and add Object.getOwnPropertySymbols that only returns the non-private-Symbol keys.

Option 1 has the greatest backward compat concerns, but is the simplest to provide and for ES programmers to deal with going forward.

Since the use of Object.getOwnPropertyNames may not be widespread, maybe making built-in symbol properties non-enumerable could do the trick (as it has with new {Object, Array, etc.}.prototype additions)

1bis) If all standard built-in names are private, they're not enumerated in for-in, Object.keys, or Object.getOwnPropertyNames, and you can have a unique name in any of these enumeration operations if you've added them manually; so, no backward compat issue (or close enough that it's acceptable in my opinion)

Option 2 eliminates the compat risk but creates perpetual potential for a "used gOPNames when I should have used gOPK" hazard.
Option 3 eliminates the compat risk but means everybody who wants to deal with all own properties has to deal with two lists of keys. They also lose the relative insertion-order info relating string/symbol property keys.

It's a bit unfortunate to make all built-in symbols private (because they ought to be unique), but it feels like something acceptable. It may induce a bit of boilerplate for proxy whitelists (because all built-in names like @iterable would need to be added), but a built-in constructor helper that would generate sets with all built-in names in it could solve that issue easily.

Good point about the white lists; that changes my mind about making the built-in symbols private. I think I'd prefer to take the risks of alternative 1 rather than forcing proxy writers to deal with them in white lists. I think there is probably a greater risk of people messing up whitelisting than of compat issues with gOPN. Note that even in legacy code, if a gOPN symbol-valued element is used in any context that requires a property key, things will still work fine. It is only explicitly doing string manipulation on a gOPN element that would cause trouble. For example, trying to string-prefix every element of a list of own property names.
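Allen's legacy-code point can be checked directly; since getOwnPropertyNames stayed strings-only in the end, `Reflect.ownKeys` stands in here for a mixed string/symbol key list:

```javascript
const sym = Symbol('sym');
const o = { str: 1, [sym]: 2 };

// Using each element as a property key works for strings and symbols alike:
const values = Reflect.ownKeys(o).map(k => o[k]);
console.log(values); // [1, 2]

// String-prefixing every element, however, throws on the symbol:
let failed = false;
try {
  Reflect.ownKeys(o).map(k => 'prefix_' + k);
} catch (e) {
  failed = e instanceof TypeError; // symbols don't implicitly coerce to string
}
console.log(failed); // true
```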

My second choice would be adding getOwnPropertyKeys. I would also only want to have a getOwnPropertyKeys internal method/trap, and would define getOwnPropertyNames as filtering that list.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 3:54 AM, Andreas Rossberg wrote:

On 21 November 2012 01:06, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Tom Van Cutsem <tomvc.be at gmail.com> wrote:

Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Tom Van Cutsem <tomvc.be at gmail.com> wrote:

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the elements that are returned as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than to impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property key access traps.

I'm strongly in favour of guaranteeing the contract Tom is mentioning. However, there is an alternative to copying: we could require the array (or array-like object) returned by the trap to be frozen. (We could also freeze it ourselves, but that might be more problematic.)

I'd be more favorably inclined towards freezing than I am towards copying. But, as you know, ES5 does not currently produce frozen objects in these situations. I feel uncomfortable about enforcing a frozen invariant for traps where that invariant is not provided by the corresponding ordinary object behavior. Perhaps I could get over that, or perhaps incompatibly applying that requirement to ordinary objects wouldn't break anything.

Regardless, freezing and testing for frozen are, themselves, not cheap operations. They require iterating over all the property descriptors of an object. If we are going to build in a lot of checks for frozen objects, perhaps we should just make frozen (and possibly sealed) object-level states rather than a dynamic check of all properties. Essentially we could internally turn the [[Extensible]] internal property into a four-state value: open, non-extensible, sealed, frozen. It would make both freezing and checking for frozen much cheaper.
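The four states can be recovered today only via dynamic checks, which is exactly the per-property scanning a stored state flag would avoid; a small sketch (`integrityState` is an illustrative helper, not a proposed API):

```javascript
function integrityState(obj) {
  // Each of these predicates may scan all own property descriptors.
  if (Object.isFrozen(obj))      return 'frozen';
  if (Object.isSealed(obj))      return 'sealed';
  if (!Object.isExtensible(obj)) return 'non-extensible';
  return 'open';
}

console.log(integrityState({ a: 1 }));                           // 'open'
console.log(integrityState(Object.preventExtensions({ a: 1 }))); // 'non-extensible'
console.log(integrityState(Object.seal({ a: 1 })));              // 'sealed'
console.log(integrityState(Object.freeze({ a: 1 })));            // 'frozen'
```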

(The fact that ES is already full of mistakes should not be an excuse for reiterating them for new features.)

I think it is usually a mistake to perform complex invariant checks at low levels of a language engine. Those often become performance barriers. Checking complex relationships belongs at higher abstraction layers.

# Andreas Rossberg (13 years ago)

On 21 November 2012 17:55, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

I'd be more favorably inclined towards freezing than I am towards copying. But, as you know, ES5 does not currently produce frozen objects in these situations. I feel uncomfortable about enforcing a frozen invariant for traps where that invariant is not provided by the corresponding ordinary object behavior. Perhaps I could get over that, or perhaps incompatibly applying that requirement to ordinary objects wouldn't break anything.

Regardless, freezing and testing for frozen are, themselves, not cheap operations. They require iterating over all the property descriptors of an object. If we are going to build in a lot of checks for frozen objects, perhaps we should just make frozen (and possibly sealed) object-level states rather than a dynamic check of all properties. Essentially we could internally turn the [[Extensible]] internal property into a four-state value: open, non-extensible, sealed, frozen. It would make both freezing and checking for frozen much cheaper.

That doesn't seem necessary, because it is just as easy to optimise the current check for the normal case where the object has been frozen or sealed with the respective operation.

I think it is usually a mistake to perform complex invariant checks at low levels of a language engine. Those often become performance barriers. Checking complex relationships belongs at higher abstraction layers.

Well, the root of the problem arguably lies with the whole idea of proxies hooking arbitrary code into "low-level" operations. Perhaps such power has to come with a cost in terms of checks and balances. There are no higher abstraction layers in this case.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 9:16 AM, Andreas Rossberg wrote:

On 21 November 2012 17:55, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

I'd be more favorably inclined towards freezing than I am towards copying. But, as you know, ES5 does not currently produce frozen objects in these situations. I feel uncomfortable about enforcing a frozen invariant for traps where that invariant is not provided by the corresponding ordinary object behavior. Perhaps I could get over that, or perhaps incompatibly applying that requirement to ordinary objects wouldn't break anything.

Regardless, freezing and testing for frozen are, themselves, not cheap operations. They require iterating over all the property descriptors of an object. If we are going to build in a lot of checks for frozen objects, perhaps we should just make frozen (and possibly sealed) object-level states rather than a dynamic check of all properties. Essentially we could internally turn the [[Extensible]] internal property into a four-state value: open, non-extensible, sealed, frozen. It would make both freezing and checking for frozen much cheaper.

That doesn't seem necessary, because it is just as easy to optimise the current check for the normal case where the object has been frozen or sealed with the respective operation.

If you are writing any sort of generic algorithm that does a freeze check on an arbitrary object, you have to explicitly perform all of the internal method calls, because you don't know whether the object is a proxy (where every such internal method call turns into an observable trap) or even some other sort of exotic object implementation that can observe actual internal method calls. If there is explicit internal state designating an object as frozen, then we wouldn't have all of those potentially observable calls.

I think it is usually a mistake to perform complex invariant checks at low levels of a language engine. Those often become performance barriers. Checking complex relationships belongs at higher abstraction layers.

Well, the root of the problem arguably lies with the whole idea of proxies hooking arbitrary code into "low-level" operations. Perhaps such power has to come with a cost in terms of checks and balances. There are no higher abstraction layers in this case.

An integrity membrane of the sort we are talking about is a higher-level abstraction. If that is where the requirements exist, then let it do the work. The real question is what it needs in order to do that work, and whether that can be provided in a way that doesn't impose a cost on situations that don't have the same requirements. One thing we've talked about in this thread that might be useful is a non-trappable operation that produces a guaranteed-accurate list of own property names for an ordinary object.

# Mark S. Miller (13 years ago)

On Wed, Nov 21, 2012 at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

[...] Essentially we could internally turn the [[Extensible]] internal property into a four state value: open,non-extensible,sealed,frozen. [...]

First, my apologies for not yet finding the time to catch up on this thread. But I did want to encourage this particular idea. We may not be able to make this work out, but it would have many benefits if we could. For example, the "isFrozen" check in the Object.observe API would make more sense if this were a simple state rather than a pattern check. More later...

# Andreas Rossberg (13 years ago)

On 21 November 2012 18:35, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

If you are writing any sort of generic algorithm that does a freeze check on an arbitrary object, you have to explicitly perform all of the internal method calls, because you don't know whether the object is a proxy (where every such internal method call turns into an observable trap) or even some other sort of exotic object implementation that can observe actual internal method calls. If there is explicit internal state designating an object as frozen, then we wouldn't have all of those potentially observable calls.

Yes, but the fast path in the VM would merely check whether you have an ordinary object with the 'frozen' flag set. Only if that fails, or for (most) proxies and other exotics, you have to fall back to do something more complicated. Presumably, most practical use cases would never hit that.

# David Bruant (13 years ago)

Le 21/11/2012 17:37, Allen Wirfs-Brock a écrit :

On Nov 21, 2012, at 1:55 AM, David Bruant wrote:

Le 21/11/2012 01:06, Allen Wirfs-Brock a écrit :

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]].

Why so? I would assume unique symbols to show up for these.

I believe that the POR is that Symbol keys are never enumerated (even if associated with a property whose attributes have [[Enumerable]]: true). I suppose a proxy could violate that invariant (I wouldn't want to actively enforce it) but it would be a buggy proxy.

What does "POR" mean?

so it is only [[GetOwnPropertyNames]] that has this concern, and of course non-enumerable unique symbols for this operation.

i.e., private Symbols

(Private Symbols are not only always non-enumerable; they are never reflected by a primitive operation unless the symbol value is explicitly presented as an argument. You can call getOwnPropertyDescriptor using a private symbol, but getOwnPropertyNames never includes them.)

For both cases, I think returning unique symbols works as long as ES6 doesn't consider that objects have built-in unique-symboled properties (because that could probably break existing code).

I assume by unique-symbol you mean the same thing as private symbol.

Indeed, I meant private here, sorry for the mistake.

The above will hold, assuming we make @@iterator, @@toStringTag, @@hasInstance, etc. private symbols. While they need to be well-known, I don't see any reason why they shouldn't also be private. +1

Good point about the white lists; that changes my mind about making the built-in symbols private. I think I'd prefer to take the risks of alternative 1 rather than forcing proxy writers to deal with them in white lists. There is probably a greater risk of people messing up whitelisting than of compat. issues with gOPN.

If the proxy module provides a constructor like BuiltInPrivateNamesWeakSet, things could be fine:

 var whitelist = new BuiltInPrivateNamesWeakSet();
 whitelist.add(...) // adding user generated private names if applicable
 var p = new Proxy(target, handler, whitelist);

The name I'm proposing is a bit long, but that would solve the built-in private name micro management problem elegantly enough as far as I'm concerned. It's also future-proof in the sense that if new built-in private names are added, the weakset will do the right thing in updated environments. If people want to build the weakset manually, they would be free to do so, but at least, the helper would do the annoying part on behalf of the user. We can consider that if the third argument is omitted, all built-in private names pass. That sounds like a good useful default.

# Tom Van Cutsem (13 years ago)

2012/11/19 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 19, 2012, at 10:04 AM, Tom Van Cutsem wrote:

It's all a matter of developer expectations and how much leeway we have in changing the "return type" of existing built-ins. In theory, it's a backwards-incompatible change. In practice, it may not matter.

It can't break existing code that doesn't use Proxies.

It's very likely that existing ES5 code (which doesn't use or know about proxies) will come to interact with ES6 proxies. Consider a sandboxed JS environment that provides a wrapped DOM (implemented using proxies). The ES5 code loaded into the sandbox doesn't actively "use" proxies; it will interact with proxies unknowingly. That's the kind of scenarios for which we should be careful with backwards-incompatible changes.

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]], so it is only [[GetOwnPropertyNames]] that has this concern.

One way or another we have to be able to reflectively get a list of symbol keyed properties.

We can either:

  1. include them in the Object.getOwnPropertyNames result.

  2. Consider getOwnPropertyNames deprecated (it actually lives forever) and replace it with a new Object.getOwnPropertyKeys that includes non-private Symbol keys.

  3. Keep getOwnPropertyNames as is and add Object.getOwnPropertySymbols that only returns the non-private Symbol keys.

Option 1 has the greatest backward compat concerns, but is the simplest to provide and for ES programmers to deal with going forward.

Option 2 eliminates the compat risk but creates perpetual potential for a "used gOPNames when I should have used gOPK" hazard.

Option 3 eliminates the compat risk but means everybody who wants to deal with all own properties has to deal with two lists of keys. They also lose relative insertion-order info relating string/symbol property keys.

Either 2 or 3 could require widening the MOP, which increases the possibility of incomplete, inconsistent, or otherwise buggy proxies. Even if we expose multiple public functions like either 2 or 3, I would still prefer to have only a single getOwnPropertyKeys internal method/trap that can be filtered to provide just strings and/or symbols. Overall, I'm more concerned about keeping the internal method/trap MOP width as narrow as possible than I am about the size of the public Object.* API.
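A rough sketch of that shape (all names here are illustrative, not from any spec draft; the modern Reflect.ownKeys merely stands in for the single internal operation):

```javascript
// One narrow internal "own keys" operation; the public reflection
// functions are just filters over its result.
function getOwnPropertyKeys(obj) {
  return Reflect.ownKeys(obj); // stand-in for the single internal method/trap
}

function gOPNames(obj) { // string keys only
  return getOwnPropertyKeys(obj).filter(function (k) { return typeof k === "string"; });
}

function gOPSymbols(obj) { // symbol keys only
  return getOwnPropertyKeys(obj).filter(function (k) { return typeof k === "symbol"; });
}
```

With this factoring, only one trap has to be specified and implemented correctly; the public API can grow without widening the MOP.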

Let's first discuss whether there is an issue with just letting (unique) symbols show up in Object.{keys,getOwnPropertyNames}. It's still a change in return type (from Array[String] to Array[String|Symbol]), but a far more innocuous one than changing Array[String] into "Any".

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. Also, a classic performance mistake made by dynamic language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the elements that are returned as a property key, it will be automatically coerced to a valid value. Why is it important that it happens any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than to impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property key access traps.

It's true that most proxies will probably be benign and thus most dynamic invariant checks will succeed. But we should not remove checks just because they succeed 99.9% of the time. By analogy, most arguments passed to the + operator are valid operands, but that doesn't obviate the runtime type-check it must make over and over again (barring optimization). In the case of the + operator, a forgotten type-check would lead to a crash or memory-unsafe behavior. In the case of a forgotten invariant check for proxies, the consequences are far less severe, but may lead to inconsistent behavior nonetheless.

The "early" type coercions at the exit of the traps can be thought of in these terms: the invariant checks will always blame the proxy handler for producing wrong data, at the point where the wrong data is produced and before it's exposed to client code. Clients are expecting ES5/6 objects to obey an implicit "contract".

We are defining that contract right now. It would be a mistake to over specify it in ways that may unnecessarily impact performance.

getOwnPropertyDescriptor is also on my list to talk to you about, for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be Object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property descriptor object properties as-is. That seems like exactly the right thing. Such exotic property descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient), with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec. draft.

It's all a matter of how many invariants we care to give up.

The only invariants we have general agreement on enforcing are those with respect to the visible properties of frozen/sealed target objects.

Agreed! Just to be clear, the invariants we were originally discussing were those related to getOwnPropertyNames/keys/enumerate. For getOwnPropertyNames, the invariant that I specced was that the trap must return all non-configurable properties of the target, and may not return any new properties that don't exist on a non-extensible target.

To me, this invariant is directly related to "the visible properties of frozen/sealed target objects".

I think enforcement of additional invariants that aren't directly related to those specific integrity constraints shouldn't be added.

Why do you not consider the above invariant related to what it means for a property to be non-configurable or an object to be non-extensible? To me, they are directly related.

Even without them, the core of the language is still memory-safe and interoperably implementable. From the spec I can tell you exactly what is supposed to happen in any of these situations.

I think it's risky to allow descriptors to be returned whose attributes may be accessors, thus potentially fooling much existing ES5 code that often does simple tests like:

 if (!desc.writable && !desc.configurable) {
   var val = desc.value; // we can assume the property is immutable, so we can cache its value
   ...
 }

You can no longer rely on the correctness of such code if desc can't be guaranteed to be a plain old object containing only data properties.

If you want to enforce this as an invariant (after you've already done the fast check to make sure that the target is a high-integrity object), I'm fine with that. But please don't put more checks in the fast path where we don't have these integrity requirements.

Also, in any situation like the above, if you really care you can always do an Object.getOwnPropertyDescriptor and verify that you really are dealing with an own data property. Make the client that has the requirement pay the cost rather than everybody else who doesn't.

No, that wouldn't work. If we don't enforce that property descriptors returned from Object.getOwnPropertyDescriptor are normalized, you can imagine that the returned descriptor is itself a proxy, which, when asked for its own properties, returns yet another proxy that may again lie about the description of its own attributes (we're talking about the attributes of the attributes of a property here).

This all sounds incredibly theoretical, but the fact of the matter is this: if you don't enforce the language invariants directly on the basic primitives provided by the language (such as Object.getOwnPropertyDescriptor), then the cat is out of the bag and you can no longer reason about integrity. Any further checks you want to make to build up more confidence are themselves made with unreliable primitives, so they provide only a false sense of security.
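To make the concern concrete, here is a minimal sketch (hypothetical, not from the spec) of a descriptor-like object whose attributes are accessors; it claims to describe an immutable data property yet changes its story between reads. Normalizing trap results into fresh plain objects is precisely what rules this out:

```javascript
// A "descriptor" whose attributes are getters: it reports itself as a
// non-writable, non-configurable data property, but yields a different
// value every time .value is read.
var reads = 0;
var lyingDesc = {
  get writable()     { return false; },
  get configurable() { return false; },
  get enumerable()   { return true; },
  get value()        { return ++reads; }
};

// A client following the earlier caching pattern would be fooled:
var first  = lyingDesc.value;  // 1
var second = lyingDesc.value;  // 2: the supposedly immutable value changed
```

A normalized copy snapshots each attribute once into a plain data property, so every later read of the copy sees one consistent answer.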

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

The returned copy, even if it is mutable, is fresh, meaning that no one else yet has a pointer to it. Hence it can't be modified by the proxy handler.

If we don't make a copy, but we do check the result for invariants, here's what may happen:

  1. proxy checks whether the returned array contains all non-configurable properties of the target
  2. let's assume this is the case, so the proxy returns this array to the client
  3. the proxy handler kept a reference to the array, and mutates it, e.g. by removing a non-configurable property
  4. client is now wrongly under the assumption that the returned array contains all non-configurable properties
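A sketch of the four steps above (the trap name ownKeys and the helper naiveOwnNames are illustrative; the point is only the retained reference, not the exact API):

```javascript
// The handler returns a names array but keeps a reference to it.
var retained;
var handler = {
  ownKeys: function (target) {
    retained = Object.getOwnPropertyNames(target);
    return retained; // handler still holds this array
  }
};

// Simulate an engine that returns the trap result WITHOUT a defensive copy:
function naiveOwnNames(target) { return handler.ownKeys(target); }

var names = naiveOwnNames(Object.freeze({ foo: 1 }));
// At this point names contains "foo", so an invariant check would pass...
retained.length = 0;
// ...but the array the client holds is now empty: "foo" silently vanished
// from what was supposed to describe a frozen object.
```

A fresh copy taken after the invariant check severs the handler's reference and makes the reported list stable.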

And then... how is this likely to be exploited? But if it is exploitable, then I'd be content if you only enforce it for the high-integrity object situations.

I can't give you a concrete exploit. I can only make the general observation that sandboxes such as Caja are written entirely in Javascript. I'm not sure this could be done without the language providing foolproof primitives. That's why I think it's important we think carefully about the values returned by traps.

  1. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy) so, according to the algorithm, they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks, the constraints upon implementations imposed by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps, and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

I agree about the cost. The question is whether we regard these invariants as essential or not. To me, they are the common-sense invariants an ES5 programmer would expect.

They may be common sense but not necessarily essential. There is a cost to all such enforcement.

As I argued earlier, you need at least 1 reliable primitive that is guaranteed to give you all the property names of a proxy that pretends to be frozen. Otherwise you can't reliably walk deep-frozen object graphs.

I'm fine with Object.getOwnPropertyNames being that reliable primitive, and perhaps relaxing the invariant on Object.keys and the for-in loop (the enumerate trap).

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted on to reliably introspect on a frozen object's list of property names:

 Object.isFrozen(proxy)             // true
 Object.getOwnPropertyNames(proxy)  // ['foo']
 Object.getOwnPropertyNames(proxy)  // [ ]

Here, the 'foo' property apparently disappeared on a frozen object.

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

It's not the copying alone that guarantees it, it's the invariant check that 'foo' is in the result + the copying that guarantees it.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.
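For instance, a deep-frozen-graph walk (a sketch, not Caja's actual code) has no choice but to trust the name listing; if a frozen proxy can omit a property from Object.getOwnPropertyNames, an unfrozen child object can simply hide from the traversal:

```javascript
// Walk an object graph, verifying every reachable object is frozen.
// The walk is only as trustworthy as Object.getOwnPropertyNames.
function isDeepFrozen(root) {
  var seen = new Set(); // guard against cycles
  function visit(obj) {
    if (obj === null || typeof obj !== "object" || seen.has(obj)) return true;
    seen.add(obj);
    if (!Object.isFrozen(obj)) return false;
    // If this listing can silently omit names, unfrozen children escape the check.
    return Object.getOwnPropertyNames(obj).every(function (name) {
      return visit(Object.getOwnPropertyDescriptor(obj, name).value);
    });
  }
  return visit(root);
}
```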

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, but then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

The problem with, say, a non-trappable [[GetNonConfigurableOwnProperties]] internal method, is that membranes can't adapt. We were previously in this situation with the [[Extensible]] internal property. Eventually we ended up trapping Object.isExtensible and Object.preventExtensions, even though these traps can't actually return a result that is different from applying the operation directly to the target. They're still useful to trap, as the proxy may then respond to these operations and change the state of its own target before returning a result.

I'm pretty sure introducing a new non-trappable method is going to lead to the same problems for membranes.

Then have the trap but always ignore its results and call a non-trapping operation on the target object to get the actual list of names. Your integrity checks are all about non-configurable properties. Normal, non-high-integrity proxies want integrity-checking algorithms that are proportional to the number of non-configurable properties (usually 0) rather than the total number of properties.

Details matter in this case. Let's go over them. The invariant for getOwnPropertyNames consists of three parts:

  1. check whether the trap result contains all non-configurable target properties
  2. if the target is non-extensible, check whether the trap result contains no new properties beyond those defined on the target
  3. if the target is non-extensible, check whether all existing target properties are in the result (if not, a proxy may make it appear that a property was "deleted" on a non-extensible object, which is otherwhise impossible)

Part #1 is proportional to the number of non-configurable target properties (call this number N).
Part #2 is proportional to the number of properties returned in the trap result (call this M).
Part #3 is proportional to the total number of target properties, whether they be configurable or not (call this P).

For a non-extensible target, the total cost is O(N + M + P), and since N <= P, the cost is O(M + P). In other words, the "optimization" of using a [[GetNonConfigurableOwnProperties]] method in Part #1 is not very effective, as Part #3 must visit all own properties anyway.

For an extensible target, I agree [[GetNonConfigurableOwnProperties]] would introduce a payoff, reducing the cost of the check from O(P) to O(N).
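The three parts can be sketched as follows (a naive reference version with hypothetical helper names; an engine would do this internally against the target's own property table):

```javascript
// Check the getOwnPropertyNames trap result against the three invariants.
// Throws a TypeError on violation; returns silently on success.
function checkOwnNamesInvariant(target, trapResult) {
  var targetNames = Object.getOwnPropertyNames(target); // O(P)
  var resultSet = new Set(trapResult);                  // O(M)

  // Part 1: every non-configurable property must be reported (O(N) relevant hits).
  targetNames.forEach(function (name) {
    var desc = Object.getOwnPropertyDescriptor(target, name);
    if (!desc.configurable && !resultSet.has(name)) {
      throw new TypeError("trap result must report non-configurable property " + name);
    }
  });

  if (!Object.isExtensible(target)) {
    var targetSet = new Set(targetNames);
    // Part 2: no new names may appear (O(M)).
    trapResult.forEach(function (name) {
      if (!targetSet.has(name)) {
        throw new TypeError("trap result reports non-existent property " + name);
      }
    });
    // Part 3: no existing names may disappear (O(P)).
    targetNames.forEach(function (name) {
      if (!resultSet.has(name)) {
        throw new TypeError("trap result must report existing property " + name);
      }
    });
  }
}
```

As the analysis above notes, for a non-extensible target Part 3 already visits all P own properties, so shrinking Part 1 from O(P) to O(N) changes nothing asymptotically; only the extensible-target case would benefit.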

# Tom Van Cutsem (13 years ago)

2012/11/21 David Bruant <bruant.d at gmail.com>

Since the use of Object.getOwnPropertyNames may not be widespread, maybe making these unique symbol properties non-enumerable could do the trick (as it has with new {Object, Array, etc.}.prototype additions)

1bis) If all standard built-in names are private, they're not enumerated, neither in for-in, Object.keys, Object.getOwnPropertyNames and you can have a unique name in any of these enumeration operation if you've added them manually, so, no backward compat (or close enough that it's acceptable in my opinion)

Even disregarding the proxy whitelist hassle, let's not go there. Making all the well-known unique symbols, like @iterator, private just so they don't appear in Object.getOwnPropertyNames feels silly to me. We have the distinction between unique and private symbols precisely because we've identified a need for symbols that need to be unique but not necessarily private!

# Tom Van Cutsem (13 years ago)

2012/11/21 Allen Wirfs-Brock <allen at wirfs-brock.com>

I'd be more favorably inclined towards freezing than I am towards copying. But, as you know, ES5 does not currently produce frozen objects in these situations. I feel uncomfortable about enforcing a frozen invariant for traps where that invariant is not provided by the corresponding ordinary object behavior. Perhaps I could get over that, or perhaps incompatibly applying that requirement to ordinary objects wouldn't break anything.

While the arrays produced by Object.keys etc. aren't frozen, there's an implicit guarantee that those arrays will not further be modified implicitly by any other operation in the program. Of course a program can choose to explicitly mutate it. But there's no magical action-at-a-distance.

Currently, we provide the same guarantee in the face of proxies by "defensively copying" the trap result into a fresh array. Requiring the trap result to be frozen feels OK to me, except that it doesn't play well with the current default of automatically forwarding:

 var p = Proxy(target, { });
 Object.keys(p) // error: "keys" trap did not return a frozen array

 var p = Proxy(target, { keys: function(t) { return Object.freeze(Object.keys(t)); } });
 Object.keys(p) // now it works

One way to get rid of this irregularity is indeed to make Object.keys always return frozen arrays. I don't think it would hurt performance, as implementations already aren't allowed to reuse arrays returned from Object.keys (of course they can always cheat as long as they don't get caught).

Regardless, freezing and testing for frozen are, themselves, not cheap operations. They require iterating over all the property descriptors of an object. If we are going to build in a lot of checks for frozen objects, perhaps we should just make frozen (and possibly sealed) object-level states rather than a dynamic check of all properties. Essentially we could internally turn the [[Extensible]] internal property into a four-state value: open, non-extensible, sealed, frozen. It would make both freezing and checking for frozen much cheaper.
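A sketch of what that four-state classification amounts to (state names and the classify helper are illustrative; an engine would maintain the state directly instead of recomputing it):

```javascript
// The four proposed integrity states, ordered from weakest to strongest.
var IntegrityState = { OPEN: 0, NON_EXTENSIBLE: 1, SEALED: 2, FROZEN: 3 };

// Derive the state by the dynamic pattern check the proposal would replace.
// This is O(P) in the number of properties; a stored state makes it O(1).
function classify(obj) {
  if (Object.isExtensible(obj)) return IntegrityState.OPEN;
  var names = Object.getOwnPropertyNames(obj);
  var sealed = names.every(function (n) {
    return !Object.getOwnPropertyDescriptor(obj, n).configurable;
  });
  if (!sealed) return IntegrityState.NON_EXTENSIBLE;
  var frozen = names.every(function (n) {
    var d = Object.getOwnPropertyDescriptor(obj, n);
    return !("writable" in d) || !d.writable; // accessors need only be non-configurable
  });
  return frozen ? IntegrityState.FROZEN : IntegrityState.SEALED;
}
```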

As Andreas mentioned, for normal objects, isFrozen checks can be optimized to O(1). For proxies, if we leave in the derived isFrozen trap, the check could also be O(1).

# Tom Van Cutsem (13 years ago)

2012/11/21 Mark S. Miller <erights at google.com>

On Wed, Nov 21, 2012 at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

[...] Essentially we could internally turn the [[Extensible]] internal property into a four state value: open,non-extensible,sealed,frozen. [...]

First, my apologies for not yet finding the time to catch up on this thread. But I did want to encourage this particular idea. We may not be able to make this work out, but it would have many benefits if we could. For example, the "isFrozen" check in the Object.observe API would make more sense if this were a simple state rather than a pattern check. More later...

A lot of the complexity of the current invariant checks derives from the fact that Javascript objects have (too) fine-grained invariants (i.e. non-configurability at the level of individual properties, non-extensibility at the level of the object, and all possible combinations in between). This means we have to account for weird cases such as a non-extensible object with all but 1 property being non-configurable (an "almost-frozen" object).

If JS objects could be in only one of four states, things would be a lot simpler to reason about. That said, I don't see how we can get there without radically breaking with ES5's view of object invariants.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 11:13 AM, David Bruant wrote:

Le 21/11/2012 17:37, Allen Wirfs-Brock a écrit :

On Nov 21, 2012, at 1:55 AM, David Bruant wrote:

Le 21/11/2012 01:06, Allen Wirfs-Brock a écrit :

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]].

Why so? I would assume unique symbols to show up for these.

I believe that the POR is that Symbol keys are never enumerated (even if associated with a property whose attributes have [[Enumerable]]: true). I suppose a proxy could violate that invariant (I wouldn't want to actively enforce it) but it would be a buggy proxy.

What does "POR" mean?

Plan of Record

# Kevin Reid (13 years ago)

On Wed, Nov 21, 2012 at 12:42 PM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:

2012/11/21 Mark S. Miller <erights at google.com>

On Wed, Nov 21, 2012 at 8:55 AM, Allen Wirfs-Brock

<allen at wirfs-brock.com> wrote:

[...] Essentially we could internally turn the [[Extensible]] internal property into a four state value: open,non-extensible,sealed,frozen. [...]

[...] If JS objects could be in only one of four states, things would be a lot simpler to reason about. That said, I don't see how we can get there without radically breaking with ES5's view of object invariants.

Why can't an implementation cache the knowledge that an object is frozen (after Object.freeze or after a full pattern-check) exactly as if the fourth state exists, and get the efficiency benefit (but not the observability-of-the-test-on-a-proxy benefit) without changing the ES5 model?

# David Bruant (13 years ago)

Le 21/11/2012 21:13, Tom Van Cutsem a écrit :

2012/11/21 David Bruant <bruant.d at gmail.com <mailto:bruant.d at gmail.com>>

Since the use of Object.getOwnPropertyNames may not be widespread,
maybe that making non-enumerable unique symbol properties could do
the trick (as it has with new {Object, Array, etc.}.prototype
additions)

1bis) If all standard built-in names are private, they're not
enumerated, neither in for-in, Object.keys,
Object.getOwnPropertyNames and you can have a unique name in any
of these enumeration operation if you've added them manually, so,
no backward compat (or close enough that it's acceptable in my
opinion)

Even disregarding the proxy whitelist hassle, let's not go there. Making all the well-known unique symbols, like @iterator, private just so they don't appear in Object.getOwnPropertyNames feels silly to me. We have the distinction between unique and private symbols precisely because we've identified a need for symbols that need to be unique but not necessarily private!

Rest assured that it doesn't make me happy to propose this idea, but it seems worthwhile to explore.

Allen's latest point about backward compat makes me feel that it may not be that big of a problem ("Note that even in legacy code, if a gOPN symbol valued element is used in any context that requires a property key, things will still work fine. It is only explicitly doing string manipulation on a gOPN element that would cause trouble. For example, trying to string prefix every element a list of own property names.") I guess only user testing could tell. If absolutely necessary, it may not be absurd to have the following: string + unique name = new unique name.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 12:09 PM, Tom Van Cutsem wrote:

2012/11/19 Allen Wirfs-Brock <allen at wirfs-brock.com> On Nov 19, 2012, at 10:04 AM, Tom Van Cutsem wrote:

It's all a matter of developer expectations and how much leeway we have in changing the "return type" of existing built-ins. In theory, it's a backwards-incompatible change. In practice, it may not matter.

It can't break existing code that doesn't use Proxies.

It's very likely that existing ES5 code (which doesn't use or know about proxies) will come to interact with ES6 proxies. Consider a sandboxed JS environment that provides a wrapped DOM (implemented using proxies). The ES5 code loaded into the sandbox doesn't actively "use" proxies; it will interact with proxies unknowingly. That's the kind of scenarios for which we should be careful with backwards-incompatible changes.

Presumably in this scenario it is the sandbox that is providing the Proxy, so it can make sure to always return a built-in array if it is concerned about this.

The trade-off is always doing an extra copy of an object, just because of the unlikely possibility that a trap implementor doesn't use an actual built-in array.

On the other hand, it seems very unlikely that realistic consumers of the existing Object.keys/Object.getOwnPropertyNames are actually doing Array.isArray checks on the value produced from them. What would be the point?

b) that all the elements of the result are Strings

And presumably Symbols. We have to also accommodate Symbols, at least for getOwnPropertyNames. Regardless, why is this important? More below...

Same argument as above.

I recall there was some concern about symbols showing up in existing reflection methods like Object.getOwnPropertyNames. Not sure how that got resolved. It's basically touching upon the same issue of whether we can change the return type of existing methods.

I'm assuming we are excluding symbols from [[Enumerate]] and [[Keys]], so it is only [[GetOwnPropertyNames]] that has this concern.

One way or another we have to be able to reflectively get a list of symbol keyed properties.

We can either:

  1. include them in the Object.getOwnPropertyNames result.

  2. Consider getOwnPropertyNames deprecated (it actually lives forever) and replace it with a new Object.getOwnPropertyKeys that includes non-private Symbol keys.

  3. Keep getOwnPropertyNames as is and add Object.getOwnPropertySymbols that only returns the non-private-Symbol keys.

  Option 1 has the greatest backward-compat concerns, but is the simplest to provide and for ES programmers to deal with going forward.

  Option 2 eliminates the compat risk but creates perpetual potential for a "used gOPNames when I should have used gOPK" hazard.

  Option 3 eliminates the compat risk but means everybody who wants to deal with all own properties has to deal with two lists of keys. They also lose relative insertion-order information relating string and symbol property keys.

Either 2 or 3 could require widening the MOP, which increases the possibility of incomplete, inconsistent, or otherwise buggy proxies. Even if we expose multiple public functions like either 2 or 3, I would still prefer to have only a single getOwnPropertyKeys internal method/trap whose result can be filtered to provide just strings and/or symbols. Overall, I'm more concerned about keeping the internal method/trap MOP width as narrow as possible than I am about the size of the public Object.* API.
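A sketch of that narrow-MOP shape: one keys-producing internal operation, with the filtering done in the public functions. The helper names are hypothetical; `Reflect.ownKeys` (which ES6 eventually standardized) stands in for the single internal method:

```javascript
// Hypothetical single internal operation returning all own keys
// (strings and symbols), here simulated with Reflect.ownKeys.
function getOwnPropertyKeysOf(obj) {
  return Reflect.ownKeys(obj);
}

// Public API filters the one result, keeping the MOP narrow:
function ownNames(obj) {
  return getOwnPropertyKeysOf(obj).filter(function (k) {
    return typeof k === 'string';
  });
}
function ownSymbols(obj) {
  return getOwnPropertyKeysOf(obj).filter(function (k) {
    return typeof k === 'symbol';
  });
}

var s = Symbol('s');
var obj = { a: 1 };
obj[s] = 2;
// ownNames(obj) contains only 'a'; ownSymbols(obj) contains only s
```

This is essentially the design ES6 ended up with: one [[OwnPropertyKeys]] internal method, filtered by Object.getOwnPropertyNames and Object.getOwnPropertySymbols.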

Let's first discuss whether there is an issue with just letting (unique) symbols show up in Object.{keys,getOwnPropertyNames}. It's still a change in return type (from Array[String] to Array[String|Symbol]), but a far less drastic one than changing Array[String] into "Any".

Wow, I actually think that the Array[String|Symbol] change is far more likely to have actual compatibility impact. I can imagine various scenarios where a program or library is doing string processing on the elements returned from keys/getOPN. That seems far more likely than that they have dependencies upon the returned collection being Array.isArray rather than just array-like.

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. It's also a classic performance mistake made by dynamic-language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the returned elements as a property key, it will be automatically coerced to a valid value. Why is it important that this happen any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property-key access traps.

It's true that most proxies will probably be benign and thus most dynamic invariant checks will succeed. But we should not remove checks just because they succeed 99.9% of the time. By analogy, most arguments passed to the + operator are valid operands, but that doesn't obviate the runtime type-check it must make over and over again (barring optimization). In the case of the + operator, a forgotten type-check would lead to a crash or memory-unsafe behavior. In the case of a forgotten invariant check for proxies, the consequences are far less severe, but may lead to inconsistent behavior nonetheless.

There is a big difference between essential checks that are needed for memory safety and checks that are needed to maintain some of these proxy invariants. As you say, one will cause system level crashes or unsafe memory accesses. The other just introduces application level bugs.

Essential runtime checks should always be designed to be as inexpensive as possible, and for that reason they are always very simple. Complex relationship validation belongs at the application level. You might do such validation in your sandbox implementation (for example, providing your own Object.getOwnPropertyNames that does a copy to a fresh array), but it shouldn't be forced upon applications that don't need it.

The "early" type coercions at the exit of the traps can be thought of in these terms: the invariant checks will always blame the proxy handler for producing wrong data, at the point where the wrong data is produced and before it's exposed to client code. Clients are expecting ES5/6 objects to obey an implicit "contract".

We are defining that contract right now. It would be a mistake to over specify it in ways that may unnecessarily impact performance.

getOwnPropertyDescriptor is also on my list to talk to you about, for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be an object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property-descriptor object properties as is. That seems like exactly the right thing. Such exotic property-descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient), with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec draft.

It's all a matter of how many invariants we care to give up.

The only invariants we have general agreement on enforcing are those with respect to the visible properties of frozen/sealed target objects.

Agreed! Just to be clear, the invariants we were originally discussing were those related to getOwnPropertyNames/keys/enumerate. For getOwnPropertyNames, the invariant that I specced was that the trap must return all non-configurable properties of the target, and may not return any new properties that don't exist on a non-extensible target.

To me, this invariant is directly related to "the visible properties of frozen/sealed target objects".

I think enforcement of additional invariants that don't directly relate to those specific integrity constraints shouldn't be added.

Why do you not consider the above invariant related to what it means for a property to be non-configurable or an object to be non-extensible? To me, they are directly related.

The issue is that (as currently defined) it imposes what seems like significant algorithmic complexity on applications that don't actually care about that invariant.

The root question that I think I posed is: Are you sure it actually impacts the integrity of the use cases you have in mind, if this invariant was not checked for keys/getOPN? After all, you are presumably going to use those names to perform operations at the individual property level on proxies and those individual operations are all doing equivalent integrity checks.

If the real integrity invariants are being maintained at the individual property level, then maybe checking them in keys/getOPN is just an (expensive) security blanket that isn't essential.

Even without them, the core of the language is still memory-safe and interoperably implementable. From the spec I can tell you exactly what is supposed to happen in any of these situations.

I think it's risky to allow descriptors to be returned whose attributes may be accessors, thus potentially fooling much existing ES5 code that often does simple tests like:

if (!desc.writable && !desc.configurable) {
  var val = desc.value; // we can assume the property is immutable, so we can cache its value
  ...
}

You can no longer rely on the correctness of such code if desc can't be guaranteed to be a plain old object containing only data properties.
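To illustrate the hazard Tom describes: if descriptor objects are not normalized to plain data properties, their attributes can be accessors that answer differently on each read. This sketch is not from the thread; the malicious descriptor is constructed by hand:

```javascript
// A malicious "descriptor" whose attributes are accessors.
var reads = 0;
var desc = {};
Object.defineProperty(desc, 'writable', {
  get: function () { return false; } // always claims non-writable
});
Object.defineProperty(desc, 'configurable', {
  get: function () { return false; } // always claims non-configurable
});
Object.defineProperty(desc, 'value', {
  get: function () { return ++reads; } // a different value on every read!
});

// The naive ES5-style caching code from above:
if (!desc.writable && !desc.configurable) {
  var val = desc.value; // caches one value...
}
var later = desc.value; // ...but the "immutable" value has already changed
```

The caching code's assumption (non-writable + non-configurable implies a stable value) silently fails, which is exactly what normalization to a fresh plain object prevents.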

If you want to enforce this as an invariant (after you've already done the fast check to make sure that the target is a high-integrity object), I'm fine with that. But please don't put more checks in the fast path where we don't have these integrity requirements.

Also, in any situation like the above, if you really care you can always do an Object.getOwnPropertyDescriptor and verify that you really are dealing with an own data property. Make the client that has the requirement pay the cost, rather than everybody else who doesn't.

No, that wouldn't work. If we don't enforce that property descriptors returned from Object.getOwnPropertyDescriptor are normalized, you can imagine that the returned descriptor is itself a proxy, which, when asked for its own properties, returns yet another proxy that may again lie about the description of its own attributes (we're talking about the attributes of the attributes of a property here).

Yes, I assume that the descriptor objects may themselves be proxies. But aren't the proxies you really care about the ones that you are implementing as part of your sandbox implementation? Those you have control over. What does it matter to you if some random application object is implemented by a proxy with n levels of indirection?

This all sounds incredibly theoretical, but the fact of the matter is this: if you don't enforce the language invariants directly on the basic primitives provided by the language (such as Object.getOwnPropertyDescriptor), then the cat is out of the bag and you can no longer reason about integrity. Any further checks you want to make to build up more confidence are themselves made with unreliable primitives, so they provide only a false sense of security.

Hence my above question, is getOPN really one of these basic primitives?

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

The returned copy, even if it is mutable, is fresh, meaning that no one else yet has a pointer to it. Hence it can't be modified by the proxy handler.

If we don't make a copy, but we do check the result for invariants, here's what may happen:

  1. proxy checks whether the returned array contains all non-configurable properties of the target
  2. let's assume this is the case, so the proxy returns this array to the client
  3. the proxy handler kept a reference to the array, and mutates it, e.g. by removing a non-configurable property
  4. client is now wrongly under the assumption that the returned array contains all non-configurable properties
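The four steps above can be made concrete with a sketch; the trap name is illustrative, and the "proxy" side is reduced to a direct call so the aliasing is visible:

```javascript
// The handler keeps a reference to the very array it returns.
var names = ['nonConfigurableProp'];
var handler = {
  getOwnPropertyNames: function (target) {
    return names; // no fresh copy: caller and handler share this array
  }
};

// Without a defensive copy, a check-then-return implementation would do:
var result = handler.getOwnPropertyNames({});
// ...suppose the invariant check passes here: result contains the
// non-configurable property name...

// But the handler still holds a reference, and can mutate it afterwards:
names.length = 0;
// result is the same object, so the client's "validated" list is now empty
```

A fresh copy taken at the trap boundary is what severs this shared reference, making the invariant check's conclusion stable.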

And then...? How is this likely to be exploited? But if it is exploitable, then I'd be content if you only enforced it for the high-integrity object situations.

I can't give you a concrete exploit. I can only make the general observation that sandboxes such as Caja are written entirely in Javascript. I'm not sure this could be done without the language providing foolproof primitives. That's why I think it's important we think carefully about the values returned by traps.

  1. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so according to the algorithm they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks, the constraints imposed upon implementations by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps, and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

I agree about the cost. The question is whether we regard these invariants as essential or not. To me, they are the common-sense invariants an ES5 programmer would expect.

They may be common sense but not necessarily essential. There is a cost to all such enforcement.

As I argued earlier, you need at least 1 reliable primitive that is guaranteed to give you all the property names of a proxy that pretends to be frozen. Otherwise you can't reliably walk deep-frozen object graphs.

I'm fine with Object.getOwnPropertyNames being that reliable primitive, and perhaps relaxing the invariant on Object.keys and the for-in loop (the enumerate trap).

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the result of those operations can never be trusted on to reliably introspect on a frozen object's list of property names:

Object.isFrozen(proxy)            // true
Object.getOwnPropertyNames(proxy) // ['foo']
Object.getOwnPropertyNames(proxy) // [ ]

Here, the 'foo' property apparently disappeared on a frozen object.

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

It's not the copying alone that guarantees it, it's the invariant check that 'foo' is in the result + the copying that guarantees it.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, and then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

The problem with, say, a non-trappable [[GetNonConfigurableOwnProperties]] internal method, is that membranes can't adapt. We were previously in this situation with the [[Extensible]] internal property. Eventually we ended up trapping Object.isExtensible and Object.preventExtensions, even though these traps can't actually return a result that is different from applying the operation directly to the target. They're still useful to trap, as the proxy may then respond to these operations and change the state of its own target before returning a result.

I'm pretty sure introducing a new non-trappable method is going to lead to the same problems for membranes.

Then have the trap but always ignore its result, and call a non-trapping operation on the target object to get the actual list of names. Your integrity checks are all about non-configurable properties. Normal, non-high-integrity proxies want integrity-checking algorithms that are proportional to the number of non-configurable properties (usually 0) rather than the total number of properties.

Details matter in this case. Let's go over them. The invariant for getOwnPropertyNames consists of three parts:

  1. check whether the trap result contains all non-configurable target properties
  2. if the target is non-extensible, check whether the trap result contains no new properties beyond those defined on the target
  3. if the target is non-extensible, check whether all existing target properties are in the result (if not, a proxy may make it appear that a property was "deleted" on a non-extensible object, which is otherwise impossible)
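For concreteness, the three checks might be sketched like this (illustrative only, not the spec algorithm; `trapResult` is the array of names the trap returned, `target` an ordinary object):

```javascript
function checkOwnNamesInvariant(target, trapResult) {
  var resultSet = new Set(trapResult);
  var targetNames = Object.getOwnPropertyNames(target);

  targetNames.forEach(function (name) {
    var desc = Object.getOwnPropertyDescriptor(target, name);
    // Part 1: every non-configurable target property must be reported.
    if (!desc.configurable && !resultSet.has(name)) {
      throw new TypeError("trap omitted non-configurable property '" + name + "'");
    }
    // Part 3: on a non-extensible target, no existing property may be hidden.
    if (!Object.isExtensible(target) && !resultSet.has(name)) {
      throw new TypeError("trap hid existing property '" + name + "'");
    }
  });

  // Part 2: on a non-extensible target, no new properties may be invented.
  if (!Object.isExtensible(target)) {
    var targetSet = new Set(targetNames);
    trapResult.forEach(function (name) {
      if (!targetSet.has(name)) {
        throw new TypeError("trap invented property '" + name + "'");
      }
    });
  }
}
```

The cost structure discussed below falls directly out of the loops: Part 1 and Part 3 walk the target's properties, Part 2 walks the trap result.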

Part #1 is proportional to the number of non-configurable target properties (call this number N)

Not P? At least in the generic case you have to get the property descriptor of every property in order to determine which are non-configurable. But I guess it doesn't change the answer.

Part #2 is proportional to the number of properties returned in the trap result (call this M) Part #3 is proportional to the total number of target properties (whether they be configurable or not) (call this P)

For a non-extensible target, the total cost is O(N + M + P), and since N <= P, the cost is O(M + P). In other words, the "optimization" of using a [[GetNonConfigurableOwnProperties]] method in Part #1 is not very effective, as Part #3 must visit all own properties anyway.

For an extensible target, I agree [[GetNonConfigurableOwnProperties]] would introduce a payoff, reducing the cost of the check from O(P) to O(N).

It still appears to me that for sealed/frozen objects, which I believe are the ones you actually care about, you are enforcing that the list returned is exactly the own properties of the ultimate ordinary target object (perhaps behind multiple levels of proxy indirection). In other words, the result cannot be meaningfully changed. In that case it doesn't need to be a trap.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 12:27 PM, Tom Van Cutsem wrote:

2012/11/21 Allen Wirfs-Brock <allen at wirfs-brock.com> I'd be more favorably inclined towards freezing than I am towards copying. But, as you know, ES5 does not currently produce frozen objects in these situations. I feel uncomfortable about enforcing a frozen invariant for traps where that invariant is not provided by the corresponding ordinary object behavior. Perhaps I could get over that, or perhaps incompatibly applying that requirement to ordinary objects wouldn't break anything.

While the arrays produced by Object.keys etc. aren't frozen, there's an implicit guarantee that those arrays will not further be modified implicitly by any other operation in the program. Of course a program can choose to explicitly mutate it. But there's no magical action-at-a-distance.

Currently, we provide the same guarantee in the face of proxies by "defensively copying" the trap result into a fresh array. Requiring the trap result to be frozen feels OK to me, except that it doesn't play well with the current default of automatically forwarding:

var p = Proxy(target, { });
Object.keys(p); // error: "keys" trap did not return a frozen array

var p = Proxy(target, {
  keys: function (t) { return Object.freeze(Object.keys(t)); }
});
Object.keys(p); // now it works

Wait, Object.keys is now just implemented by calling the [[Keys]] internal method, and it doesn't have any idea what kind of object that internal method is being called upon. So it must apply the same rules to the result of both ordinary object [[Keys]] and proxy [[Keys]]. Object.keys could always copy the result, or always freeze its result, or accept either frozen or non-frozen, but all [[Keys]] invocations need to be treated equally.

One way to get rid of this irregularity is indeed to make Object.keys always return frozen arrays. I don't think it would hurt performance, as implementations already aren't allowed to reuse arrays returned from Object.keys (of course they can always cheat as long as they don't get caught).

But clients of Object.keys are currently permitted to modify its result array. Freezing it would break such clients.

Regardless, freezing and testing for frozen are, themselves, not cheap operations. Each requires iterating over all the property descriptors of an object. If we are going to build in a lot of checks for frozen objects, perhaps we should just make frozen (and possibly sealed) object-level states rather than a dynamic check of all properties. Essentially we could internally turn the [[Extensible]] internal property into a four-state value: open, non-extensible, sealed, frozen. That would make both freezing and checking for frozen much cheaper.
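A sketch of the payoff, assuming a hypothetical engine-side state table; `stateOf` and `isFrozenFast` are illustrative names, not anything proposed in the thread:

```javascript
// Ordered integrity states, weakest to strongest (names are illustrative).
var OPEN = 0, NON_EXTENSIBLE = 1, SEALED = 2, FROZEN = 3;

// Hypothetical engine-side state table: classification becomes O(1).
var stateOf = new WeakMap();

function isFrozenFast(obj) {
  return (stateOf.get(obj) || OPEN) === FROZEN;
}

// Today's equivalent: a pattern check over every own property, O(P).
function isFrozenByPatternCheck(obj) {
  return !Object.isExtensible(obj) &&
    Object.getOwnPropertyNames(obj).every(function (name) {
      var d = Object.getOwnPropertyDescriptor(obj, name);
      return !d.configurable && ('value' in d ? !d.writable : true);
    });
}

var f = Object.freeze({ a: 1 });
stateOf.set(f, FROZEN); // the engine would record this inside Object.freeze
```

The two functions agree on the answer; the difference is that the state lookup never has to walk the property descriptors.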

As Andreas mentioned, for normal objects, isFrozen checks can be optimized to O(1). For proxies, if we leave in the derived isFrozen trap, the check could also be O(1).

Are you sure about that last point? What if the target is a Proxy or some other exotic object that is observing internal method calls on itself. This is one of my overall concerns about some of the more complex invariant checking algorithms. Our interoperability rules require strict ordering of all observable operations and internal method/trap calls are potentially observable.

# Allen Wirfs-Brock (13 years ago)

On Nov 21, 2012, at 12:42 PM, Tom Van Cutsem wrote:

2012/11/21 Mark S. Miller <erights at google.com> On Wed, Nov 21, 2012 at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

[...] Essentially we could internally turn the [[Extensible]] internal property into a four state value: open,non-extensible,sealed,frozen. [...]

First, my apologies for not yet finding the time to catch up on this thread. But I did want to encourage this particular idea. We may not be able to make this work out, but it would have many benefits if we could. For example, the "isFrozen" check in the Object.observe API would make more sense if this were a simple state rather than a pattern check. More later...

A lot of the complexity of the current invariant checks derives from the fact that Javascript objects have (too) fine-grained invariants (i.e. non-configurability at the level of individual properties, non-extensibility at the level of the object, and all possible combinations in between). This means we have to account for weird cases such as a non-extensible object with all but 1 property being non-configurable (an "almost-frozen" object).

If JS objects could be in only one of four states, things would be a lot simpler to reason about. That said, I don't see how we can get there without radically breaking with ES5's view of object invariants.

Of course, I don't necessarily agree that ES5 has such a view of object invariants... ;-)

However, everything we currently have in ES5 maps to one of those four states. Do you actually care very much, from a reasoning perspective, about an open object with some non-configurable properties? Or a non-extensible object with some configurable properties?

Overall, I think having the four states makes the conceptual model clearer and might guide us in cleaning up the MOP.

# Tom Van Cutsem (13 years ago)

2012/11/21 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 21, 2012, at 12:09 PM, Tom Van Cutsem wrote:

Let's first discuss whether there is an issue with just letting (unique) symbols show up in Object.{keys,getOwnPropertyNames}. It's still a change in return type (from Array[String] to Array[String|Symbol]), but a far less drastic one than changing Array[String] into "Any".

Wow, I actually think that the Array[String|Symbol] change is far more likely to have actual compatibility impact. I can imagine various scenarios where a program or library is doing string processing on the elements returned from keys/getOPN. That seems far more likely than that they have dependencies upon the returned collection being Array.isArray rather than just array-like.

If we don't do any invariant checks, the trap isn't even obliged to return an array-like. Code that expects an array-like is likely to crash immediately if this happens, though, so I'm not sure that's a real issue.

c) to ensure the stability of the result.

You can think of a + b as implementing a type coercion of the trap result to "Array of String". This coercion is not too dissimilar from what the getOwnPropertyDescriptor has to do (normalization of the returned property descriptor by creating a fresh copy).

Yes, premature type coercion, in my opinion. It's also a classic performance mistake made by dynamic-language programmers: unnecessary coercion of values that are never going to be accessed, or redundant coercion checks of values that are already of the proper type. Why is it important to do such checks on values that are just passing through these traps? When and if somebody actually gets around to using one of the returned elements as a property key, it will be automatically coerced to a valid value. Why is it important that this happen any sooner than that?

I don't know if you are familiar with the work of Felleisen et al. on higher-order contracts. In that work, they use the notion of "blame" between different components/modules. Basically: if some module A receives some data from a module B, and B provides "wrong" data, then B should be assigned blame. You don't want to end up in a situation where A receives the blame at the point where it passes that "wrong" data into another module C.

Yes, but ES is rampant with this sort of potentially misplaced blame. We can debate whether such proper "blame" assignment is important or not, but I do believe this sort of very low-level MOP interface is a situation where you want to absolutely minimize non-essential work. I'd sooner have it be a little bit more difficult to track down Proxy-based bugs than impact the performance of every correctly implemented proxy in every correct program. BTW, this is a general statement about the entire proxy MOP and not just about these particular property-key access traps.

It's true that most proxies will probably be benign and thus most dynamic invariant checks will succeed. But we should not remove checks just because they succeed 99.9% of the time. By analogy, most arguments passed to the + operator are valid operands, but that doesn't obviate the runtime type-check it must make over and over again (barring optimization). In the case of the + operator, a forgotten type-check would lead to a crash or memory-unsafe behavior. In the case of a forgotten invariant check for proxies, the consequences are far less severe, but may lead to inconsistent behavior nonetheless.

There is a big difference between essential checks that are needed for memory safety and checks that are needed to maintain some of these proxy invariants. As you say, one will cause system level crashes or unsafe memory accesses. The other just introduces application level bugs.

Essential runtime checks should always be designed to be as inexpensive as possible, and for that reason they are always very simple. Complex relationship validation belongs at the application level. You might do such validation in your sandbox implementation (for example, providing your own Object.getOwnPropertyNames that does a copy to a fresh array), but it shouldn't be forced upon applications that don't need it.

The "early" type coercions at the exit of the traps can be thought of in these terms: the invariant checks will always blame the proxy handler for producing wrong data, at the point where the wrong data is produced and before it's exposed to client code. Clients are expecting ES5/6 objects to obey an implicit "contract".

We are defining that contract right now. It would be a mistake to over specify it in ways that may unnecessarily impact performance.

getOwnPropertyDescriptor is also on my list to talk to you about, for similar reasons. I don't see why the property descriptor needs to be normalized and regenerated (including converting accessor properties on the descriptor into data properties and requiring the [[Prototype]] to be an object). I have an alternative for [[GetOwnProperty]]/TrapGetOwnPropertyDescriptor that preserves the original object returned from the trap (while normalizing to a descriptor record for any internal calls on ordinary objects). This preserves any exotic property-descriptor object properties as is. That seems like exactly the right thing. Such exotic property-descriptor properties can only be meaningfully used by other proxy traps or application/library code that knows about them. I don't see any need to normalize them into data properties or otherwise muck with what the actual implementor of the getOwnPropertyDescriptor trap chose to return.

I think this alternative implementation is, again, much simpler (and more efficient), with fewer externally observable calls in its algorithm steps. I'm sure you will want to review it, so I will include it in the spec draft.

It's all a matter of how many invariants we care to give up.

The only invariants we have general agreement on enforcing are those with respect to the visible properties of frozen/sealed target objects.

Agreed! Just to be clear, the invariants we were originally discussing were those related to getOwnPropertyNames/keys/enumerate. For getOwnPropertyNames, the invariant that I specced was that the trap must return all non-configurable properties of the target, and may not return any new properties that don't exist on a non-extensible target.

To me, this invariant is directly related to "the visible properties of frozen/sealed target objects".

I think enforcement of additional invariants that don't directly relate to those specific integrity constraints shouldn't be added.

Why do you not consider the above invariant related to what it means for a property to be non-configurable or an object to be non-extensible? To me, they are directly related.

The issue is that (as currently defined) it imposes what seems like significant algorithmic complexity on applications that don't actually care about that invariant.

The root question that I think I posed is: Are you sure it actually impacts the integrity of the use cases you have in mind, if this invariant was not checked for keys/getOPN? After all, you are presumably going to use those names to perform operations at the individual property level on proxies and those individual operations are all doing equivalent integrity checks.

If the real integrity invariants are being maintained at the individual property level, then maybe checking them in keys/getOPN is just an (expensive) security blanket that isn't essential.

A concrete example: say I'm implementing a membrane that wants to allow frozen "records" to pass through the membrane unwrapped. By "record", I mean an object with only data properties bound to primitives (strings, numbers, ...). Since individual strings and numbers can pass through unwrapped, it's not entirely unreasonable to allow compound values composed of only primitives to be passed through unwrapped as well.

To pass such frozen records, the membrane needs to perform a check to see whether the frozen object is indeed a record. To do so, it must be able to iterate over all the properties of the object and test whether they are indeed data properties bound to primitives.

If the frozen record is a proxy that can lie about its own properties, it can simply fail to report a particular "foo" property that is bound to a mutable object, thus opening a communications channel piercing the membrane.
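The record check described above can be sketched in a few lines (the helper names `isPrimitive` and `isPrimitiveRecord` are hypothetical, not from the discussion):

```javascript
// Hypothetical sketch of the membrane's "record" test. It relies on
// Object.getOwnPropertyNames truthfully reporting every property -- which is
// exactly the invariant under discussion.
function isPrimitive(value) {
  return value === null ||
         (typeof value !== "object" && typeof value !== "function");
}

function isPrimitiveRecord(obj) {
  if (!Object.isFrozen(obj)) return false;
  return Object.getOwnPropertyNames(obj).every(name => {
    const desc = Object.getOwnPropertyDescriptor(obj, name);
    // must be a data property ("value" present) bound to a primitive
    return desc !== undefined && "value" in desc && isPrimitive(desc.value);
  });
}
```

If a proxy can silently omit a property from the `getOwnPropertyNames` result, an object holding a mutable "foo" would still pass this check, which is precisely the membrane-piercing channel described above.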

Even without them, the core of the language is still memory-safe and interoperably implementable. From the spec I can tell you exactly what is supposed to happen in any of these situations.

I think it's risky to allow descriptors to be returned whose attributes may be accessors, thus potentially fooling much existing ES5 code that often does simple tests like:

```js
if (!desc.writable && !desc.configurable) {
  var val = desc.value; // we can assume the property is immutable, so we can cache its value
  ...
}
```

You can no longer rely on the correctness of such code if desc can't be guaranteed to be a plain old object containing only data properties.
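For instance (a contrived sketch), a descriptor-like object whose `value` attribute is secretly an accessor can defeat the caching pattern above:

```javascript
// A descriptor-like object whose "value" attribute is secretly an accessor:
const desc = { writable: false, configurable: false };
let reads = 0;
Object.defineProperty(desc, "value", {
  get() { return reads++ === 0 ? "expected" : "tampered"; }
});

// The naive ES5-era caching pattern:
let cached;
if (!desc.writable && !desc.configurable) {
  cached = desc.value;  // caches "expected"
}
const stable = (desc.value === cached);  // false: the second read differs
```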

If you want to enforce this as an invariant (after you've already done the fast check to make sure that the target is a high-integrity object), I'm fine with that. But please don't put more checks in the fast path where we don't have these integrity requirements.

Also, in any situation like the above, if you really care, you can always do an Object.getOwnPropertyDescriptor and verify that you really are dealing with an own data property. Make the client that has the requirement pay the cost, rather than everybody else who doesn't.

No, that wouldn't work. If we don't enforce that property descriptors returned from Object.getOwnPropertyDescriptor are normalized, you can imagine that the returned descriptor is itself a proxy, which, when asked for its own properties, returns yet another proxy that may again lie about the description of its own attributes (we're talking about the attributes of the attributes of a property here).

Yes, I assume that the descriptor objects may themselves be proxies. But aren't the proxies you really care about the ones that you are implementing as part of your sandbox implementation? Those you have control over. What does it matter to you if some random application object is implemented by a proxy with n levels of indirection?

The proxies I'm worried about are precisely those that an application doesn't have control over. Those are the ones that may confuse ES5 code relying on ES5 invariants.

This all sounds incredibly theoretical, but the fact of the matter is this: if you don't enforce the language invariants directly on the basic primitives provided by the language (such as Object.getOwnPropertyDescriptor), then the cat is out of the bag and you can no longer reason about integrity. Any further checks you want to make to build up more confidence are themselves made with unreliable primitives, so they provide only a false sense of security.

Hence my above question, is getOPN really one of these basic primitives?

Without at least 1 operation that reliably lists an object's properties, you can't reliably introspect a frozen object.

c) on the other hand is crucial for the non-configurability/non-extensibility checks mentioned below. It's no use checking some invariants on a data structure if that data structure can later still be mutated.

I don't see how an extra copy of the result array makes any difference in this regard (also see below). The array returned from Object.getOwnPropertyNames is still mutable even after you make a copy. It's only as stable as any other non-frozen object you are likely to encounter.

The returned copy, even if it is mutable, is fresh, meaning that no one else yet has a pointer to it. Hence it can't be modified by the proxy handler.

If we don't make a copy, but we do check the result for invariants, here's what may happen:

  1. proxy checks whether the returned array contains all non-configurable properties of the target
  2. let's assume this is the case, so the proxy returns this array to the client
  3. the proxy handler kept a reference to the array, and mutates it, e.g. by removing a non-configurable property
  4. client is now wrongly under the assumption that the returned array contains all non-configurable properties
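Using the trap name from the final ES2015 design (`ownKeys`) for illustration, the defensive copy is exactly what blocks step 3 from affecting the client:

```javascript
// The handler retains a reference to the array it returns (sketch):
const record = Object.freeze({ a: 1, b: 2 });
const retainedNames = ["a", "b"];       // matches the frozen target, so checks pass
const leakyProxy = new Proxy(record, {
  ownKeys() { return retainedNames; }   // same array object on every call
});

const seen = Object.getOwnPropertyNames(leakyProxy);
retainedNames.length = 0;               // step 3: handler mutates it afterwards

// Because the engine copied the trap result into a fresh array,
// the client's view `seen` is unaffected by the mutation.
```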

And then... how is this likely to be exploited? But if it is exploitable, then I'd be content if you only enforced it for the high-integrity object situations.

I can't give you a concrete exploit. I can only make the general observation that sandboxes such as Caja are written entirely in Javascript. I'm not sure this could be done without the language providing foolproof primitives. That's why I think it's important we think carefully about the values returned by traps.

  1. Every name in the list returned by the trap code is looked up on the target to determine whether or not it exists, even if the target is extensible. Each of those lookups is observable (the target might itself be a proxy), so, according to the algorithm, they all must be performed.

This is where we get into actual checks required to enforce non-configurability/non-extensibility.

Granted, the ES5 spec is not clear about the invariants on getOwnPropertyNames and keys. The currently specified invariants are a common-sense extrapolation of the existing invariants to cover these operations.

Yes, but if they aren't essential then we have to take into account the cost of doing non-essential invariant checks. These costs include the actual runtime cost of doing the checks; the constraints imposed upon implementations by the need for complete and properly sequenced occurrence of all potentially externally observable algorithm steps; and finally (and least important) the added complexity of the specification, which makes it harder to actually achieve interoperable implementations.

I agree about the cost. The question is whether we regard these invariants as essential or not. To me, they are the common-sense invariants an ES5 programmer would expect.

They may be common sense but not necessarily essential. There is a cost to all such enforcement.

As I argued earlier, you need at least 1 reliable primitive that is guaranteed to give you all the property names of a proxy that pretends to be frozen. Otherwise you can't reliably walk deep-frozen object graphs.

I'm fine with Object.getOwnPropertyNames being that reliable primitive, and perhaps relaxing the invariant on Object.keys and the for-in loop (the enumerate trap).

In practice, it determines the degree of confidence that a programmer can have in Object.getOwnPropertyNames and friends when dealing with a frozen object. If we waive these invariant checks, then the results of those operations can never be relied upon to introspect a frozen object's list of property names:

```js
Object.isFrozen(proxy)             // true
Object.getOwnPropertyNames(proxy)  // ['foo']
Object.getOwnPropertyNames(proxy)  // [ ]
```

Here, the 'foo' property apparently disappeared on a frozen object.
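With the invariant check in place (as it ended up in the final ES2015 `ownKeys` design, used here for illustration), this lie is caught at the trap boundary:

```javascript
const target = Object.freeze({ foo: 1 });
let hide = false;
const proxy = new Proxy(target, {
  ownKeys() { return hide ? [] : ["foo"]; }  // flaky handler (sketch)
});

Object.getOwnPropertyNames(proxy);  // consistent with the target: ok
hide = true;
// Omitting the non-configurable "foo" is now detected by the proxy itself:
let caught = false;
try {
  Object.getOwnPropertyNames(proxy);
} catch (e) {
  caught = e instanceof TypeError;  // the proxy throws rather than lie
}
```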

Yes, but copying the result an extra time doesn't ensure that two separate calls produce the same result.

It's not the copying alone that guarantees it, it's the invariant check that 'foo' is in the result + the copying that guarantees it.

If neither the for-in loop nor Object.getOwnPropertyNames nor Object.keys can reliably report an object's own properties, then we've made it impossible to reliably traverse and inspect a presumably deep-frozen object graph.

If that is the goal, then maybe what you need is a new operation that cannot be trapped. Your algorithm is essentially doing all the work, and more, necessary to generate an accurate report of the actual non-configurable properties of the target. If that is what is needed for your deep-frozen traversals, add that operation. That seems better than allowing the Proxy trap to do the work to report anything it wants, but then coming in after the fact and checking whether what was reported matches what you demand. In this case, it seems preferable to just provide the results you demand without any handler intervention.

This seems to harken back to the original proxy design that essentially turned frozen objects into regular objects with no handlers. If that is what you need, you should provide access to the results you need for the situations where you need them.

But you should not be adding this overhead for Proxy uses that don't have those requirements.

The problem with, say, a non-trappable [[GetNonConfigurableOwnProperties]] internal method, is that membranes can't adapt. We were previously in this situation with the [[Extensible]] internal property. Eventually we ended up trapping Object.isExtensible and Object.preventExtensions, even though these traps can't actually return a result that is different from applying the operation directly to the target. They're still useful to trap, as the proxy may then respond to these operations and change the state of its own target before returning a result.
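A minimal sketch of that membrane pattern (the names `wetObject` and `shadow` are illustrative): the trap cannot contradict its target, but it can update the target just in time before answering.

```javascript
const wetObject = {};                // the "real" object behind the membrane
const shadow = {};                   // the proxy's target
const dryProxy = new Proxy(shadow, {
  isExtensible(target) {
    // Notification: sync the shadow with the wet object before answering.
    if (!Object.isExtensible(wetObject)) Object.preventExtensions(target);
    return Reflect.isExtensible(target);  // must agree with the target
  }
});

Object.preventExtensions(wetObject);
const answer = Object.isExtensible(dryProxy);  // shadow was locked in the trap
```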

I'm pretty sure introducing a new non-trappable method is going to lead to the same problems for membranes.

Then have the trap but always ignore its results, and call a non-trapping operation on the target object to get the actual list of names. Your integrity checks are all about non-configurable properties. Normal, non-high-integrity proxies want integrity-checking algorithms that are proportional to the number of non-configurable properties (usually 0) rather than the total number of properties.

Details matter in this case. Let's go over them. The invariant for getOwnPropertyNames consists of three parts:

  1. check whether the trap result contains all non-configurable target properties
  2. if the target is non-extensible, check whether the trap result contains no new properties beyond those defined on the target
  3. if the target is non-extensible, check whether all existing target properties are in the result (if not, a proxy may make it appear that a property was "deleted" on a non-extensible object, which is otherwise impossible)
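The three checks can be sketched in plain JavaScript (the function and message names are illustrative, not spec text):

```javascript
function validateOwnNamesResult(target, trapResult) {
  const result = new Set(trapResult);
  const targetKeys = Object.getOwnPropertyNames(target);
  const extensible = Object.isExtensible(target);
  for (const key of targetKeys) {
    const desc = Object.getOwnPropertyDescriptor(target, key);
    // #1: every non-configurable target property must be reported
    if (desc && !desc.configurable && !result.has(key)) {
      throw new TypeError(`non-configurable '${key}' missing from trap result`);
    }
    // #3: a non-extensible target may not appear to lose properties
    if (!extensible && !result.has(key)) {
      throw new TypeError(`'${key}' apparently deleted from non-extensible target`);
    }
  }
  if (!extensible) {
    // #2: no new properties may be invented on a non-extensible target
    const targetSet = new Set(targetKeys);
    for (const key of trapResult) {
      if (!targetSet.has(key)) {
        throw new TypeError(`'${key}' does not exist on non-extensible target`);
      }
    }
  }
}
```

Note that the loop over `targetKeys` is what makes Part #1 and Part #3 proportional to P, the total number of target properties, as discussed next.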

Part #1 is proportional to the number of non-configurable target properties (call this number N)

Not P? At least in the generic case you have to get the property descriptor of every property in order to determine which are non-configurable. But I guess it doesn't change the answer.

Yes, sorry, should have been more explicit: with only Object.getOwnPropertyNames, this is proportional to P. I was operating under the assumption of having a [[GetOwnNonConfigurableProperties]] method as you suggested earlier.

Part #2 is proportional to the number of properties returned in the trap result (call this M) Part #3 is proportional to the total number of target properties (whether they be configurable or not) (call this P)

For a non-extensible target, the total cost is O(N + M + P), and since N <= P, the cost is O(M + P). In other words, the "optimization" of using a [[GetNonConfigurableOwnProperties]] method in Part #1 is not very effective, as Part #3 must visit all own properties anyway.

For an extensible target, I agree [[GetNonConfigurableOwnProperties]] would introduce a payoff, reducing the cost of the check from O(P) to O(N).

It still appears to me that for Sealed/Frozen objects, which I believe are the ones you actually care about, you are enforcing that the list returned is exactly the own properties of the ultimate ordinary target object (perhaps behind multiple levels of proxy indirection). In other words, the result cannot be meaningfully changed. In that case it doesn't need to be a trap.

You're right that the result cannot be meaningfully changed for frozen objects. But it still needs to be a trap for membranes. The issue is again that the membrane proxy needs to be notified of the operation occurring, so that it can update its target object before the operation proceeds. Like Brandon mentioned earlier in this thread, the trap is effectively just a notification callback at that point.

# Allen Wirfs-Brock (13 years ago)

On Nov 22, 2012, at 5:36 AM, Tom Van Cutsem wrote:

2012/11/21 Allen Wirfs-Brock <allen at wirfs-brock.com>

On Nov 21, 2012, at 12:09 PM, Tom Van Cutsem wrote:

Let's first discuss whether there is an issue with just letting (unique) symbols show up in Object.{keys,getOwnPropertyNames}. It's still a change in return type (from Array[String] to Array[String|Symbol]), but a far more innocuous one than changing Array[String] into "Any".

Wow, I actually think that the Array[String|Symbol] change is far more likely to have actual compatibility impact. I can imagine various scenarios where a program or library is doing string processing on the elements returned from keys/getOPN. That seems far more likely than that they have dependencies upon the returned collection being Array.isArray rather than just Array-like.

If we don't do any invariant checks, the trap isn't even obliged to return an array-like. Code that expects an array-like is likely to immediately crash if this happens though, so I'm not sure if that's a real issue.

Just like all the generic array functions. Essentially all objects are array-like; you don't even need to have a "length" property. Such "array-likes" generally act as if they had a length of 0.

Alternatively, what's the real difference to an application whether it "crashes" (eg, throws TypeError) within the Proxy implementation of [[Keys]] or crashes when it touches the object returned from Object.keys?

...

A concrete example: say I'm implementing a membrane that wants to allow frozen "records" to pass through the membrane unwrapped. By "record", I mean an object with only data properties bound to primitives (strings, numbers, ...). Since individual strings and numbers can pass through unwrapped, it's not entirely unreasonable to allow compound values composed of only primitives to be passed through unwrapped as well.

To pass such frozen records, the membrane needs to perform a check to see whether the frozen object is indeed a record. To do so, it must be able to iterate over all the properties of the object and test whether they are indeed data properties bound to primitives.

If the frozen record is a proxy that can lie about its own properties, it can simply fail to report a particular "foo" property that is bound to a mutable object, thus opening a communications channel piercing the membrane.

...

Hence my above question, is getOPN really one of these basic primitives?

Without at least 1 operation that reliably lists an object's properties, you can't reliably introspect a frozen object.

...

It still appears to me that for Sealed/Frozen objects, which I believe are the ones you actually care about, you are enforcing that the list returned is exactly the own properties of the ultimate ordinary target object (perhaps behind multiple levels of proxy indirection). In other words, the result cannot be meaningfully changed. In that case it doesn't need to be a trap.

You're right that the result cannot be meaningfully changed for frozen objects. But it still needs to be a trap for membranes. The issue is again that the membrane proxy needs to be notified of the operation occurring, so that it can update its target object before the operation proceeds. Like Brandon mentioned earlier in this thread, the trap is effectively just a notification callback at that point.

Then I think the thing to do is provide a [[GetNonConfigurablePropertyKeys]] internal method that, for Proxy objects, only calls the trap as a notification and always returns the [[GetNonConfigurablePropertyKeys]] of the target object (which would eventually bottom out in an ordinary object). Because your use cases require sealed/frozen objects, the list returned in those situations is equivalent to the own property list.

I'd also eliminate all of the integrity checking from [[Keys]]/[[GetOwnPropertyNames]], because for your use cases they simply wouldn't be used. But they still have utility for people implementing pure open virtual objects. Eliminating the unnecessary integrity checks greatly simplifies them and should make them more efficient.

Finally, because sealed/frozen is so key to your use cases, I'd move forward with my suggestion for formalizing sealed/frozen as object states, rather than something that has to be deduced from [[Extensible]] and the [[Configurable]]/[[Writable]] attributes. It may not be essential (implementations might internally optimize it that way anyway), but it seems important enough that it would be better for the spec to be explicit about this rather than leaving it to implementations to figure out that such an optimization is important. Also, making the state explicit eliminates all the individually observable [[GetOwnProperty]] calls that are otherwise needed to deduce frozen/sealed (and which can't be optimized away, because they are observable via traps). BTW, the ordering of those calls is currently unspecified, because property name enumeration order is unspecified.

So one new internal method [[GetNonConfigurablePropertyKeys]] (although I'd really prefer to combine it, [[Keys]], and [[GetOwnPropertyNames]] into a single parameterized trap). Also replace [[PreventExtensions]]/[[IsExtensible]] with [[SetIntegrity]]/[[GetIntegrity]], where the state is one of "non-extensible", "sealed", or "frozen" and only lower-to-higher transitions are allowed. Also, for proxies this would generate a "notify only" trap. The Object.seal/freeze/isSealed/isFrozen/preventExtensions/isExtensible functions could then all be implemented in terms of those two traps.
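The proposed one-way lattice of integrity states might be sketched as follows (the level names, including a base "extensible" state, are illustrative, not from the proposal):

```javascript
// Hypothetical sketch of the proposed integrity states; only "lower to
// higher" transitions are permitted, mirroring [[SetIntegrity]].
const INTEGRITY_LEVELS = ["extensible", "non-extensible", "sealed", "frozen"];

function setIntegrity(current, requested) {
  const from = INTEGRITY_LEVELS.indexOf(current);
  const to = INTEGRITY_LEVELS.indexOf(requested);
  if (from < 0 || to < 0) throw new TypeError("unknown integrity state");
  if (to < from) throw new TypeError("integrity state can only increase");
  return INTEGRITY_LEVELS[to];
}
```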

What do you think?

# Dean Tribble (13 years ago)

I am looking forward to proxies in JavaScript, and had a thought on the issues below. You could extend the "direct proxy" approach for this.

When the Proxy receives getOwnPropertyNames, it

  1. notifies the handler that property names are being requested
  2. the handler adds/removes any properties (configurable or otherwise subject to the normal constraints) on the target
  3. upon return, the proxy invokes getOwnPropertyNames directly on the target (e.g., invoking the *normal* system primitive)

This approach appears to have consistent behavior for configurability and extensibility. For example, the trap operation above could add configurable properties to an extensible target, and remove them later. It could add non-configurable properties, but they are permanent once added, etc. Thus there's no loss of generality. In addition to optionally setting up properties on the target, the handler trap above would need to indicate to the proxy (via an exception or a boolean result) whether the getOwnPropertyNames operation should proceed or fail.
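The three steps above might be sketched like this (the hook name `onGetOwnPropertyNames` is hypothetical, not an actual Proxy API):

```javascript
// Sketch of the notification-style query Dean describes:
function queryOwnPropertyNames(target, handler) {
  if (typeof handler.onGetOwnPropertyNames === "function") {
    // Steps 1-2: notify the handler, which may add/remove properties on the
    // target, subject to the normal configurability/extensibility rules.
    const proceed = handler.onGetOwnPropertyNames(target);
    if (proceed === false) throw new TypeError("operation rejected by handler");
  }
  // Step 3: consult the ordinary primitive directly -- no copy, no validation.
  return Object.getOwnPropertyNames(target);
}
```

Because the final answer always comes from the target itself, the non-configurability/non-extensibility invariants hold by construction rather than by after-the-fact checking.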

This extension of the "direct proxy" approach applies to all query operations, eliminates the copying and validation overhead discussed below, simplifies the implementation, retains full backwards compatibility, and enables most if not all the expressiveness we might expect for proxies.

Dean

From: Allen Wirfs-Brock <allen at wirfs-brock.com>

# Mark S. Miller (13 years ago)

+1. I think this is a really effective extension of the direct proxy approach, and I don't know why we didn't see it earlier. It's weird that this preserves all the flexibility of fully virtual configurable properties even though it insists that even these be made into real properties on the target. The trick is that placing a configurable property on the target doesn't commit the handler to anything, since the handler can remove or change this property freely as of the next trap.

Apologies again for not yet having the time to do more than skim the thread at this point. But IIRC someone already suggested a similar change to some of the imperative traps -- perhaps freeze, seal, and preventExtensions. The handler would no longer perform these operations on the target, only to have the proxy check them afterwards. Rather, if the trap indicates that the operation should succeed, the proxy then simply performs the operation -- or else throws. I wonder if this philosophy could be extended to some of the other imperative operations as well?

What expressiveness does this even-more-direct proxy approach lose? AFAICT, not much. At the same time, it should result in a much simpler implementation and much greater confidence that the invariants of the non-proxy sublanguage are preserved by the introduction of proxies. In fact, I think it's much stronger on invariant preservation than the current direct proxies, while being much simpler.