A case for removing the seal/freeze/isSealed/isFrozen traps
Loss of identity, extra allocations, and forwarding overhead remain problems.
It seems to me that you are focusing too much on "share ... to untrusted parties." It's true you want either a membrane or an already-frozen object in such a setting. But the latter case, already-frozen object, does not want a membrane, both to avoid identity change and to avoid the allocation and forwarding overheads. And outside of untrusted parties, frozen objects have their uses -- arguably more over time with safe parallelism in JS.
Warning: In this post, I'll be diverging a bit from the main topic.
Loss of identity, extra allocations, and forwarding overhead remain problems.
I'm doubtful loss of identity matters often enough to be a valid argument here. I'd be interested in being proved wrong, though.
I understand the point about extra allocation. I'll talk about that below.
The forwarding overhead can be made nonexistent in the very case I've described: the traps you care about are absent from the handler, so engines are free to optimize [[Get]] & friends as operations applied directly to the target. A write barrier on the handler can deoptimize, but in most practical cases the deoptimization won't happen, because in most practical cases handlers don't change.
It seems to me that you are focusing too much on "share ... to untrusted parties."
Your very own recent words: "In a programming-in-the-large setting, a writable data property is inviting Murphy's Law. I'm not talking about security in a mixed-trust environment specifically. Large programs become "mixed trust", even when it's just me, myself, and I (over time) hacking the large amount of code."...to which I agree with (obviously?)
And "Be a better language for writing complex applications" is listed among the first goals.
Maybe I should use another word than "untrusted parties". What I mean is "any code that will manipulate something without necessarily caring to learn about what this something expects as preconditions and its own invariants". This includes security issues of course, but also buggy code (which, in big applications, is often related to a mismatch between a precondition/expectation and how something is actually used).
I've seen this in previous experience on a Chrome extension, where someone would seal an object as a form of documentation to express "I need these properties to stay in the object". It looked like:

function C(){
    // play with |this|
    return Object.seal(this)
}
My point here is that people do want to protect their object integrity against "untrusted parties" which in that case was just "people who'll contribute to this code in the future".
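A minimal sketch of what Object.seal buys in this pattern (restating the constructor from the example above): existing properties stay writable, but adding or deleting properties fails.

```javascript
"use strict";

// Restating the constructor from the example above: seal after construction.
function C() {
  this.x = 1;                  // set up all properties before sealing
  return Object.seal(this);
}

var c = new C();
c.x = 2;                       // fine: existing properties stay writable
// c.y = 3;                    // would throw in strict mode: no new properties
// delete c.x;                 // would also throw: no property removal
```

In sloppy mode the failed assignments are silent instead of throwing, which is part of why the "documentation" value of seal is easy to underestimate.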
Anecdotally, the person removed the Object.seal before the return for performance reasons, based on a JSPerf test. Interestingly, a JSPerf test with a proxy-based solution might have convinced them to use proxies instead of Object.seal. But that's a JSPerf test, and it doesn't really measure the GC overhead of the extra objects. Are there data on this? Are there methodologies to measure this overhead? I understand it, but I find myself unable to pull up numbers on this topic, or convincing arguments that JSPerf only measures one part of the perf story and that its nice conclusion graph should be taken with a pinch of salt.
It's true you want either a membrane or an already-frozen object in such a setting.
Not a membrane, just a proxy that protects its target. Objects linked from the proxy likely came from somewhere else; they're in charge of deciding their own "integrity policy".
And outside of untrusted parties, frozen objects have their uses -- arguably more over time with safe parallelism in JS.
Arguably indeed. I would love to see this happen. Still, if (deeply) frozen "POJSOs" could be shared among contexts, I think we can agree that it wouldn't apply to frozen proxies for a long time (ever?)
I went a bit too far suggesting frozen objects could de-facto disappear with proxies. I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen traps.
2013/2/13 David Bruant <bruant.d at gmail.com>
Le 12/02/2013 14:29, Brendan Eich a écrit :
Loss of identity, extra allocations, and forwarding overhead remain problems.
I'm doubtful loss of identity matters often enough to be a valid argument here. I'd be interested in being proved wrong, though.
I do think identity is an issue in practice, especially without a membrane in-place to preserve object identity across the membrane. Also:
It's true you want either a membrane or an already-frozen object in such a setting.
Not a membrane, just a proxy that protects its target. Objects linked from the proxy likely came from somewhere else; they're in charge of deciding their own "integrity policy".
"Freezing" an object by handing out a read-only view is fragile without a full membrane: what if a method of the wrapped object returns |this|, or passes its |this| argument to some client-provided callback? It's all too easy for a direct reference to the target to escape, thus bypassing the read-only view. This is not the case with frozen objects.
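A sketch of the kind of leak described here, under assumed names (makeCounter is hypothetical): the target's method hands out a direct self-reference captured in a closure, bypassing the read-only view.

```javascript
// Hypothetical factory: the object's method closes over a direct self-reference.
function makeCounter() {
  var self = {
    count: 0,
    increment: function () {
      self.count += 1;       // operates on the raw object, not the proxy
      return self;           // ...and hands the raw object back to the caller
    }
  };
  return self;
}

var counter = makeCounter();
var view = new Proxy(counter, {
  set: function () { throw new TypeError("read-only view"); },
  defineProperty: function () { throw new TypeError("read-only view"); }
});

var leaked = view.increment();  // no get trap, so the call forwards to the target
leaked === counter;             // true: the raw, writable object has escaped
leaked.count = 42;              // writes succeed, bypassing the "read-only" view
```

A frozen object has no such escape hatch: every reference to it is equally immutable.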
I went a bit too far suggesting frozen objects could de-facto disappear with proxies. I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen traps.
I think Allen and I reached consensus that we might do without those traps. In addition, Allen was considering an alternative design where the "state" of an object (i.e. "extensible", "non-extensible", "sealed" or "frozen") is represented explicitly as an internal property, so that Object.isFrozen and Object.isSealed need not "derive" the state of an object from its properties.
Le 13/02/2013 20:36, Tom Van Cutsem a écrit :
Hi David,
I went a bit too far suggesting frozen objects could de-facto disappear with proxies. I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen traps
I think Allen and I reached consensus that we might do without those traps.
Excellent!
In addition, Allen was considering an alternative design where the "state" of an object (i.e. "extensible", "non-extensible", "sealed" or "frozen") is represented explicitly as an internal property, so that Object.isFrozen and Object.isSealed need not "derive" the state of an object from its properties.
Interesting. So what would happen when calling Object.isFrozen on a proxy? Would Object.isFrozen/isSealed/isExtensible reach out directly to the target? or a unique "state" trap returning a string for all of them? ("state" is too generic of a name, but you get the idea)
Regardless of the final decision on (full) notification proxies, maybe these operations (isSealed/isFrozen) could have a notification trap. The invariant is that the answer has to be the target's (all the time), so the trap return value is irrelevant. Like the getPrototypeOf trap.
On Feb 13, 2013, at 12:53 PM, David Bruant wrote:
Le 13/02/2013 20:36, Tom Van Cutsem a écrit :
Hi David,
I went a bit too far suggesting frozen objects could de-facto disappear with proxies. I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen traps
I think Allen and I reached consensus that we might do without those traps.

Excellent!
Where "do without" means replaced with set/getIntegrity traps, and objects have explicit internal state whose value is one of normal/non-extensible/sealed/frozen (and possibly "fixed-inheritance", between normal and non-extensible, to freeze [[Prototype]]).
[[SetIntegrity]] can increase the integrity level but not decrease it.
The perf and invariant complexity concerns come from the fact that the sealed/frozen status of an object can only be inferred by inspecting all of its properties. Having an explicit state eliminates the need to do this inspection. It also simplifies the MOP by merging all of the extensible/sealed/frozen related MOP operations into only two ops. But, one way or another, these object state transitions must be accounted for in the MOP.
For this to fly, implementations have to be able to expand their current 1 bit of extensible state to at least 2 bits (3 would be better). Or perhaps not; I suppose we could just introduce the MOP-level changes, and a lazy implementation could continue to infer the state by examining all of an object's properties.
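A rough user-land model of the proposed two-op design. The integrity levels and the getIntegrity/setIntegrity names follow the discussion above, but this is an assumption-laden sketch, not a spec:

```javascript
// Hypothetical model of the proposed design: the six operations
// (preventExtensions/isExtensible/seal/isSealed/freeze/isFrozen)
// collapse into two, getIntegrity and setIntegrity.
var LEVELS = ["normal", "non-extensible", "sealed", "frozen"];

function makeIntegrityModel() {
  var state = new WeakMap();  // stands in for the explicit internal state

  function getIntegrity(obj) {
    return state.get(obj) || "normal";
  }

  function setIntegrity(obj, level) {
    // [[SetIntegrity]] can raise the integrity level but never lower it
    if (LEVELS.indexOf(level) < LEVELS.indexOf(getIntegrity(obj))) {
      throw new TypeError("integrity level cannot decrease");
    }
    state.set(obj, level);
  }

  // The old operations become thin wrappers over the two new ones.
  return {
    getIntegrity: getIntegrity,
    setIntegrity: setIntegrity,
    preventExtensions: function (o) { setIntegrity(o, "non-extensible"); },
    seal:   function (o) { setIntegrity(o, "sealed"); },
    freeze: function (o) { setIntegrity(o, "frozen"); },
    isExtensible: function (o) { return getIntegrity(o) === "normal"; },
    isSealed: function (o) {
      var s = getIntegrity(o);
      return s === "sealed" || s === "frozen";
    },
    isFrozen: function (o) { return getIntegrity(o) === "frozen"; }
  };
}
```

The model only tracks the state transitions; a real implementation would also have to apply the property-level effects (non-configurability, non-writability) when the level rises.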
In addition, Allen was considering an alternative design where the "state" of an object (i.e. "extensible", "non-extensible", "sealed" or "frozen") is represented explicitly as an internal property, so that Object.isFrozen and Object.isSealed need not "derive" the state of an object from its properties.

Interesting. So what would happen when calling Object.isFrozen on a proxy? Would Object.isFrozen/isSealed/isExtensible reach out directly to the target? or a unique "state" trap returning a string for all of them? ("state" is too generic of a name, but you get the idea)
This is a question regarding proxy design rather than the MOP: either get/setIntegrity traps to the handler, or it forwards directly to the target. That would be a design issue for Tom, but my starting point is to simply follow the current design decisions made for [[PreventExtensions]]/[[IsExtensible]].
Regardless of the final decision on (full) notification proxies, maybe these operations (isSealed/isFrozen) could have a notification trap. The invariant is that the answer has to be the target's (all the time), so the trap return value is irrelevant. Like the getPrototypeOf trap.
Right, one way or another these operations need to be part of the MOP.
2013/2/14 Allen Wirfs-Brock <allen at wirfs-brock.com>
On Feb 13, 2013, at 12:53 PM, David Bruant wrote:
Interesting.
So what would happen when calling Object.isFrozen on a proxy? Would Object.isFrozen/isSealed/isExtensible reach out directly to the target? or a unique "state" trap returning a string for all of them? ("state" is too generic of a name, but you get the idea)
This is a question regarding proxy design rather than the MOP: either get/setIntegrity traps to the handler, or it forwards directly to the target. That would be a design issue for Tom, but my starting point is to simply follow the current design decisions made for [[PreventExtensions]]/[[IsExtensible]].
A get/setIntegrity trap with the invariant constraints of isExtensible/preventExtensions would be the obvious path to take.
One thing that remains unclear to me: if the state of an object becomes explicit, we introduce the risk of this state becoming inconsistent with the state from which it is derived.
For example, setting the integrity of an object to "frozen" must still make all own properties non-configurable, i.e.
Reflect.setIntegrity(obj, "frozen")
should have the same effect as
Object.freeze(obj)
Likewise, turning the last configurable property of a non-extensible object into a non-configurable property should automagically change the state to "frozen", i.e.
Object.defineProperty(obj, "lastProperty", { configurable: false })
// must update internal state as well as the property
Reflect.getIntegrity(obj) === "frozen"
Will this not just shift the current complexity someplace else?
Regardless of the final decision on (full) notification proxies, maybe these operations (isSealed/isFrozen) could have a notification trap. The invariant is that the answer has to be the target's (all the time), so the trap return value is irrelevant. Like the getPrototypeOf trap.
Right, one way or another these operations need to be part of the MOP.
If we go for get/setIntegrity I wouldn't re-introduce all the derived operations as notification traps. Then we might as well leave things the way they are.
On 13 February 2013 13:39, David Bruant <bruant.d at gmail.com> wrote:
Warning: In this post, I'll be diverging a bit from the main topic.
Le 12/02/2013 14:29, Brendan Eich a écrit :
Loss of identity, extra allocations, and forwarding overhead remain problems.
I'm doubtful loss of identity matters often enough to be a valid argument here. I'd be interested in being proved wrong, though.
I understand the point about extra allocation. I'll talk about that below.
The forwarding overhead can be made nonexistent in the very case I've described: the traps you care about are absent from the handler, so engines are free to optimize [[Get]] & friends as operations applied directly to the target.
You're being vastly over-optimistic about the performance and the amount of optimisation that can realistically be expected for proxies. Proxies are inherently unstructured, higher-order, and effectful, which defeats most sufficiently simple static analyses. A compiler has to work much, much harder to get useful results. Don't expect anything anytime soon.
I've seen this in previous experience on a Chrome extension, where someone would seal an object as a form of documentation to express "I need these properties to stay in the object". It looked like:

function C(){
    // play with |this|
    return Object.seal(this)
}
My point here is that people do want to protect their object integrity against "untrusted parties" which in that case was just "people who'll contribute to this code in the future".
Anecdotally, the person removed the Object.seal before the return for performance reasons, based on a JSPerf test [3]. Interestingly, a JSPerf test with a proxy-based solution [4] might have convinced them to use proxies instead of Object.seal.
Take all these JSPerf micro-benchmark games with two grains of salt; lots of them focus on premature optimisation. Also, seal and freeze are far more likely to see decent treatment than proxies.
But more importantly, I think you get too hung up on proxies as the proverbial hammer. Proxies are very much an expert feature. Using them for random micro abstractions is like shooting birds with a nuke. A language that makes that necessary would be a terrible language. All programmers messing with home-brewed proxies on a daily basis is a very scary vision, if you ask me.
A compiler has to work much, much harder to get useful results. Don't expect anything anytime soon.
var handler = {set: function(){throw new TypeError}}
var p = new Proxy({a: 32}, handler);
p.a;
It's possible at runtime to notice that the handler of p doesn't have a get trap, optimize p.[[Get]] as target.[[Get]], and guard this optimization on handler modifications. Obviously, do that only if the code is hot. I feel it's not much more work than what JS engines already do, and the useful result is effectively getting rid of the forwarding overhead. Is this vastly over-optimistic?
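The need for the guard can be illustrated from user-land, since handler mutations are observable:

```javascript
// Why such an optimization needs a guard: the handler can gain a trap at any
// time, and the change must be observed by subsequent operations on the proxy.
var handler = { set: function () { throw new TypeError("no writes"); } };
var p = new Proxy({ a: 32 }, handler);

var before = p.a;   // 32: no get trap, so [[Get]] forwards to the target

handler.get = function (target, key) { return "trapped:" + key; };
var after = p.a;    // "trapped:a": the newly installed trap takes effect
```

An engine specializing p.[[Get]] to target.[[Get]] would have to deoptimize exactly when a "get" property appears on the handler, which is the write barrier mentioned earlier in the thread.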
Take all these JSPerf micro-benchmark games with two grains of salt;
... that's exactly what I said right after :-/ "But that's a JSPerf test, and it doesn't really measure the GC overhead of the extra objects." "JSPerf only measures one part of the perf story and its nice conclusion graph should be taken with a pinch of salt."
lots of them focus on premature optimisation.
I'm quite aware. I fear the Sphinx. I wrote "might have convinced them to use proxies instead of Object.seal". I didn't say I agreed, and I actually don't.
Also, seal and freeze are far more likely to see decent treatment than proxies.
Why so?
All programmers messing with home-brewed proxies on a daily basis is a very scary vision, if you ask me.
hmm... maybe.
On 14 February 2013 19:16, David Bruant <bruant.d at gmail.com> wrote:
Le 14/02/2013 18:11, Andreas Rossberg a écrit :
You're being vastly over-optimistic about the performance and the amount of optimisation that can realistically be expected for proxies. Proxies are inherently unstructured, higher-order, and effectful, which defeats most sufficiently simple static analyses. A compiler has to work much, much harder to get useful results. Don't expect anything anytime soon.
var handler = {set: function(){throw new TypeError}}
var p = new Proxy({a: 32}, handler);
p.a;
It's possible at runtime to notice that the handler of p doesn't have a get trap, optimize p.[[Get]] as target.[[Get]], and guard this optimization on handler modifications. Obviously, do that only if the code is hot. I feel it's not much more work than what JS engines already do, and the useful result is effectively getting rid of the forwarding overhead. Is this vastly over-optimistic?
Yes. Proxies hook into many different basic operations, and there are many special cases you could potentially optimise for each of them, many of which don't come for free. I very much doubt that any vendor currently has serious plans to go down that rathole instead of spending their energy elsewhere. Certainly not before it is clear how (and how much) proxies will actually be used in practice.
On 14 February 2013 01:05, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Where "do without" means replaced with set/getIntegrity traps, and objects have explicit internal state whose value is one of normal/non-extensible/sealed/frozen (and possibly "fixed-inheritance", between normal and non-extensible, to freeze [[Prototype]]).
[[SetIntegrity]] can increase the integrity level but not decrease it.
The perf and invariant complexity concerns come from the fact that the sealed/frozen status of an object can only be inferred by inspecting all of its properties. Having an explicit state eliminates the need to do this inspection. It also simplifies the MOP by merging all of the extensible/sealed/frozen related MOP operations into only two ops. But, one way or another, these object state transitions must be accounted for in the MOP.
For this to fly, implementations have to be able to expand their current 1 bit of extensible state to at least 2 bits (3 would be better). Or perhaps not; I suppose we could just introduce the MOP-level changes, and a lazy implementation could continue to infer the state by examining all of an object's properties.
I still must be missing something. Why should the language be changed when the proposed change is equivalent anyway? Why is this an optimisation that the spec should worry about instead of the implementations?
Andreas Rossberg wrote:
But more importantly, I think you get too hung up on proxies as the proverbial hammer. Proxies are very much an expert feature. Using them for random micro abstractions is like shooting birds with a nuke. A language that makes that necessary would be a terrible language. All programmers messing with home-brewed proxies on a daily basis is a very scary vision, if you ask me.
This.
Andreas Rossberg wrote:
On 14 February 2013 19:16, David Bruant<bruant.d at gmail.com> wrote:
Le 14/02/2013 18:11, Andreas Rossberg a écrit :
You're being vastly over-optimistic about the performance and the amount of optimisation that can realistically be expected for proxies. Proxies are inherently unstructured, higher-order, and effectful, which defeats most sufficiently simple static analyses. A compiler has to work much, much harder to get useful results. Don't expect anything anytime soon.

var handler = {set: function(){throw new TypeError}}
var p = new Proxy({a: 32}, handler);
p.a;
It's possible at runtime to notice that the handler of p doesn't have a get trap, optimize p.[[Get]] as target.[[Get]], and guard this optimization on handler modifications. Obviously, do that only if the code is hot. I feel it's not much more work than what JS engines already do, and the useful result is effectively getting rid of the forwarding overhead. Is this vastly over-optimistic?
Yes. Proxies hook into many different basic operations, and there are many special cases you could potentially optimise for each of them, many of which don't come for free. I very much doubt that any vendor currently has serious plans to go down that rathole instead of spending their energy elsewhere. Certainly not before it is clear how (and how much) proxies will actually be used in practice.
You're right in general, and we have not optimized, e.g. inlining scripted trap calls.
We did do something special for our new DOM bindings I wanted to pass along, in case anyone is interested:
bugzilla.mozilla.org/show_bug.cgi?id=769911
Thanks to bz for the link. This is yet another inline cache specialization for expandos on nodelists.
I've worked a lot with ECMAScript 5 features in the last two years, and I must say I never found a good use case for Object.freeze/seal/preventExtensions; it actually raised more issues than it helped (those few times when I decided to use it). Currently I think that's not a JavaScript-y approach, and use cases mentioning "untrusted parties" sound "logical" just in theory; in practice, when we never actually include "untrusted" modules in our code base, it does not make much sense.
However, the main point I want to raise is that several times I had a use case for very similar functionality that seems not possible with the current API: I'd like to be able to prevent accidental object extensions. I want to control all enumerable properties of the object, so they can only be set via defineProperty, but any direct assignment of a non-existing property, e.g. 'x.notDefinedYet = value', will throw. Imagine some ORM implementation that propagates changes to the underlying persistence layer via setters; at this time we cannot prevent accidental property sets that may occur before the property was actually defined (and are therefore not caught by the setter). I assume that proxies will make such functionality possible, but maybe some Object.preventUndefinedExtensions would be even better.
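A sketch of the requested behaviour using a proxy. The name preventUndefinedExtensions is the poster's hypothetical one, not a real API:

```javascript
// Plain assignment to an undefined property throws, while
// Object.defineProperty still works for deliberate additions.
function preventUndefinedExtensions(target) {
  return new Proxy(target, {
    set: function (t, key, value) {
      if (!(key in t)) {
        throw new TypeError("assignment to undefined property: " + String(key));
      }
      t[key] = value;   // property exists: forward the write
      return true;
    }
    // defineProperty is deliberately not trapped, so it forwards to the
    // target and new properties can still be introduced on purpose.
  });
}

var record = preventUndefinedExtensions({});
Object.defineProperty(record, "name", {
  value: "", writable: true, enumerable: true, configurable: true
});
record.name = "Ada";    // fine: the property exists
// record.nmae = "Ada"; // would throw TypeError: likely a typo
```

This catches the .innerHTML-style typo bugs mentioned later in the thread at the moment they happen, instead of leaving "undefined" to surface far away.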
I like this direction: it would distinguish the user-level operation assignment from the meta-level operation definition. I'm not sure where delete fits in, but it's much less common, so less of a potential problem.
Le 15/02/2013 11:03, Mariusz Nowak a écrit :
I've worked a lot with ECMAScript 5 features in the last two years, and I must say I never found a good use case for Object.freeze/seal/preventExtensions; it actually raised more issues than it helped (those few times when I decided to use it). Currently I think that's not a JavaScript-y approach, and use cases mentioning "untrusted parties" sound "logical" just in theory; in practice, when we never actually include "untrusted" modules in our code base, it does not make much sense.
However, the main point I want to raise is that several times I had a use case for very similar functionality that seems not possible with the current API: I'd like to be able to prevent accidental object extensions.
If something accidental can happen, then "untrusted parties" is more than theoretical ;-) Brendan says it better [1]: "In a programming-in-the-large setting, a writable data property is inviting Murphy's Law. I'm not talking about security in a mixed-trust environment specifically. Large programs become "mixed trust", even when it's just me, myself, and I (over time) hacking the large amount of code."
"Security" and "untrusted parties" aren't about terrorist groups trying to hack your application to get a copy of your database or corrupt it, or about your choice to use some code downloaded from a dark-backgrounded website. They're about you trying to meet a deadline and not having time to carefully read the documentation and comments of every single line of the modules you're delegating to. Trust isn't an all-or-nothing notion. Anytime I say "untrusted", I should probably say "partially trusted" instead.

Trust also changes over time, mostly because as time passes, our brains forget the invariants and assumptions we baked into our code, and if those aren't enforced at compile time or runtime, we'll probably violate them at one point or another and thus create bugs. Or we just make mistakes, because we're human -- and that's exactly the case you're describing. "Security" and "untrusted parties" are about our inability as human beings to remember everything we do and our inability to be perfect. Any "security" mechanism is a mechanism to protect against hostile outsiders, but also, and probably mostly, against ourselves over time.
It is usually not considered so, but separation of concerns is, in my opinion, a security mechanism. So are most of the so-called object-oriented good practices.
"Security" is very loaded with the emotions of people afraid of having their passwords stolen and of "cyber attacks". It's also loaded with the notion of human safety and integrity, which we, as human beings, are sensitive to. Maybe I should start using a different word...
I want to control all enumerable properties of the object, so they can only be set via defineProperty, but any direct assignment of a non-existing property, e.g. 'x.notDefinedYet = value', will throw. Imagine some ORM implementation that propagates changes to the underlying persistence layer via setters; at this time we cannot prevent accidental property sets that may occur before the property was actually defined (and are therefore not caught by the setter). I assume that proxies will make such functionality possible, but maybe some Object.preventUndefinedExtensions would be even better.
The problem is that there are probably dozens of use cases like yours [2], and the Object built-in can't welcome them all. Hence proxies as an extension mechanism for any "random micro-abstraction" (as Andreas Rossberg puts it ;-) )
David
[1] esdiscuss/2013-February/028724
[2] When I learned JS, how many times did I mistype .innerHTML and waste hours not understanding where some "undefined" string in my UI came from.
David, that's a great clarification; indeed it looks a bit different from that perspective.
Still, the only use case I see for freezing/sealing a whole object (the way it works now) is when we expose some constant dictionary object on which each property counts, and that's a very rare use case. I don't see much good in disallowing extensions to prototypes we expose; it's not the JS way. We can prevent accidental modifications of existing APIs, but disallowing custom extensions is too restrictive and not friendly in my opinion.
I definitely agree that something like "preventAccidentalExtensions" (disallows new properties through [[Put]] but not [[DefineOwnProperty]]) has more common use cases than preventExtensions, and for the precise reasons that David said. The security is usually against bugs, not attackers. PreventExtensions is a clumsy tool for managing capabilities because it leaves no room for giving some code permission while preventing other code, which is exactly what we want when the clueful me of now is writing code to manage the clueless I of the future.
... and security sensitive code could just ban/alter the reflection methods.
On Fri, Feb 15, 2013 at 9:24 AM, Erik Arvidsson <erik.arvidsson at gmail.com>wrote:
... and security sensitive code could just ban/alter the reflection methods. On Feb 15, 2013 8:29 AM, "Brandon Benvie" <bbenvie at mozilla.com> wrote:
I definitely agree that something like "preventAccidentalExtensions" (disallows new properties through [[Put]] but not [[DefineOwnProperty]]) has more common use cases than preventExtensions, and for the precise reasons that David said. The security is usually against bugs, not attackers. PreventExtensions is a clumsy tool for managing capabilities because it leaves no room for giving some code permission while preventing other code, which is exactly what we want when the clueful me of now is writing code to manage the clueless I of the future.
I think this would fit a really common use case, but I would say that the current attempted way to solve this problem is private names. Last I checked, private names (or the weak map variant) would not be frozen after an Object.freeze, but only trusted parties (like methods, getters/setters, and potentially those with the shared name) could modify them.
The pattern I would like to see optimized for using Object.freeze is the functional approach. I think the tools are there: Object.freeze makes immutable objects, and using Object.create to use frozen objects as prototypes, storing just the differences in the child object, could potentially be an elegant way of doing persistent data structures. I haven't really tested the performance of it, but I wonder how optimized it could get. The prototype chains could get very deep, but seeing as they would all be frozen all the way up, I wonder if it could be made more efficient.
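A minimal sketch of the pattern described, where each new "version" is a frozen child of the previous one and stores only its differences:

```javascript
// Each update creates a child of a frozen parent, storing only the changes.
function update(base, changes) {
  var next = Object.create(base);   // the frozen base becomes the prototype
  Object.keys(changes).forEach(function (key) {
    // defineProperty can shadow even non-writable inherited properties
    Object.defineProperty(next, key, {
      value: changes[key], enumerable: true
    });
  });
  return Object.freeze(next);       // each version is itself immutable
}

var v1 = Object.freeze({ x: 1, y: 2 });
var v2 = update(v1, { y: 3 });

v1.y; // 2 -- the original version is untouched
v2.x; // 1 -- inherited from v1 through the frozen prototype chain
v2.y; // 3 -- the single stored difference
```

Note that plain assignment could not shadow the inherited non-writable y; Object.defineProperty is needed, which echoes the assignment-vs-definition distinction discussed above.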
On 15 February 2013 14:29, Brandon Benvie <bbenvie at mozilla.com> wrote:
I definitely agree that something like "preventAccidentalExtensions" (disallows new properties through [[Put]] but not [[DefineOwnProperty]]) has more common use cases than preventExtensions, and for the precise reasons that David said. The security is usually against bugs, not attackers. PreventExtensions is a clumsy tool for managing capabilities because it leaves no room for giving some code permission while preventing other code, which is exactly what we want when the clueful me of now is writing code to manage the clueless I of the future.
If you need private extensibility, just complement preventExtensions with installing a private map or expando object.
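A sketch of that suggestion, assuming a WeakMap held privately by the trusted code: the object itself stays non-extensible, yet trusted code can still attach extra state to it.

```javascript
// The WeakMap is the "private map": only code that can see it can extend.
var expandos = new WeakMap();

function getExpando(obj) {
  var ext = expandos.get(obj);
  if (ext === undefined) {
    ext = {};
    expandos.set(obj, ext);
  }
  return ext;
}

var point = Object.preventExtensions({ x: 1, y: 2 });
// point.cached = ...;          // would fail: the object is non-extensible
getExpando(point).cached = 42;  // private extension, invisible to other code
```

This gives exactly the asymmetry Brandon asked for: holders of the map get extension rights, everyone else sees a sealed surface.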
On Feb 14, 2013, at 11:46 AM, Andreas Rossberg wrote:
On 14 February 2013 01:05, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Where "do without" means replaced with set/getIntegrity traps, and objects have explicit internal state whose value is one of normal/non-extensible/sealed/frozen (and possibly "fixed-inheritance", between normal and non-extensible, to freeze [[Prototype]]).
[[SetIntegrity]] can increase the integrity level but not decrease it.
The perf and invariant complexity concerns come from the fact that the sealed/frozen status of an object can only be inferred by inspecting all of its properties. Having an explicit state eliminates the need to do this inspection. It also simplifies the MOP by merging all of the extensible/sealed/frozen related MOP operations into only two ops. But, one way or another, these object state transitions must be accounted for in the MOP.
For this to fly, implementations have to be able to expand their current 1 bit of extensible state to at least 2 bits (3 would be better). Or perhaps not; I suppose we could just introduce the MOP-level changes, and a lazy implementation could continue to infer the state by examining all of an object's properties.
I still must be missing something. Why should the language be changed when the proposed change is equivalent anyway? Why is this an optimisation that the spec should worry about instead of the implementations?
It's to simplify the MOP, and that simplification is directly reflected as a simplification of the Proxy handler interface. Instead of 6 traps (preventExtensions, isExtensible, freeze, isFrozen, seal, isSealed), only two are needed.
Also, having an explicit frozen-object state simplifies some of the object invariants, which would otherwise require explicitly specified accesses to the target object that would be observable (if the target is itself a proxy).
On Feb 14, 2013, at 1:14 AM, Tom Van Cutsem wrote:
2013/2/14 Allen Wirfs-Brock <allen at wirfs-brock.com>
On Feb 13, 2013, at 12:53 PM, David Bruant wrote:
Interesting.
So what would happen when calling Object.isFrozen on a proxy? Would Object.isFrozen/isSealed/isExtensible reach out directly to the target? or a unique "state" trap returning a string for all of them? ("state" is too generic of a name, but you get the idea)
This is a question regarding proxy design rather than the MOP: either get/setIntegrity traps to the handler, or it forwards directly to the target. That would be a design issue for Tom, but my starting point is to simply follow the current design decisions made for [[PreventExtensions]]/[[IsExtensible]].
A get/setIntegrity trap with the invariant constraints of isExtensible/preventExtensions would be the obvious path to take.
One thing that remains unclear to me: if the state of an object becomes explicit, we introduce the risk of this state becoming inconsistent with the state from which it is derived.
For example, setting the integrity of an object to "frozen" must still make all own properties non-configurable, i.e.
Reflect.setIntegrity(obj, "frozen")
should have the same effect as
Object.freeze(obj)
yes, Object.freeze(obj) would be specified as performing: obj.[[SetIntegrity]]
Likewise, turning the last configurable property of a non-extensible object into a non-configurable property should automagically change the state to "frozen", i.e.
Object.defineProperty(obj, "lastProperty", { configurable: false }); // must update internal state as well as the property
Reflect.getIntegrity(obj) === "frozen"
If I was starting fresh, I would say that Object.isFrozen(obj) is true only if Object.freeze(obj) (or the equivalent Reflect.setIntegrity) has previously been performed, and that Object.preventExtensions() followed by setting every property to non-configurable is not equivalent to performing Object.freeze.
I wonder if we can make that change for ES6 without breaking anything. Does anybody know of code that does Object.isFrozen(obj) checks without also having the expectation that obj would have been explicitly frozen using Object.freeze?
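For contrast, here is a sketch (an editor's illustration, not from the thread) of how the integrity state must be *derived* today from per-property flags; the proposed explicit [[GetIntegrity]] state would replace this whole scan with a single lookup:

```javascript
// Derive an object's integrity level the way ES5 Object.isSealed/isFrozen
// must: check extensibility, then inspect every own property descriptor.
function inferIntegrity(obj) {
    if (Object.isExtensible(obj)) return "normal";
    var sealed = true, frozen = true;
    Object.getOwnPropertyNames(obj).forEach(function (name) {
        var desc = Object.getOwnPropertyDescriptor(obj, name);
        if (desc.configurable) sealed = frozen = false;
        if ('value' in desc && desc.writable) frozen = false;
    });
    return frozen ? "frozen" : sealed ? "sealed" : "non-extensible";
}
```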
Will this not just shift the current complexity someplace else?
Well, it means that for 100% backwards compatibility, Object.isFrozen would have to be something like:
1. Let state = obj.[[GetIntegrity]].
2. If state is "frozen", return true.
3. If state is "sealed" or "non-extensible", then return true if all properties are non-configurable and non-writable.
4. Return false.
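The backwards-compatible algorithm can be sketched in plain JavaScript (an editor's illustration; the hypothetical [[GetIntegrity]] internal method is modeled as a function argument, since no engine exposes it):

```javascript
// Backwards-compatible Object.isFrozen over an explicit integrity state.
// getIntegrity stands in for the proposed obj.[[GetIntegrity]].
function isFrozenCompat(obj, getIntegrity) {
    var state = getIntegrity(obj);              // step 1
    if (state === "frozen") return true;        // step 2
    if (state === "sealed" || state === "non-extensible") {   // step 3
        return Object.getOwnPropertyNames(obj).every(function (name) {
            var desc = Object.getOwnPropertyDescriptor(obj, name);
            return !desc.configurable && !('value' in desc && desc.writable);
        });
    }
    return false;                               // step 4
}
```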
The real complexity saving is in simplifying the MOP/Proxy handler interface and also in making Proxy invariants only sensitive to the explicit integrity state of an object.
Regardless of the final decision on (full) notification proxies, maybe these operations (isSealed/isFrozen) could have a notification trap. The invariant is that the answer has to be the target's (all the time), so the trap return value is irrelevant. Like the getPrototypeOf trap.
Right, one way or another these operations need to be part of the MOP.
If we go for get/setIntegrity I wouldn't re-introduce all the derived operations as notification traps. Then we might as well leave things the way they are.
I meant either get/setIntegrity or isExtensible/preventExtensions/isFrozen/freeze/isSealed/seal need to be part of the MOP. If we have get/setIntegrity we don't need the others (at the MOP/proxy trap level)
Le 16/02/2013 23:31, Allen Wirfs-Brock a écrit :
Will this not just shift the current complexity someplace else? Well, it means that for 100% backwards compatibility, Object.isFrozen would have to be something like:
1. Let state = obj.[[GetIntegrity]].
2. If state is "frozen", return true.
3. If state is "sealed" or "non-extensible", then return true if all properties are non-configurable and non-writable.
nit: You can set the state to "frozen" before returning true.
On 16 February 2013 20:36, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
On Feb 14, 2013, at 11:46 AM, Andreas Rossberg wrote:
On 14 February 2013 01:05, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Where "do without" means replaced with set/getIntegrity traps, and objects have explicit internal state whose value is one of normal/non-extensible/sealed/frozen (and possibly "fixed-inheritance", between normal and non-extensible, to freeze [[Prototype]]).
[[SetIntegrity]] can increase the integrity level but not decrease it.
The perf and invariant complexity concerns come from the fact that the sealed/frozen status of an object can only be inferred by inspecting all of its properties. Having an explicit state eliminates the need to do this inspection. It also simplifies the MOP by merging all of the extensible/sealed/frozen related MOP operations into only two ops. But, one way or another, these object state transitions must be accounted for in the MOP.
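The monotone transition rule ([[SetIntegrity]] may only increase integrity) can be pictured as a four-level lattice. Here is an editor's sketch with hypothetical names; nothing here is part of any spec:

```javascript
// The integrity levels, ordered from weakest to strongest.
var LEVELS = ["normal", "non-extensible", "sealed", "frozen"];

// A cell modeling the proposed per-object integrity state:
// it can move "up" the lattice but never back down.
function makeIntegrityCell() {
    var current = "normal";
    return {
        get: function () { return current; },
        set: function (requested) {
            if (LEVELS.indexOf(requested) < LEVELS.indexOf(current)) {
                throw new TypeError("cannot lower integrity from " +
                                    current + " to " + requested);
            }
            current = requested;
        }
    };
}
```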
For this to fly, implementations have to be able to expand their current 1 bit of extensible state to at least 2 bits (3 would be better). Or perhaps not, I suppose we could just introduce the MOP-level changes and a lazy implementation could continue to infer the state by examining all its properties.
I still must be missing something. Why should the language be changed when the proposed change is equivalent anyway? Why is this an optimisation that the spec should worry about instead of the implementations?
It's to simplify the MOP and that simplification is directly reflected as a simplification to the Proxy handler interface. Instead of 6 traps (preventExtensions, isExtensible, freeze, isFrozen, seal, isSealed) only two are needed.
Also, having an explicit frozen object state simplifies some of the object invariants which would otherwise perform explicitly specified accesses to the target object which would be observable (if the target is itself a proxy).
Well, that is either a breaking change, such that implementations cannot actually be lazy about it, or it doesn't really remove complexity, since you still need to infer the state as a fallback (i.e., it's just an optimisation).
I don't necessarily oppose making that breaking change, but we have to be aware that, even though it's an optimisation, the change is yet another complication of the object model. The additional state modifies the meaning of per-property descriptors on a second level. IIUC, defineProperty now has to check against that state, and getOwnPropertyDescriptor somehow has to take it into account, too. For direct proxies, the respective traps have to extend their validation logic. Overall, not a simplification, as far as I can see.
David Bruant wrote:
... "Security" is very loaded with emotions of people afraid to have their password stolen and "cyber attacks". It's also loaded with the notion of human safety and human integrity which, as human beings are sensitive to. Maybe I should start using a different word...
Great explanation, David. That's everything I've wanted to say but haven't been able to find the words. Thanks for this!
Also, I've started using the word "integrity" to describe this kind of code, to get away from the loadedness of "security". For instance, Mark Miller's "low-integrity" puzzle[1]:
function makeTable() {
    var array = [];
    return Object.freeze({
        add: function(v) { array.push(v); },
        store: function(i, v) { array[i] = v; },
        get: function(i) { return array[i]; }
    });
}
as a "high-integrity" function:
var freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);
function makeTable() {
    var array = [];
    return freeze({
        add: function(v) { push(array, v); },
        store: function(i, v) { array[i >>> 0] = v; },
        get: function(i) { return array[i >>> 0]; }
    });
}
Of course, you don't want to write this way all the time. I think it's good for library code.
[1] esdiscuss/2011-November/017964
Nathan
as a "high-integrity" function:
var freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);
function makeTable() {
    var array = [];
    return freeze({
        add: function(v) { push(array, v); },
        store: function(i, v) { array[i >>> 0] = v; },
        get: function(i) { return array[i >>> 0]; }
    });
}
Careful there, you're not done!-) With nodejs, adding the following
var table = makeTable();
table.add(1);
table.add(2);
table.add(3);
var secret;
Object.defineProperty(Array.prototype,42,{get:function(){ secret = this;}});
table.get(42);
console.log(secret);
secret[5] = "me, too!";
console.log( table.get(5) );
to your code prints
$ node integrity.js
[ 1, 2, 3 ]
me, too!
Couldn't resist, Claus
Claus Reinke wrote:
Careful there, you're not done!-) With nodejs, adding the following
var table = makeTable(); table.add(1); table.add(2); table.add(3);
var secret; Object.defineProperty(Array.prototype,42,{get:function(){ secret = this;}});
table.get(42); console.log(secret); secret[5] = "me, too!";
console.log( table.get(5) );
to your code prints
$ node integrity.js [ 1, 2, 3 ] me, too!
Couldn't resist, Claus
Nice! This is not something I had considered. Aside from freezing Array.prototype, I can only really think of one solution: not use an array.
var create = Object.create,
    freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);
function makeTable() {
    var array = create(null);
    return freeze({
        add: function(v) { push(array, v); },
        store: function(i, v) { array[i >>> 0] = v; },
        get: function(i) { return array[i >>> 0]; }
    });
}
I suppose the array isn't really needed since we're not using methods inherited from Array.prototype. Downside: Browsers won't know to optimize the object as an array.
(Side note: Mark's original post was in the context of frozen Array.prototype on non-compliant implementations which allow writing to inherited non-writable properties. I find this a fun exercise without frozen Array.prototype, though.)
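For the record, a quick script (an editor's sketch, not from the original thread) confirming that the Object.create(null) backing store defeats the Array.prototype getter attack shown above: the lookup for index 42 never reaches Array.prototype because the store has no prototype.

```javascript
var create = Object.create,
    freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);

function makeTable() {
    var array = create(null);   // prototype-less backing store
    return freeze({
        add: function(v) { push(array, v); },
        store: function(i, v) { array[i >>> 0] = v; },
        get: function(i) { return array[i >>> 0]; }
    });
}

var table = makeTable();
table.add(1);
table.add(2);
table.add(3);

var secret;
Object.defineProperty(Array.prototype, 42,
                      { configurable: true,
                        get: function() { secret = this; } });
table.get(42);
console.log(secret);        // undefined: the getter never fired
delete Array.prototype[42]; // clean up the poisoned prototype
```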
Nathan
On Mon, Feb 18, 2013 at 2:27 PM, Nathan Wall <nathan.wall at live.com> wrote:
Claus Reinke wrote:
Careful there, you're not done!-) With nodejs, adding the following
var table = makeTable(); table.add(1); table.add(2); table.add(3);
var secret; Object.defineProperty(Array.prototype,42,{get:function(){ secret = this;}});
table.get(42); console.log(secret); secret[5] = "me, too!";
console.log( table.get(5) );
to your code prints
$ node integrity.js [ 1, 2, 3 ] me, too!
Couldn't resist, Claus
Nice! This is not something I had considered. Aside from freezing Array.prototype, I can only really think of one solution: not use an array.
var create = Object.create,
    freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);
function makeTable() {
    var array = create(null);
    return freeze({
        add: function(v) { push(array, v); },
        store: function(i, v) { array[i >>> 0] = v; },
In all seriousness, I suggest array[+i] = v; rather than array[i >>> 0] = v;
because the latter is too verbose to become a habit. I recommend this even though I agree that the more verbose one has the better semantics.
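A small comparison (editor's sketch) of the two coercions Mark contrasts: i >>> 0 always yields a uint32, hence always an array index, while +i merely coerces to a number, so negative or fractional keys land on ordinary non-index properties.

```javascript
// uint32 coercion vs. plain numeric coercion
console.log(-1 >>> 0);   // 4294967295: wraps to a (huge) valid index
console.log(+(-1));      // -1: a non-index property key
console.log("2" >>> 0);  // 2
console.log(+"2");       // 2
console.log(1.5 >>> 0);  // 1: fraction truncated
console.log(+1.5);       // 1.5: a non-index property key
```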
get: function(i) { return array[i >>> 0]; } }); }
I suppose the array isn't really needed since we're not using methods inherited from Array.prototype. Downside: Browsers won't know to optimize the object as an array.
(Side note: Mark's original post was in the context of frozen Array.prototype on non-compliant implementations which allow writing to inherited non-writable properties. I find this a fun exercise without frozen Array.prototype, though.)
Note that Jorge's attack at esdiscuss/2011-November/017979.html works even on compliant browsers.
The main use case (correct me if I'm wrong) for freezing/sealing an object is sharing an object to untrusted parties while preserving the object's integrity. There is also the tamper-proofing of objects everyone has access to (Object.prototype in the browser).

In a world with proxies, it's easy to build new objects with high integrity without Object.freeze: build your object, share only a wrapped version to untrusted parties, and the handler takes care of the integrity.
function thrower() { throw new Error('nope'); }
var frozenHandler = {
    set: thrower,
    defineProperty: thrower,
    delete: thrower
};
function makeFrozen(o) {
    return new Proxy(o, frozenHandler);
}
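A usage sketch of David's makeFrozen wrapper (an editor's illustration, with the definitions repeated so it is self-contained). One caveat: in the final ES6 design the deletion trap is named "deleteProperty", not "delete", so that key is used here; the "set" trap works as David wrote it.

```javascript
function thrower() { throw new Error('nope'); }
var frozenHandler = {
    set: thrower,            // blocks assignments
    defineProperty: thrower, // blocks Object.defineProperty
    deleteProperty: thrower  // "delete" in the 2013 draft; renamed in ES6
};
function makeFrozen(o) {
    return new Proxy(o, frozenHandler);
}

var p = makeFrozen({ x: 1 });
console.log(p.x);   // 1: absent traps (like get) forward to the target
try {
    p.x = 2;        // hits the set trap
} catch (e) {
    console.log(e.message);   // "nope"
}
```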
This is true to the point that I wonder why anyone would call Object.freeze on script-created objects any longer... By design and for good reasons, proxies are a subset of "script-created objects", so my previous sentence contained: "I wonder why anyone would call Object.freeze on proxies..."

There were concerns about Object.freeze/Object.seal being costly on proxies if defined as preventExtensions + enumerate + nbProps*defineProperty. Assuming Object.freeze becomes de-facto deprecated in favor of proxy-wrapping for high-integrity use cases, maybe that cost is not that big of a deal.