David Bruant (2013-02-13T12:39:20.000Z)
Warning: in this post, I'll be diverging a bit from the main topic.

> Loss of identity, extra allocations, and forwarding overhead remain problems.

I'm doubtful loss of identity matters often enough to be a valid argument here. I'd be interested in being proved wrong, though.

I understand the point about extra allocation; I'll talk about that below.

The forwarding overhead can be made nonexistent in the very case I've described: the traps you care about are absent from the handler, so engines are free to optimize `[[Get]]` & friends as operations applied directly to the target. A write barrier on the handler can deoptimize, but in most practical cases the deoptimization won't happen, because in most practical cases handlers don't change.

> It seems to me that you are focusing too much on "share ... to untrusted parties."

Your very own [recent words](https://mail.mozilla.org/pipermail/es-discuss/2013-February/028724.html):

> In a programming-in-the-large setting, a writable data property is inviting Murphy's Law. I'm not talking about security in a mixed-trust environment specifically. Large programs become "mixed trust", even when it's just me, myself, and I (over time) hacking the large amount of code.

...which I agree with (obviously?). And "Be a better language for writing complex applications" is [among the first goals](http://wiki.ecmascript.org/doku.php?id=harmony:harmony#goals) of Harmony.

Maybe I should use another word than "untrusted parties". What I mean is "any code that manipulates something without necessarily caring to learn what that something expects as preconditions and what its own invariants are". This includes security issues of course, but also buggy code (and bugs, in big applications, are often caused by a mismatch between a precondition/expectation and how something is actually used).
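To make that concrete, here is a minimal sketch (mine, not from the earlier thread) of a "protecting" proxy whose handler defines only the integrity-enforcing traps; `get`, `has`, etc. are absent, so those operations fall through to the target and an engine is free to apply them to the target directly:

```js
// Sketch: a proxy that rejects mutation but leaves reads untrapped.
// Only `set` and `deleteProperty` appear in the handler; [[Get]] and
// friends forward straight to the target.
const target = { count: 0 };

const handler = {
  set(t, key, value) {
    throw new TypeError(`Cannot set property "${String(key)}"`);
  },
  deleteProperty(t, key) {
    throw new TypeError(`Cannot delete property "${String(key)}"`);
  }
};

const view = new Proxy(target, handler);

console.log(view.count); // reads forward to the target: 0
try {
  view.count = 1;        // writes are trapped and rejected
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

As long as the handler object itself never changes, nothing forces the engine to keep paying a forwarding cost for the untrapped operations.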
I've seen this in a previous experience on a Chrome extension, where someone would seal an object as a form of documentation, to express "I need these properties to stay in the object". It looked like:

```js
function C(){
  // play with |this|
  return Object.seal(this);
}
```

My point here is that people do want to protect their objects' integrity against "untrusted parties", which in that case was just "people who'll contribute to this code in the future". Anecdotally, the person removed the `Object.seal` call before the return for performance reasons, based on [a JSPerf test](http://jsperf.com/object-seal-freeze/). Interestingly, a [JSPerf test with a proxy-based solution](http://jsperf.com/object-seal-freeze/2) might have convinced them to use proxies instead of `Object.seal`.

But that's a JSPerf test, and it doesn't really measure the GC overhead of the extra objects. Are there data on this? Are there methodologies to measure this overhead? I understand the overhead in principle, but I find myself unable to pull up numbers on this topic, or convincing arguments that JSPerf only measures one part of the perf story and that its nice conclusion graph should be taken with a pinch of salt.

> It's true you want either a membrane or an already-frozen object in such a setting.

Not a membrane, just a proxy that protects its target. Objects linked from the proxy likely came from somewhere else; they're in charge of deciding their own "integrity policy".

> And outside of untrusted parties, frozen objects have their uses -- arguably more over time with safe parallelism in JS.

Arguably indeed; I would love to see this happen. Still, even if (deeply) frozen "POJSOs" could one day be shared among contexts, I think we can agree that this wouldn't apply to frozen proxies for a long time (ever?).

I went a bit too far in suggesting frozen objects could de facto disappear with proxies. I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen traps.
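For what it's worth, the proxy-based alternative to `Object.seal` alluded to above could look like the following sketch (the `sealedView` name and details are mine): the property set is kept fixed by trapping `defineProperty` and `deleteProperty`, while existing properties stay readable and writable.

```js
// Sketch: keep the set of properties fixed without sealing the target.
// Adding or deleting a property throws; updating an existing one works.
function sealedView(target) {
  return new Proxy(target, {
    defineProperty(t, key, desc) {
      if (!Reflect.has(t, key)) {
        throw new TypeError(`Cannot add property "${String(key)}"`);
      }
      return Reflect.defineProperty(t, key, desc);
    },
    deleteProperty(t, key) {
      throw new TypeError(`Cannot delete property "${String(key)}"`);
    }
  });
}

function C() {
  this.x = 1; // play with |this|
  return sealedView(this);
}

const c = new C();
c.x = 2;            // updating an existing property still works
console.log(c.x);   // 2
try {
  c.y = 3;          // adding a new property is rejected
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

(The assignment `c.y = 3` ends up in the `defineProperty` trap because there is no `set` trap: the ordinary `[[Set]]` on the target calls `[[DefineOwnProperty]]` on the receiver, which is the proxy.)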