David Bruant (2013-02-14T18:16:13.000Z)
> A compiler has to work much, much harder to get useful results. Don't expect anything anytime soon.

```js
var handler = { set: function () { throw new TypeError(); } };
var p = new Proxy({a: 32}, handler);
p.a; // no get trap on the handler, so [[Get]] is forwarded to the target: 32
```

It's possible *at runtime* to notice that the handler of `p` doesn't have a `get` trap, optimize `p.[[Get]]` as `target.[[Get]]`, and guard this optimization on handler modifications (see the sketch at the end of this message). Obviously, do that only if the code is hot. I feel it's not much more work than what JS engines do currently, and the useful result is effectively getting rid of the forwarding overhead. Is this vastly over-optimistic?

> Take all these JSPerf micro benchmark games with two grains of salt; ...

That's exactly what I said right after :-/
"But that's a JSPerf test and it doesn't really measure the GC overhead of extra objects."
"JSPerf only measures one part of the perf story and its nice conclusion graph should be taken with a pinch of salt."

> lots of them focus on premature optimisation.

I'm quite aware. I fear the [Sphinx](https://twitter.com/ubench_sphinx).

I wrote "might have convinced to do proxies instead of Object.seal". I didn't say I agreed, and I actually don't.

> Also, seal and freeze are far more likely to see decent treatment than proxies.

Why so?

> All programmers messing with home-brewed proxies on a daily basis is a very scary vision, if you ask me.

Hmm... maybe.
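For concreteness, here is a rough sketch of the guarded fast path in plain JavaScript. `cachedGet` is purely illustrative; it is not how an engine represents its inline caches, just a model of the check-and-forward idea:

```js
// Illustrative sketch only: models the guarded fast path an engine could
// use for p.[[Get]]. None of these names are real engine internals.
var target = {a: 32};
var handler = { set: function () { throw new TypeError(); } };
var p = new Proxy(target, handler);

function cachedGet(proxyTarget, proxyHandler, key) {
  // Guard: as long as the handler has no "get" trap, forwarding is just a
  // plain property read on the target. A real engine would invalidate this
  // fast path whenever proxyHandler is modified.
  if (proxyHandler.get === undefined) {
    return proxyTarget[key]; // fast path: target.[[Get]]
  }
  // Slow path: the trap exists, so call it with (target, property, receiver).
  return proxyHandler.get(proxyTarget, key, p);
}

cachedGet(target, handler, 'a'); // 32
```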