How to clean up __proto__ (was: Why we need to clean up __proto__)

# Mark S. Miller (13 years ago)

[Oops. Actually changing the Subject: this time, as this reply starts the promised thread.]

Hi Lasse, good. That is the direction Dave Herman suggested and which I also find attractive. I have elaborated it at <strawman:magic_proto_property>, which I've also placed on the agenda for the January EcmaScript meeting.

The most surprising punchline is in requirement #4:

Object.create(null), creating an object that does not inherit from Object.prototype, also creates an object that does not inherit __proto__, even if that property has not been deleted. With this change, objects-as-stringmaps created by Object.create(null) would avoid the __proto__ hazard, even in contexts where Object.prototype.__proto__ has not been deleted. (FF already acts this way, so my previous message was wrong in claiming that Object.create(null) fails to avoid this hazard on all non-IE browsers.)
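For illustration, a minimal sketch of the objects-as-stringmaps use under requirement #4 (the keys here are arbitrary examples):

    // A prototype-less object used as a string map.
    var map = Object.create(null);

    // Under requirement #4, a key named "__proto__" is just data, because the
    // object does not inherit the magic accessor from Object.prototype.
    map["__proto__"] = "just a value";
    map["foo"] = 42;

    console.log(map["__proto__"]);           // "just a value"
    console.log(Object.getPrototypeOf(map)); // null, unchanged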

Comments on this proposal appreciated.

# Axel Rauschmayer (13 years ago)

Smart. These would be like the dict proposal [1].

It might make sense to encapsulate this, e.g. as a constructor StringMap. Rationale: it allows you to use a shim on older systems, and you have to change existing code anyway.

In the shim, one could avoid creating excess garbage by only prefixing keys ending with "__proto__". For "__proto__" itself, one could cache the prefixed key; for other keys ending with that suffix, one would prefix them only to make absolutely sure that keys never clash.

[1] strawman:dicts
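A rough sketch of the shim described above, assuming the suffix-prefixing scheme (the StringMap API and the "%" escape character are illustrative, not part of any proposal):

    // Only keys ending in "__proto__" are prefixed, so most keys create no
    // extra garbage. "%" is an arbitrary escape character.
    function StringMap() {
      this._store = {};
    }
    StringMap.prototype.set = function (key, value) {
      this._store[escapeKey(key)] = value;
      return this;
    };
    StringMap.prototype.get = function (key) {
      var k = escapeKey(key);
      return Object.prototype.hasOwnProperty.call(this._store, k) ? this._store[k] : undefined;
    };
    StringMap.prototype.has = function (key) {
      return Object.prototype.hasOwnProperty.call(this._store, escapeKey(key));
    };

    // Cache the prefixed form of "__proto__" itself, as suggested above.
    var ESCAPED_PROTO = "%__proto__";
    function escapeKey(key) {
      if (key === "__proto__") return ESCAPED_PROTO;
      // Any other key ending in "__proto__" is also prefixed, so escaped and
      // unescaped keys can never clash.
      if (/__proto__$/.test(key)) return "%" + key;
      return key;
    }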

# Lasse Reichstein (13 years ago)

There is one side-effect to defining __proto__ using a getter/setter property. You can extract the setter and store it for later, allowing you to change the prototype of objects after someone else deleted the __proto__ property.

That means that if you're not the first script to run on a page, you can't know for sure that you can remove the setting-of-__proto__ ability. But then again, if you're not the first script to run, you can't even know that you can remove it, or trust anything ever again, so it's not really a (new) problem, more of an observation.

I.e., by extracting the setter, you can create a setPrototypeOf(object, newProto) function:

    var setPrototypeOf = Function.prototype.call.bind(
        Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set);
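A quick illustration of the consequence, assuming an engine that exposes __proto__ as a configurable accessor on Object.prototype:

    // Grab the setter before anyone deletes the property.
    var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set;
    var setPrototypeOf = Function.prototype.call.bind(protoSetter);

    // Someone later "cleans up" __proto__ ...
    delete Object.prototype.__proto__;

    // ... but the saved setter still mutates [[Prototype]].
    var obj = {};
    setPrototypeOf(obj, Array.prototype);
    console.log(obj instanceof Array); // true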

# David Bruant (13 years ago)

On 29/12/2011 12:38, Lasse Reichstein wrote:

There is one side-effect to defining __proto__ using a getter/setter property. You can extract the setter and store it for later, allowing you to change the prototype of objects after someone else deleted the __proto__ property.

That means that if you're not the first script to run on a page, you can't know for sure that you can remove the setting-of-__proto__ ability. But then again, if you're not the first script to run, you can't even know that you can remove it, or trust anything ever again, so it's not really a (new) problem, more of an observation.

I don't have a formal proof of it, but it seems that the security of a webpage depends on who runs first. Basically, the first-runner is free to alter the environment as desired (in defensive or offensive ways).

# Allen Wirfs-Brock (13 years ago)

On Dec 29, 2011, at 3:38 AM, Lasse Reichstein wrote:

There is one side-effect to defining __proto__ using a getter/setter property. You can extract the setter and store it for later, allowing you to change the prototype of objects after someone else deleted the __proto__ property.

Not if the built-in setter function for __proto__ is defined similarly to:

    Object.defineProperty(Object.prototype, '__proto__', {
        set: function __proto__(value) {
            if (Object.getPropertyDescriptor(this, '__proto__').set !== __proto__)
                throw new TypeError('invalid use of __proto__');
            ... // do the work of validating and setting [[Prototype]] of this
        }
    });

(and assuming that Object.getPropertyDescriptor is defined to access inherited properties)
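Object.getPropertyDescriptor does not exist in ES5; a rough shim matching that assumption (returning the nearest descriptor found on the prototype chain) could look like:

    // Hypothetical helper: like Object.getOwnPropertyDescriptor, but it also
    // searches inherited properties up the prototype chain.
    if (!Object.getPropertyDescriptor) {
      Object.getPropertyDescriptor = function (obj, name) {
        var proto = Object(obj);
        while (proto !== null) {
          var desc = Object.getOwnPropertyDescriptor(proto, name);
          if (desc) return desc;
          proto = Object.getPrototypeOf(proto);
        }
        return undefined;
      };
    }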

# Mark S. Miller (13 years ago)

Hi Allen, that's very clever. But I don't think it is needed.

David is right about running first. I also don't have a proof, but practically I'm sure that for JS as it is and will continue to be under ES-next, unless some trusted code runs in a context (frame) to initialize itself before any untrusted code runs in that context, all is lost anyway.

So if there's not really anything to be gained by this more complex normative-optional behavior, I'd rather avoid the extra complexity.

# Lasse Reichstein (13 years ago)

On Thu, Dec 29, 2011 at 8:41 PM, Mark S. Miller <erights at google.com> wrote:

Hi Allen, that's very clever. But I don't think it is needed.

David is right about running first. I also don't have a proof, but practically I'm sure that for JS as it is and will continue to be under ES-next, unless some trusted code runs in a context (frame) to initialize itself before any untrusted code runs in that context, all is lost anyway.

Pretty certainly. Malicious code can save and poison all of the methods on Object and Object.prototype, so it shouldn't be hard to ensure that Object.prototype.__proto__ is never made unconfigurable, so you can always put the setter back if you need it and remove it again afterwards. The only real question is whether it's detectable. I'm not convinced either way (there are a lot of subtle tricks and subtle counter-tricks possible, and I've been writing this paragraph for a while and kept coming up with counters to my previous examples in both directions :).
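A hedged sketch of the save-and-restore trick described here, assuming Object.prototype.__proto__ stays configurable:

    // Code running first saves the accessor and a pristine defineProperty ...
    var savedDesc = Object.getOwnPropertyDescriptor(Object.prototype, "__proto__");
    var defineProperty = Object.defineProperty;

    // ... so that even after some later "defensive" code runs
    //     delete Object.prototype.__proto__;
    // the setter can be put back, used, and removed again.
    function withProtoSetter(fn) {
      defineProperty(Object.prototype, "__proto__", savedDesc);
      try {
        return fn();
      } finally {
        delete Object.prototype.__proto__;
      }
    }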

# David Bruant (13 years ago)

On 30/12/2011 01:00, Lasse Reichstein wrote:

On Thu, Dec 29, 2011 at 8:41 PM, Mark S. Miller <erights at google.com> wrote:

Hi Allen, that's very clever. But I don't think it is needed.

David is right about running first. I also don't have a proof, but practically I'm sure that for JS as it is and will continue to be under ES-next, unless some trusted code runs in a context (frame) to initialize itself before any untrusted code runs in that context, all is lost anyway.

Pretty certainly. Malicious code can save and poison all of the methods on Object and Object.prototype, so it shouldn't be hard to ensure that Object.prototype.__proto__ is never made unconfigurable, so you can always put the setter back if you need it and remove it again afterwards. The only real question is whether it's detectable. I'm not convinced either way (there are a lot of subtle tricks and subtle counter-tricks possible, and I've been writing this paragraph for a while and kept coming up with counters to my previous examples in both directions :).

I think that once again, it all boils down to whether you run first. If you run first, you can save a reference to anything in the environment (getters/setters included). If you have a doubt about a reference, you can check equality with yours.

If you do not run first, the attacker can make the environment look like a normal one. Specifically, you can try to do Object.defineProperty(Object.prototype, '__proto__', {configurable: false}) and the attacker can later pretend that the property is not configurable (in response to an Object.getOwnPropertyDescriptor call) even though it actually still is (and she can still change the value at her convenience).
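As a sketch of the kind of lie described here (attacker code running first wraps the reflection API before the victim's defense runs; the wrappers are of course detectable in other ways, which is exactly the cat-and-mouse game discussed below):

    // The attacker, running first, keeps the real functions for itself ...
    var realGOPD = Object.getOwnPropertyDescriptor;
    var realDefine = Object.defineProperty;

    // ... makes defineProperty silently ignore attempts to lock __proto__ down ...
    Object.defineProperty = function (obj, name, desc) {
      if (obj === Object.prototype && name === "__proto__") {
        return obj; // pretend it worked
      }
      return realDefine(obj, name, desc);
    };

    // ... and makes getOwnPropertyDescriptor report the property as locked down.
    Object.getOwnPropertyDescriptor = function (obj, name) {
      var desc = realGOPD(obj, name);
      if (obj === Object.prototype && name === "__proto__" && desc) {
        desc.configurable = false;
      }
      return desc;
    };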

When asking the question "can I detect that I am running first?", I would answer that it's not always possible. If an attacker changes the JavaScript environment in a way that is not observable to you (functionally speaking), then you cannot know that you are running second.

To take a specific example, regarding cookie theft: if you run first in an ES5 + WebIDL compliant environment (regarding WebIDL, I think IE9 is the closest implementation and the rest is far from it), you can take the getter/setter pair of the cookie property from somewhere in the prototype chain of the document object. Then, you can delete the property so that no code can either access or modify the cookie unless you hand it the getter or the setter. On the other hand, if an attacker runs first, she can replace the getter/setter pair with hers (and steal your cookie, by the way). When your code runs (second), it cannot have a clue that someone ran first, because the environment looks normal. When you use your cookie setter function (which is the attacker's function, but you can't know that), the attacker's code gets alerted and steals this new cookie as well. Your code cannot know this happened, since it cannot know how the environment was before it ran.

So, ES5 + WebIDL allow you to prevent cookie theft... assuming you run first. If you don't, an attacker can steal your cookies anyway, discreetly enough that you don't know it.
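A rough sketch of the defense described above, assuming a WebIDL-style environment where document.cookie is a configurable accessor somewhere on document's prototype chain:

    // Find the object that actually holds the cookie accessor.
    var holder = Object.getPrototypeOf(document);
    while (holder && !Object.getOwnPropertyDescriptor(holder, "cookie")) {
      holder = Object.getPrototypeOf(holder);
    }

    // Capture the getter/setter pair, bound to the document ...
    var desc = Object.getOwnPropertyDescriptor(holder, "cookie");
    var getCookie = desc.get.bind(document);
    var setCookie = desc.set.bind(document);

    // ... then delete the property, so later code can neither read nor write
    // cookies unless it is handed getCookie/setCookie.
    delete holder.cookie;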

The good side of not being able to tell that the environment has changed is that we can implement polyfill libraries. When code runs, it doesn't know, nor need to care, whether the features it uses are built-in or come from a library, as long as they respect the expected specification.
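This is exactly what makes the familiar polyfill pattern work, e.g.:

    // Callers cannot tell (and need not care) whether they get the built-in
    // Array.prototype.map or this fallback, as long as it behaves as specified.
    if (!Array.prototype.map) {
      Array.prototype.map = function (fn, thisArg) {
        var result = new Array(this.length);
        for (var i = 0; i < this.length; i++) {
          if (i in this) result[i] = fn.call(thisArg, this[i], i, this);
        }
        return result;
      };
    }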

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

# John J Barton (13 years ago)

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

jjb

# Mark S. Miller (13 years ago)

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote: [...]

If you do not run first, the attacker can make the environment look like a normal one. Specifically, you can try to do Object.defineProperty(Object.prototype, '__proto__', {configurable: false}) and the attacker can later pretend that the property is not configurable (in response to an Object.getOwnPropertyDescriptor call) even though it actually still is (and she can still change the value at her convenience).

I just want to point out that SES initialization has been doing this kind of virtualization for a long time, and depending on being able to do it transparently enough. The most extreme example is <code.google.com/p/es-lab/source/browse/trunk/src/ses/WeakMap.js>, where we emulate WeakMaps with surprising efficiency on platforms that don't provide them as built-ins.

The technique relies on the unguessability and undiscoverability of a randomly chosen property name. We virtualize freeze, seal, and preventExtensions to add this property before we lose our ability to do so. We virtualize Object.getOwnPropertyNames so that it doesn't report this property, and we make the property non-enumerable so that it can't be discovered with for-in, which we cannot virtualize without parsing.
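The linked file is the real implementation; as a heavily simplified sketch of the hidden-property idea only (not the actual SES code, and without the freeze/seal/getOwnPropertyNames virtualization it relies on):

    // A property name meant to be unguessable; the real code takes more care
    // over randomness and concealment than Math.random() does here.
    var HIDDEN = "weakmap_" + Math.random().toString(36).slice(2);

    function EmulatedWeakMap() {
      var token = {}; // identifies entries belonging to this particular map
      this.set = function (key, value) {
        if (!Object.prototype.hasOwnProperty.call(key, HIDDEN)) {
          // Non-enumerable, so for-in never reveals it.
          Object.defineProperty(key, HIDDEN, { value: [], enumerable: false });
        }
        key[HIDDEN].push([token, value]); // the value lives on the key itself
      };
      this.get = function (key) {
        if (!Object.prototype.hasOwnProperty.call(key, HIDDEN)) return undefined;
        var entries = key[HIDDEN];
        for (var i = 0; i < entries.length; i++) {
          if (entries[i][0] === token) return entries[i][1];
        }
        return undefined;
      };
    }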

# David Bruant (13 years ago)

On 30/12/2011 02:28, John J Barton wrote:

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

I was thinking of the case of XSS, for instance, where your code is in competition with unexpected and malicious code. What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

# Russell Leggett (13 years ago)

On Fri, Dec 30, 2011 at 6:53 AM, David Bruant <bruant.d at gmail.com> wrote:

On 30/12/2011 02:28, John J Barton wrote:

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

I was thinking of the case of XSS, for instance, where your code is in competition with unexpected and malicious code. What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

I must be missing something here, but if you put your defensive script first in the head of the HTML, how can XSS code run first? Most XSS attacks are based on user data being unescaped and put into the body of the page somewhere. As long as server-side HTML generation never inserts unescaped code into the head before the defensive script, where is the vulnerability? I'm not saying I'm right, I just don't see it.

# John J Barton (13 years ago)

On Fri, Dec 30, 2011 at 3:53 AM, David Bruant <bruant.d at gmail.com> wrote:

On 30/12/2011 02:28, John J Barton wrote:

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

I was thinking of the case of XSS, for instance, where your code is in competition with unexpected and malicious code.

How did this competition begin?

A use case for JS security environment is loading app components cross-site. The other site is 'trusted' in that you believe it has code helpful to the user. But you want to limit its control, for reliability rather than security in the normal sense. Is this what you have in mind?

What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

What causes 'untrusted' code to run at all? You must be doing something to cause the browser to load code outside of normal methods. That 'cause' is running first.

To put this another way, whatever runs first is the trusted code.

jjb

# gaz Heyes (13 years ago)

On 30 December 2011 17:05, John J Barton <johnjbarton at johnjbarton.com> wrote:

On Fri, Dec 30, 2011 at 3:53 AM, David Bruant <bruant.d at gmail.com> wrote: [...]

How did this competition begin?

A use case for JS security environment is loading app components cross-site. The other site is 'trusted' in that you believe it has code helpful to the user. But you want to limit its control, for reliability rather than security in the normal sense. Is this what you have in mind?

What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

What causes 'untrusted' code to run at all? You must be doing something to cause the browser to load code outside of normal methods. That 'cause' is running first.

To put this another way, whatever runs first is the trusted code.

I believe Mario Heiderich and I have solved this problem with JSLR: www.businessinfo.co.uk/labs/jslr/jslr.php

As long as the JavaScript is executed first, you can protect the page from untrusted JavaScript execution by whitelisting allowed scripts using randomized tokens, even when the site is vulnerable to type 1 XSS and DOM-based XSS, and even within href attributes.

I have also developed a whitelisted JavaScript environment that can be extended by simulating real DOM objects, making it difficult to discover you are in a protected environment. www.businessinfo.co.uk/labs/jsreg/jsreg.html

It might be of interest to the people on this list to check out Mario Heiderich's slides, as he discusses how to use ES5 methods to create a safe JS environment. www.owasp.org/images/a/a3/Mario_Heiderich_OWASP_Sweden_Locking_the_throneroom.pdf

# David Bruant (13 years ago)

On 30/12/2011 17:07, Russell Leggett wrote:

On Fri, Dec 30, 2011 at 6:53 AM, David Bruant <bruant.d at gmail.com> wrote:

On 30/12/2011 02:28, John J Barton wrote:

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

I was thinking of the case of XSS, for instance, where your code is in competition with unexpected and malicious code. What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

I must be missing something here, but if you put your defensive script first in the head of the HTML, how can XSS code run first? Most XSS attacks are based on user data being unescaped and put into the body of the page somewhere.

In some cases, the unescaped data is put in the <title> or in a <meta> element (as keywords or description), so before the body.

I think there is no single solution. It's up to the web developer to know what the dangers are and which points are dangerous, and maybe write a defensive script before them, possibly inlined if it's short enough.
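For example, an inlined defensive script at the very top of <head>, under the assumptions discussed earlier in this thread, might look roughly like:

    // Runs before any injected content further down the page.
    (function () {
      // Neutralize the __proto__ setter (assumes the property is configurable).
      delete Object.prototype.__proto__;

      // Capture and hide the cookie accessors, as in the earlier sketch
      // (assumes a WebIDL-style configurable accessor on the prototype chain).
      var holder = Object.getPrototypeOf(document);
      while (holder && !Object.getOwnPropertyDescriptor(holder, "cookie")) {
        holder = Object.getPrototypeOf(holder);
      }
      if (holder) {
        var desc = Object.getOwnPropertyDescriptor(holder, "cookie");
        var getCookie = desc.get.bind(document);
        var setCookie = desc.set.bind(document);
        delete holder.cookie;
        // getCookie/setCookie must be handed only to trusted code; exposing
        // them globally would defeat the point.
      }
    })();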

# David Bruant (13 years ago)

On 30/12/2011 18:05, John J Barton wrote:

On Fri, Dec 30, 2011 at 3:53 AM, David Bruant <bruant.d at gmail.com> wrote:

On 30/12/2011 02:28, John J Barton wrote:

On Thu, Dec 29, 2011 at 5:11 PM, David Bruant <bruant.d at gmail.com> wrote:

[...]

I've been thinking about this "run first" idea for some time. Since, on a webpage, security seems to depend on your ability to run code first, it would be interesting if there were a way to ensure that some code (preferably defensive) is run before any other code. Though I find this interesting, I'm still not sure whether this would be a good or bad idea. I'm also clueless as to what it would look like. Creative ideas welcome.

The browser runs first: what can't it do that you want to support?

I was thinking of the case of XSS, for instance, where your code is in competition with unexpected and malicious code.

How did this competition begin?

"Competition" was an expression :-)

A use case for JS security environment is loading app components cross-site. The other site is 'trusted' in that you believe it has code helpful to the user. But you want to limit its control, for reliability rather than security in the normal sense. Is this what you have in mind?

Why not for security? I recently watched a talk by Marc Stiegler: www.youtube.com/watch?v=vrbmMPlCp3U In this talk, he suggests that we can cooperate better if we have security in mind, since it allows safe cooperation even with untrusted parties, even among mutually suspicious parties. It's an interesting idea, considering that the usual answer to "I don't trust this party" is often "I won't cooperate with this party at all". The Same Origin Policy is an instance of this, and I've heard that some people are frustrated by the fact that one domain is unable to interact with another: louisremi.com/2011/12/06/cors-an-insufficient-solution-for-same-origin-restrictions

What I've said before applies: even against an XSS attack, you can prevent cookie theft as long as you run first. I can't see a way for the browser to enforce that trusted code runs before untrusted code.

What causes 'untrusted' code to run at all?

Loading code from another domain and being a victim of DNS spoofing, for instance. I don't have a link right now, but I think what put WebSockets on hold for some time was that a researcher showed he was able to play with the version of the protocol at the time. With his hack, he was doing DNS spoofing, and basically, when you thought you were loading the jQuery script from the Google CDN, you were actually running the attacker's code. That's untrusted code, which ran in your browser.

The protocol and implementations have been fixed accordingly, I think, but who knows in what other circumstances a malicious script can run on your page.