B.3.1 The __proto__ pseudo property
+1 to everything, but I would drop the literal too instead of promoting two ways to do things.
Off topic: I also hope __proto__ will be spec'd with a descriptor that exposes the setter, as it is now in Firefox, and not only the getter, which would be a conceptual language nonsense/restriction.
On 21/04/2013 01:37, Axel Rauschmayer wrote:
__proto__ can be globally switched off by deleting Object.prototype.__proto__. I'm assuming that that is useful for security-related applications (Caja et al.). But I'm wondering: doesn't that go too far? I'm seeing three ways of using __proto__:
- Read the [[Prototype]] of an object. Already possible via Object.getPrototypeOf().
- Set the [[Prototype]] of a fresh object created via an object literal (i.e., an alternative to the rejected <| operator). Already (kind of) possible via Object.create().
Also possible with class syntax and the "extends" keyword (with all the @@create internal semantics).
- Mutate the [[Prototype]] of an existing object.
Globally, I would only want to switch off #3.
You can re-enable #1 by re-adding Object.prototype.__proto__ as your own getter wrapping Object.getPrototypeOf. Or maybe instead of "delete Object.prototype.__proto__", just do: Object.defineProperty(Object.prototype, "__proto__", {set: undefined});
#2 is possible with Object.create and class syntax "extends". Are there use cases for #2 where both Object.create and "extends" would be inappropriate?
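A minimal sketch of the re-enabling approach described above (an illustration only; the property attributes are assumed):

delete Object.prototype.__proto__;
Object.defineProperty(Object.prototype, '__proto__', {
  configurable: true,
  enumerable: false,
  // #1 still works: reading obj.__proto__ delegates to the reflection API
  get: function () { return Object.getPrototypeOf(this); },
  // #3 is disabled: with no setter, assignments silently fail
  // (or throw in strict mode)
  set: undefined
});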
On Apr 20, 2013, at 4:37 PM, Axel Rauschmayer wrote:
__proto__ can be globally switched off by deleting Object.prototype.__proto__. I’m assuming that that is useful for security-related applications (Caja et al.). But I’m wondering: doesn’t that go too far? I’m seeing three ways of using __proto__:
- Read the [[Prototype]] of an object. Already possible via Object.getPrototypeOf().
- Set the [[Prototype]] of a fresh object created via an object literal (i.e., an alternative to the rejected <| operator). Already (kind of) possible via Object.create().
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}. Use of __proto__ in an object literal is a distinct, syntax-linked feature because the semantics of {key: value} is normally [[DefineOwnProperty]] rather than [[Put]]. There is also no particular reason to want to disable that usage. It is ugly but is no more insecure than any other way of creating a new object with an explicitly provided prototype.
Also note that JSON.parse('{"__proto__": null}') does not create an object whose [[Prototype]] is null, because JSON.parse uses [[DefineOwnProperty]] to create all its properties, so this will just result in an own property whose value is null.
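A quick illustration of the distinction Allen draws (per the spec; as noted later in this thread, V8 of that era diverged on the JSON.parse case):

var fromJSON = JSON.parse('{"__proto__": null}');
Object.getPrototypeOf(fromJSON) === Object.prototype;    // true: an ordinary object
Object.getOwnPropertyDescriptor(fromJSON, '__proto__');  // {value: null, ...}: just an own data property

var fromLiteral = { __proto__: null };
Object.getPrototypeOf(fromLiteral);                      // null: the literal syntax sets [[Prototype]]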
- Mutate the [[Prototype]] of an existing object.
Globally, I would only want to switch off #3. Rationale: it is the only security-critical operation of the three(?). The use case for performing this operation mostly goes away with ES6 allowing us to subtype built-ins. Could #3 be forbidden in strict mode?
Not as the MOP is currently structured. [[Set]] does not currently have a parameter that tells the target object whether or not a property assignment originated from strict mode code.
#1 and #2 should not be possible if an object does not have Object.prototype in its prototype chain. Rationale: objects as dictionaries via Object.create(null) or { __proto__: null }
yes for #1, no for #2
On Apr 21, 2013, at 5:22 AM, David Bruant wrote:
On 21/04/2013 01:37, Axel Rauschmayer wrote:
Globally, I would only want to switch off #3.
You can re-enable #1 by re-adding Object.prototype.__proto__ as your own getter wrapping Object.getPrototypeOf. Or maybe instead of delete Object.prototype.__proto__, just do:
Object.defineProperty(Object.prototype, "__proto__", {set: undefined});
I still think that Dunder proto should not be exposed at all by Object.getOwnPropertyDescriptor (or any other reflection) and that there is no need to leak either a working or always throwing __proto__ setter function into the hands of an ES programmer.
My preferred spec for it is at meetings:rev_15_proto_.pdf
Note that the behavior that some people have expressed a preference for (Dunder proto is observably an accessor property but its set function when retrieved always throws) will also require an exotic Object prototype object to specify, so my proposal is not adding any spec. complexity.
As an exercise to the reader, it isn't hard to demonstrate that the specified approach could be expressed by using a Proxy to define Object.prototype (if the proxy handler had access to [[SetInheritance]]). Since proxy objects are allowed to occur on the [[Prototype]] chain, if an implementation has the mechanism to implement Proxy it will also have the mechanism necessary to implement this definition of Dunder proto.
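A rough sketch of that exercise, using today's Proxy and Reflect API with Object.getPrototypeOf/Object.setPrototypeOf standing in for the [[GetInheritance]]/[[SetInheritance]] internal methods (an illustration, not Allen's spec text):

var protoHandler = {
  get: function (target, name, receiver) {
    if (name === '__proto__') {
      // read the [[Prototype]] of the receiver, not of the proxy itself
      return Object.getPrototypeOf(receiver);
    }
    return Reflect.get(target, name, receiver);
  },
  set: function (target, name, value, receiver) {
    if (name === '__proto__') {
      // mutate the [[Prototype]] of the receiver
      Object.setPrototypeOf(receiver, value);
      return true;
    }
    return Reflect.set(target, name, value, receiver);
  }
};
var magicProto = new Proxy({}, protoHandler);
var obj = Object.create(magicProto);
obj.__proto__ = Array.prototype;                 // routed through the set trap
Object.getPrototypeOf(obj) === Array.prototype;  // true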
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 5:22 AM, David Bruant wrote:
Hi Axel,
On 21/04/2013 01:37, Axel Rauschmayer wrote:
__proto__ can be globally switched off by deleting Object.prototype.__proto__. I’m assuming that that is useful for security-related applications (Caja et al.). But I’m wondering: doesn’t that go too far? I’m seeing three ways of using __proto__:
- Read the [[Prototype]] of an object. Already possible via Object.getPrototypeOf().
- Set the [[Prototype]] of a fresh object created via an object literal (i.e., an alternative to the rejected <| operator). Already (kind of) possible via Object.create(). Also possible with class syntax and the "extends" keyword (with all the @@create internal semantics).
- Mutate the [[Prototype]] of an existing object.
Globally, I would only want to switch off #3. You can re-enable #1 by re-adding Object.prototype.__proto__ as your own getter wrapping Object.getPrototypeOf. Or maybe instead of "delete Object.prototype.__proto__", just do: Object.defineProperty(Object.prototype, "__proto__", {set: undefined});
I still think that Dunder proto should not be exposed at all by Object.getOwnPropertyDescriptor (or any other reflection) and that there is no need to leak either a working or always throwing __proto__ setter function into the hands of an ES programmer.
This isn't what we seemed to agree on at past TC39 meetings.
It's also not what the engines trying to converge on ES6 semantics have implemented.
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}.
Was that what we'd agreed to? I hadn't remembered that. I don't like it because it's special-case syntax, but I can also live with it since it's no more powerful than Object.create().
Do you know of a meeting minutes where we might have captured that decision?
You can hot-swap the chain, something Object.create() cannot do, so it is much more powerful, unless things changed that much lately ...
David Herman wrote:
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}.
Was that what we'd agreed to?
I think what Allen means is, whether or not there's a magic Object.prototype.__proto__, you can define (as in [[DefineOwnProperty]]) a plain old data property (or an accessor, for that matter, just different syntax) whose name is '__proto__' in an object literal.
This is specified by ES5, already.
On Apr 21, 2013, at 10:03 AM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
I still think that Dunder proto should not be exposed at all by Object.getOwnPropertyDescriptor (or any other reflection) and that there is no need to leak either a working or always throwing __proto__ setter function into the hands of an ES programmer.
This isn't what we seemed to agree on at past TC39 meetings.
It's also not what the engines trying to converge on ES6 semantics have implemented.
It's not clear to me yet what convergence we actually have.
Regardless, it's only observable via Object.getOwnPropertyDescriptor(Object.prototype, "__proto__"), which in my proposal returns undefined when Dunder proto is active and in other proposals returns a function that when evaluated throws something. The semantics of Dunder proto that have been discussed isn't just that of an accessor property and can't be purely implemented as such, so I see no value in trying to masquerade it as an accessor for getOwnPropertyDescriptor. Returning get/set functions that always throw is just adding complexity that delivers no value.
On Apr 21, 2013, at 11:03 AM, David Herman wrote:
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}.
Was that what we'd agreed to? I hadn't remembered that. I don't like it because it's special-case syntax, but I can also live with it since it's no more powerful than Object.create().
Do you know of a meeting minutes where we might have captured that decision?
Don't know, we have talked about it before.
It is special-case syntax that requires special-case semantics because object literals use [[DefineOwnProperty]] rather than [[Set]] as the semantics of {key: value}. So {__proto__: value} and {"__proto__": value} really are special syntax.
Making it conditional on the status of Object.prototype.__proto__ doesn't make it less special; it just makes the semantics even more complex. It also makes the semantics of an object literal dependent upon what is essentially a remote global switch. So, nobody can reliably use that form of object creation unless they control every bit of code that runs in their application.
What possible benefit would there be in tying the runtime behavior of this syntax to the existence of Dunder proto?
Agreed, but if you treat __proto__ as a different kind of descriptor you also add extra complexity.
Why not make it like any other Object.prototype property, exposing the getter and setter, since that is what they are? You only have to add a description of these two, and everything else will be consistent with all other descriptors.
You would also make it possible, eventually, to have a __proto__-free environment where only the "main" program could hold and use "for good" that descriptor.set magic behavior.
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 10:03 AM, Brendan Eich wrote:
Allen Wirfs-Brock wrote: This isn't what we seemed to agree on at past TC39 meetings.
It's also not what the engines trying to converge on ES6 semantics have implemented.
It's not clear to me, yet what convergence we actually have.
Read the meeting notes to remind yourself: esdiscuss/2013-February/028631
Look for "BE: (Review of latest changes to __proto__ in Firefox)" and read on.
Regardless, it's only observable via Object.getOwnPropertyDescriptor(Object.prototype, "__proto__") which in my proposal returns undefined when Dunder proto is active and in other proposals returns a function that when evaluated throws something.
No. From those notes:
EA: Throws if called with object and setter coming from different realms
The semantics of Dunder proto that have been discussed isn't just that of an accessor property and can't be purely implemented as such, so I see no value in trying to masquerade it as an accessor for getOwnPropertyDescriptor. Returning get/set functions that always throw is just adding complexity that delivers no value.
The proposal we've discussed does no such thing. What's implemented in JSC, SpiderMonkey, V8 does no such thing.
js> var d = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__')
js> d
({configurable:true, enumerable:false, get:function () { [native code] }, set:function () { [native code] }})
js> var o = {}
js> d.get.call(o)
({})
js> d.get.call(o) === Object.prototype
true
js> d.set.call(o, null)
js> d.get.call(o)
null
The censorship requires a realm-check. That seems (and seemed, to TC39 in January) better than a wart in Object.getOwnPropertyDescriptor that turns a blind eye to the existence of '__proto__' in Object.prototype in contradiction to plainly observable facts!
On Apr 21, 2013, at 11:12 AM, Brendan Eich wrote:
David Herman wrote:
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}.
Was that what we'd agreed to?
I think what Allen means is, whether or not there's a magic Object.prototype.__proto__, you can define (as in [[DefineOwnProperty]]) a plain old data property (or an accessor, for that matter, just different syntax) whose name is '__proto__' in an object literal.
No, see the spec. strawman I posted.
What I mean is that:
let obj = {__proto__: null}
will always create an object whose [[Prototype]] is null, regardless of whether or not anybody has done:
delete Object.prototype.__proto__
There is no good reason to link the semantics of __proto__ in an object literal to the existence of Dunder proto on Object.prototype. The standard semantics of object literal properties in ES5 have no dependencies upon the shape of Object.prototype.
This is specified by ES5, already.
Doesn't matter because what ES5 specifies is already incompatible with web reality when the property name is __proto__.
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 11:12 AM, Brendan Eich wrote:
David Herman wrote:
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock<allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}. Was that what we'd agreed to? I think what Allen means is, whether or not there's a magic Object.prototype.__proto__, you can
Note "can" here.
define (as in [[DefineOwnProperty]]) a plain old data property (or an accessor, for that matter, just different syntax) whose name is '__proto__' in an object literal.
No, see the spec. strawman I posted.
What I mean is that: let obj = {__proto__: null} will always create an object whose [[Prototype]] is null. Regardless of whether or not anybody has done: delete Object.prototype.__proto__.
Yes, that's what I just wrote!
What part was unclear?
There is no good reason to link the semantics of __proto__ in an object literal to the existence of Dunder proto on Object.prototype. The standard semantics of object literal properties in ES5 have no dependencies upon the shape of Object.prototype.
We agree.
This is specified by ES5, already.
Doesn't matter because what ES5 specifies is already incompatible with web reality when the property name is __proto__.
No. Browsers implementing ES5 and de-facto __proto__ use [[DefineOwnProperty]] per ES5 to make '__proto__' in 'var o = {__proto__: "haha"}' an own data property shadowing Object.prototype.__proto__.
Anything else (some variation on de-facto __proto__ that uses a magic per-object hidden [[DefineOwnProperty]], e.g.) breaks ES5.
Brendan Eich wrote:
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 11:12 AM, Brendan Eich wrote:
David Herman wrote:
On Apr 21, 2013, at 8:55 AM, Allen Wirfs-Brock<allen at wirfs-brock.com> wrote:
Deleting Object.prototype.__proto__ will not be specified as disabling {__proto__: foo}. Was that what we'd agreed to? I think what Allen means is, whether or not there's a magic Object.prototype.__proto__, you can
Note "can" here.
define (as in [[DefineOwnProperty]]) a plain old data property (or an accessor, for that matter, just different syntax) whose name is '__proto__' in an object literal.
No, see the spec. strawman I posted.
What I mean is that: let obj = {__proto__: null} will always create an object whose [[Prototype]]
Didn't you mean "an object whose property named '__proto__'" here?
is null. Regardless of whether or not anybody has done: delete Object.prototype.__proto__.
Yes, that's what I just wrote!
What part was unclear?
Sorry, I misread your "[[Prototype]] is null" as "property named '__proto__' is null".
But you cannot break ES5. Why are you changing things to deviate from it, never mind from ES6 consensus?
V8 already poisons when getOwnPropertyDescriptor has a setter and this setter is the __proto__ one:
code.google.com/p/v8/source/browse/trunk/src/v8natives.js#390
This means V8 always throws and does not preserve the same realm, if I understand what that means:
document.body.appendChild(document.createElement('iframe'))
frames[0].Object.prototype.__proto__ = Object.prototype;
Array.prototype.__proto__ = frames[0].Array.prototype;
Or maybe it was about cross-domain security?
I've also already landed a bug+patch for V8 so that a flag at launch time can eventually make that setter available: code.google.com/p/v8/issues/detail?id=2645
Let's see how this goes
On Apr 21, 2013, at 11:53 AM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
...
This is specified by ES5, already.
Doesn't matter because what ES5 specifies is already incompatible with web reality when the property name is __proto__.
No. Browsers implementing ES5 and de-facto __proto__ use [[DefineOwnProperty]] per ES5 to make __proto__ in var o = {__proto__: "haha"} an own data property shadowing Object.prototype.__proto__.
Anything else (some variation on de-facto __proto__ that uses a magic per-object hidden [[DefineOwnProperty]], e.g.) breaks ES5.
from FF 22 scratchpad:
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
var p = {__proto__: "silly"};var p = {__proto__: "silly"};
var p = {__proto__: "silly"};
Object.getOwnPropertyDescriptor(p,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(p)===Object.prototype
/*
true
*/
p.__proto__ === Object.prototype
/*
true
*/
var r = {__proto__: p};
Object.getOwnPropertyDescriptor(r,"__proto__");
/*
undefined
*/
Object.getPrototypeOf(r)===Object.prototype
/*
false
*/
I don't see any shadowing of __proto__ going on here.
Allen, that's correct/expected ... as __proto__ is "own" in Object.prototype only, or am I missing something?
On Apr 21, 2013, at 12:05 PM, Brendan Eich wrote:
What I mean is that: let obj = {__proto__: null} will always create an object whose [[Prototype]]
Didn't you mean "an object whose property named '__proto__'" here?
is null. Regardless of whether or not anybody has done: delete Object.prototype.__proto__.
Yes, that's what I just wrote!
What part was unclear?
Sorry, I misread your "[[Prototype]] is null" as "property named '__proto__' is null".
But you cannot break ES5. Why are you changing things to deviate from it, never mind from ES6 consensus?
We must be talking across each other... web reality is that var obj = {__proto__: someObj}; creates a new object whose [[Prototype]] is the value of someObj (assuming it is valid for that use). Right? Doesn't that mean that ES5 implementations that support that semantics already deviate from the ES5 spec, which says that an own property named "__proto__" should be created via [[DefineOwnProperty]]?
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:05 PM, Brendan Eich wrote:
What I mean is that: let obj = {__proto__: null} will always create an object whose [[Prototype]] Didn't you mean "an object whose property named '__proto__'" here?
is null. Regardless of whether or not anybody has done: delete Object.prototype.__proto__. Yes, that's what I just wrote!
What part was unclear? Sorry, I misread your "[[Prototype]] is null" as "property named '__proto__' is null".
But you cannot break ES5. Why are you changing things to deviate from it, never mind from ES6 consensus?
We must be talking across each other... web reality is that var obj = {__proto__: someObj}; creates a new object whose [[Prototype]] is the value of someObj (assuming it is valid for that use). Right?
Argh, you're right. I'm wrong, the de-facto standard wants [[Put]] not [[DefineOwnProperty]] and that's what ES5 specified.
I plead jetlag and throw myself on the mercy of the court!
Doesn't that mean that ES5 implementations that support that semantics already deviate from the ES5 spec which says that an own property named "__proto__" should be created via [[DefineOwnProperty]]?
That follows.
However, let's get back to (a) coffee :-); (b) ES6 and the ability to delete Object.prototype.__proto__.
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
For how much I love being ignored, I agree with Brendan: once deleted, which is an explicit action that means "get this stuff out of my environment", nothing else should be affected.
If people would like to create enumerable/configurable/writable properties at runtime together with specific inheritance, they can do that easily, as I've done in redefine.js [1] or as shown here:
function create(inheritance, properties) {
  var k, result = Object.create(inheritance);
  // copy own enumerable properties onto the newly created object
  for (k in properties) {
    if (properties.hasOwnProperty(k)) {
      result[k] = properties[k];
    }
  }
  return result;
}
so instead of
var o = { __proto__: null, a: 'a' };
you can write
var o = create(null, { a: 'a' });
Not a big deal to cover the case; it would be impossible to get rid of __proto__ otherwise, if it sticks around even after deleting it from Object.prototype (again, as an explicit developer/user intention).
However, let's get back to (a) coffee :-); (b) ES6 and the ability to delete Object.prototype.__proto__.
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
[Sorry for cutting in, but this is the core point of my confusion.]
I’d argue: delete Object.prototype.__proto__ is a measure to disable operations for untrusted code that pose a security risk.
==> FORBID mutating [[Prototype]]: foo.__proto__ = ... // set (1)
==> ALLOW: { __proto__: ... } // (2)
==> ALLOW: foo.__proto__ // get (3)
I’d allow the latter two in order not to break untrusted code that uses operations that are already possible in standard ES5 (Object.create() and Object.getPrototypeOf). AFAICT, these two operations pose no security risk.
Additionally, (1) and (3) should be disabled in a dict setting (Object.prototype not in prototype chain). Previously, I referred to the wrong numbers here.
Axel
Then you'll have ambiguous operations:
obj[key] = value;
will not always do the same thing, since
obj[key]
will not always do the same thing either.
If a program decides/needs/wants no magic, then the magic should disappear, and if it is not in the chain it should not be inherited.
I'd rather leave the literal {__proto__: ...}, if really necessary for reasons beyond my comprehension, but obj.__proto__ won't make any sense anymore if there's nothing to inherit as property behavior.
Andrea Giammarchi wrote:
for how much I love being ignored,
Not rehashing != ignoring.
I agree with Brendan: once deleted, which is an explicit action that means "get this stuff out of my environment", nothing else should be affected.
Yay, we agree.
On this particular case, I don't see how anyone couldn't, though!
On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:05 PM, Brendan Eich wrote:
What I mean is that: let obj = {__proto__: null} will always create an object whose [[Prototype]] Didn't you mean "an object whose property named '__proto__'" here?
is null. Regardless of whether or not anybody has done: delete Object.prototype.__proto__. Yes, that's what I just wrote!
What part was unclear? Sorry, I misread your "[[Prototype]] is null" as "property named '__proto__' is null".
But you cannot break ES5. Why are you changing things to deviate from it, never mind from ES6 consensus?
We must be talking across each other... web reality is that var obj = {__proto__: someObj}; creates a new object whose [[Prototype]] is the value of someObj (assuming it is valid for that use). Right?
Argh, you're right. I'm wrong, the de-facto standard wants [[Put]] not [[DefineOwnProperty]] and that's what ES5 specified.
I plead jetlag and throw myself on the mercy of the court!
Suspended sentence...and I really should be finishing preparing for my own European speaking trip rather than being sucked into this dismal mess...
Doesn't that mean that ES5 implementations that support that semantics already deviate from the ES5 spec which says that an own property named "__proto__" should be created via [[DefineOwnProperty]]?
That follows.
However, let's get back to (a) coffee :-); (b) ES6 and the ability to delete Object.prototype.__proto__.
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it? We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax. Why would we want to do that? There is arguably a good motivation for wanting to disable the ability to dynamically __proto__-modify arbitrary pre-existing objects. But what is the motivation for doing that on newly created objects?
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
Because the semantics that says you can't use [[DefineOwnProperty]] may say to go out of the way to do something else. In my strawman spec. it says use [[SetInheritance]] rather than [[Put]]. Either is a special-case semantics, and [[SetInheritance]] is a much more direct expression of the likely user intent.
Apparently Axel can ... he wants the "inherited from nowhere" getter instead of passing through Object.getPrototypeOf(object) ... thing is, Axel, if you can get rid of __proto__ in Object.prototype, and that property simply disappears, you have all possibilities to redefine it as conveniently as you want.
Not true the other way round
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
SpiderMonkey/JSC currently just use [[Set]] without any further checks, i.e. when you re-define Object.prototype.__proto__, you're able to interfere with object creation which uses __proto__. Is this intentional?
js> Object.defineProperty(Object.prototype, "__proto__", {set: function(){print("setter")}})
({})
js> ({__proto__: null})
setter
({})
- André
Apparently Axel can ... he wants the "inherited from nowhere" getter instead of passing through Object.getPrototypeOf(object) ... thing is, Axel, if you can get rid of __proto__ in Object.prototype, and that property simply disappears, you have all possibilities to redefine it as conveniently as you want.
Not true the other way round
Good point. That gives you a choice.
Love it ... reminds me of those days when [] or {} were invoking Array and Object in some env ...
Anyway, I believe Allen is trying to say that {__proto__: whatever} should be spec'd as syntax, regardless of what becomes of the Object.prototype.__proto__ property/descriptor, so that you can get rid of the latter but can always trust that syntax.
In this case __proto__ looks ugly but makes sense ... also because it does not need to loop/copy on the JS side.
So basically that would do exactly what a utopian/never-existent Object.setOwnPrototype({literal}, whatever):{literal} would do.
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it?
... because it did in ES5-conforming implementations that support __proto__ as a de-facto standard and allow delete Object.prototype.__proto__.
We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax.
What?
Mark insists on delete Object.prototype.__proto__ making the magic go away. (Summoning Mark.)
Triangle died for several reasons, and I'm surprised to hear it here. Grinding an axe?
Why would we want to do that? There is arguably a good motivation for wanting to disable the ability to dynamically __proto__-modify arbitrary pre-existing objects. But what is the motivation for doing that on newly created objects?
I will tag Mark in here, but first make my own move:
Answer: because the clear way to implement this in ES5-conforming implementations that support __proto__ is to call [[Put]] not [[DefineOwnProperty]] if the name of the property being initialized in the object literal is '__proto__', and that's what engines implement.
That makes a new de-facto standard, which you should not be wasting energy trying to break!
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
Because the semantics that says you can't use [[DefineOwnProperty]] may say to go out of the way to do something else.
Too late, ES5+reality happened. You are now proposing to break the web, at the limit. JS implementors will not go for that, so you are wasting your time.
In my strawman spec. it says use [[SetInheritance]] rather than [[Put]]. Either is a special-case semantics, and [[SetInheritance]] is a much more direct expression of the likely user intent.
Direct, schmirect.
This is about compatibility and consistency, not what you can edit into a draft. Please reconsider. Must we put this on the next meeting's agenda?
2013/4/21 Allen Wirfs-Brock <allen at wirfs-brock.com>
Also note that JSON.parse('{"__proto__": null}') does not create an object whose [[Prototype]] is null, because JSON.parse uses [[DefineOwnProperty]] to create all its properties, so this will just result in an own property whose value is null.
Side-tracking the discussion, but perhaps someone will find it interesting: that varies with JS implementations. v8 as of Chrome and Node stable sets [[Prototype]], so you can cause all kinds of crazy things to happen by doing a JSON.parse('{"__proto__": null}'), JSON.parse('{"__proto__": []}') or why not a JSON.parse('{"__proto__": {"sessionid": "cantbedeleted"}}'). v8 bleeding edge and Chrome Canary have fixed this. See code.google.com/p/v8/issues/detail?id=621 for more info.
Agreed, for consistency too: once deleted, the magic should disappear 100%.
Reality will be that most devs won't delete it, so I don't see concrete side effects out there.
However, this is how I would spec it (or better, how I would drop the axe, accepting a compromise for this property):
Object.getOwnPropertyDescriptor(Object.prototype, '__proto__') must return { enumerable: false, configurable: true, get: function usableGetter(){}, set: function usableSetter(){} }
Being configurable, anyone can decide to poison the setter, if necessary, or trap it and get rid of the magic. It is not possible to have options if the setter is poisoned by default or not exposed at all. The magic can be preserved, if necessary, and reused.
"First come, first served", the same rule that exists today if somebody wants, for some reason, Object.freeze(global); or Object.freeze(Object.prototype); with all the toString, etc. consequences we know ... it has worked 'till now; libraries can evolve and/or agree on the expected surrounding environment.
Once deleted, the whole magic should disappear. var o = {__proto__: null}; will be an instanceof Object with .hasOwnProperty('__proto__') true and .propertyIsEnumerable('__proto__') true and o.__proto__ === null.
In a few words, once __proto__ has been deleted, that name is just a property name and nothing else.
I think this makes the spec easy to define: if it is not there, everything works as it does for everything else; if it is there, define the property descriptor without adding any extra/special case to it.
This is the easiest way to go; this can make SES the same as it is today by doing:
Object.defineProperty(Object.prototype, '__proto__', {
  get: Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').get,
  set: function () { throw new Error('you are not supposed to do this'); }
});
This will make it possible to create a single entry point for the magic and monitor it:
var set = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;
delete Object.prototype.__proto__;
This will make whoever wants to do anything with this magic able to do it and, accordingly, I believe, a happy JavaScripter!
My 2 cents
Apologies, I've realized SES might want something more like this:
(function (getOwnPropertyDescriptor) {
  Object.defineProperty(Object, 'getOwnPropertyDescriptor', {
    enumerable: false,
    configurable: false,
    writable: false,
    value: function (object, property) {
      var d = getOwnPropertyDescriptor(object, property);
      if (d && property === '__proto__') {
        // censor the magic setter before handing the descriptor back
        d.set = function () { throw new Error('nope'); };
      }
      return d;
    }
  });
}(Object.getOwnPropertyDescriptor));
as this is basically what's in V8 now (assuming d && property === '__proto__' can be true only for Object.prototype)
well, I am sure you all got the point about the possibility of replicating or being completely free from this property
On Sun, Apr 21, 2013 at 1:37 PM, Brendan Eich <brendan at mozilla.com> wrote:
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it?
... because it did in ES5-conforming implementations that support __proto__ as a de-facto standard and allow delete Object.prototype.__proto__.
We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax.
What?
Mark insists on delete Object.prototype.__proto__ making the magic go away. (Summoning Mark.)
;)
I agree with the spirit of what Allen is saying, but I'm not sure if I agree on the particulars. {__proto__: whatever, ....} is special syntax that sets the [[Prototype]] of the resulting object to the value of whatever. This syntax does not invoke [[Put]] at all and has no relationship (beyond evocative similarity) to the property Object.prototype.__proto__, whether it has been deleted or not. This special status does not apply to obj.__proto__ = whatever, nor to object.__proto__. Those two do a simple [[Set]] (or, in ES5 terms, [[Put]]) and [[Get]] respectively, and so do depend on the existence and nature of an inherited __proto__ property.
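A small illustration of the two behaviors Mark distinguishes, assuming the special-syntax reading of the literal:

var p = {};

// Special syntax: the literal's [[Prototype]] is p from birth;
// no setter on Object.prototype is consulted.
var a = { __proto__: p };

// Ordinary member assignment: a [[Set]] that runs the inherited
// Object.prototype.__proto__ setter (if it is still there).
var b = {};
b.__proto__ = p;

Object.getPrototypeOf(a) === p;   // true
Object.getPrototypeOf(b) === p;   // true, but only because the inherited accessor exists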
Triangle died for several reasons, and I'm surprised to hear it here. Grinding an axe?
No, Allen's point here is valid. Only if {__proto__: ....} is honored as special syntax do we no longer gain enough additional benefit from triangle for it to be worth the cost.
Why would we want to do that? There is arguably a good motivation for wanting to disable the ability to dynamically __proto__-modify arbitrary pre-existing objects. But what is the motivation for doing that on newly created objects?
I will tag Mark in here, but first make my own move:
Answer: because the clear way to implement this in ES5-conforming implementations that support __proto__ is to call [[Put]] not [[DefineOwnProperty]] if the name of the property being initialized in the object literal is '__proto__', and that's what engines implement.
That makes a new de-facto standard, which you should not be wasting energy trying to break!
It sounds like you agree that defining {__proto__: whatever} in terms of [[Put]] (aka [[Set]]) causes problems. So let's not do that. We don't actually need to define it in terms of [[DefineOwnProperty]] either. Like triangle, this happens when we create a new object, so we can just create a new object with its [[Prototype]] already initialized to whatever.
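A rough desugaring of the semantics Mark and Allen are converging on (a sketch, not spec text): the [[Prototype]] is fixed at allocation and the remaining properties are defined, so neither Object.prototype.__proto__ nor any user-installed setter is consulted:

var p = { greet: function () { return 'hi'; } };

var viaLiteral = { __proto__: p, x: 1 };

var viaCreate = Object.create(p, {
  x: { value: 1, writable: true, enumerable: true, configurable: true }
});

// Both objects have p as [[Prototype]] and an own data property x,
// and neither evaluation performed a [[Put]] of '__proto__'.
Object.getPrototypeOf(viaLiteral) === p;   // true
Object.getPrototypeOf(viaCreate) === p;    // true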
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
Because the semantics that says you can't use [[DefineOwnProperty]] may say to go out of the way to do something else.
Too late, ES5+reality happened. You are now proposing to break the web, at the limit. JS implementors will not go for that, so you are wasting your time.
For all parties, some examples of legacy uses of {__proto__: ....}, whether hypothetical or observed, would help a lot. I doubt the stance "special literal syntax for initializing [[Prototype]] without using either [[Put]]/[[Set]] nor [[DefineOwnProperty]]" would be incompatible with web reality.
Off topic: it's really funny that it took 10 years to teach people how inheritance works in JS, and today there's some legacy code that, instead of constructors, uses their objects or just plain objects as prototypes, so that {__proto__: whatever} is considered a used pattern.
If the most common case is {__proto__: null}, though, I would not spend time explaining an exception in the syntax instead of keeping on promoting Object.create(null); which is the way to go (it was at ES5 time, at least).
Warning: The following is a sickening idea. I would really hate to see us do it. But I feel obliged to post it as it may in fact be the right thing to do.
Given: Web reality drives us towards recognizing {...., __proto__: ...., ....} as special syntax for initializing [[Prototype]].
Given: JSON demands that the "__proto__" in JSON.parse('{...., "__proto__": ...., ....}') not be treated as a special case, and causes just the normal [[DefineOwnProperty]].
Given: Web reality does not make demands on the meaning of {...., "__proto__": ...., ....}
Given: The ES5 JSON spec demands that JSON.parse('{...., __proto__: ...., ....}') be rejected as an error.
This suggests that, in JS as well, the "__proto__" in {...., "__proto__": ...., ....} not be treated as a special case. Quoting it turns off the special treatment.
that won't work?
"proto" in {"proto":{}}; // true "proto" in {proto:{}}; // still true, since "proto" in Object.prototype, unless deleted
am I wrong?
uh wait ... you meant ... uh ... wait, sorry, OK :D
So {__proto__: Array.prototype} is an instanceof Array, while {"__proto__": Array.prototype} is an object with a "__proto__" property that points to Array.prototype,
which means that "__proto__" is magically overwritten ... yak?!
That said, I like the dual behavior, if only because there's no way, even by accident, somebody can change a prototype chain in a for/in loop:
var copied = {__proto__: null, "__proto__": Array.prototype};
var copy = {};
for (var key in copied) {
  if (copied.hasOwnProperty(key)) {
    copy[key] = copied[key]; // different from
                             // copy.__proto__ = copied.__proto__;
  }
}
copy["__proto__"] === Array.prototype; // true
I've got the feeling this is not going that far though ... but it promotes __proto__ avoidance ... now I am in conflict with myself :D
Mark S. Miller wrote:
For all parties, some examples of legacy uses of {__proto__: ....}, whether hypothetical or observed, would help a lot. I doubt the stance "special literal syntax for initializing [[Prototype]] without using either [[Put]]/[[Set]] nor [[DefineOwnProperty]]" would be incompatible with web reality.
It's possible we could make the change and nothing would break. Usually the burden of proof is on the people proposing the change, though (you and Allen).
If no implementation supports reconfiguration (delete or replacement) of Object.prototype.__proto__, then as you note, it can't yet matter and there's no observable compatibility break. But is there no such implementation?
On Apr 21, 2013, at 1:37 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it?
... because it did in ES5-conforming implementations that support __proto__ as a de-facto standard and allow delete Object.prototype.__proto__.
Was that intentional WRT object literals or an unintended consequence of using [[Put]] in processing that property? Prior to the ES6 discussions over the last year, was that even the case? For each major implementation, how long has deleting Object.prototype.__proto__ (if it was even possible) had that behavior?
The point is that I don't think there is any long-standing behavior in this regard relating to object literals and deleting Object.prototype.__proto__ that the web is dependent upon. We are free to specify a semantics that will make sense, now and for the long term.
We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax.
What?
Mark insists on delete Object.prototype.__proto__ making the magic go away. (Summoning Mark.)
The major magic is foo.__proto__ = bar dynamically changing foo's [[Prototype]]. There is nothing magic about an object literal that uses some weird syntax to specify the [[Prototype]] that a newly allocated object will have.
Triangle died for several reasons, and I'm surprised to hear it here. Grinding an axe?
Not at all! You frequently have said that standardizing __proto__ takes the wind out of other features that support various use cases of defining/manipulating an object's [[Prototype]]. One of those features is using an object literal as a way to declaratively describe an object that inherits from something other than Object.prototype (including null).
So, if {__proto__: null} is a solution for that use case it should be specified with reasonable semantics and not be a conditional feature whose availability is tied to something else that many consider undesirable. There is no need for any magic here and no need to introduce additional imperative behavior by calling [[Set]]. It is just creating an object with a syntactically provided [[Prototype]] value.
Specifying it as doing a [[Set]] just opens the door for people inserting their own accessor.
In ES5 we explicitly changed object literals to be immune from such tampering, with apparently no ill effect. I don't see why we would want to re-open such an avenue. It certainly is necessary from a specification perspective, and it isn't going to have a big implementation impact, to do it right.
Why would we want to do that? There is arguably a good motivation for wanting to disable the ability to dynamically __proto__-modify arbitrary pre-existing objects. But what is the motivation for doing that on newly created objects?
I will tag Mark in here, but first make my own move:
Answer: because the clear way to implement this in ES5-conforming implementations that support __proto__ is to call [[Put]] not [[DefineOwnProperty]] if the name of the property being initialized in the object literal is '__proto__', and that's what engines implement.
I'd say that the clearest conforming way is to just create the new object with the [[Prototype]] value that was provided using the literal. [[Put]] isn't needed. [[SetInheritance]] also isn't really needed, but it is semantically much closer than doing an operation that invokes an arbitrary set accessor.
That makes a new de-facto standard, which you should not be wasting energy trying to break!
I don't think there currently is a de facto standard at these edges. Our job is to sort it out and make the best possible language for the long run.
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
Because the semantics that says you can't use [[DefineOwnProperty]] may say to go out of the way to do something else.
Too late, ES5+reality happened. You are now proposing to break the web, at the limit. JS implementors will not go for that, so you are wasting your time.
No. Both FF and V8 apparently have been evolving in this area over the last year, and IE hasn't even entered the field yet. Show me any significant, browser-interoperable code on the web that is observably dependent upon object literals doing an observable [[Put]].
In my strawman spec. it says use [[SetInheritance]] rather than [[Put]]. Either is a special-case semantics, and [[SetInheritance]] is a much more direct expression of the likely user intent.
Direct, schmirect.
This is about compatibility and consistency, not what you can edit into a draft. Please reconsider. Must we put this on the next meeting's agenda?
I have to respectfully disagree. Show me the code that would break and explain why whatever consistency you are imagining is worth making object literals that specify a [[Prototype]] an unreliable feature. How is that better for anyone?
Yes, I intended to put this on the agenda since I now have solid spec. language to review.
We're not concerned if there is any such implementation. We care about the intersection of enough implementations that it becomes an issue for cross-browser code. And code that only works on one browser has been code that we've always[1] been willing to break going forward.
[1] At least during all the time I've been on the committee.
On Apr 21, 2013, at 3:11 PM, Mark S. Miller wrote:
On Sun, Apr 21, 2013 at 1:37 PM, Brendan Eich <brendan at mozilla.com> wrote: Allen Wirfs-Brock wrote: On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it?
... because it did in ES5-conforming implementations that support __proto__ as a de-facto standard and allow delete Object.prototype.__proto__.
We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax.
What?
Mark insists on delete Object.prototype.__proto__ making the magic go away. (Summoning Mark.)
;)
I agree with the spirit of what Allen is saying, but I'm not sure if I agree on the particulars. {__proto__: whatever, ....} is special syntax that sets the [[Prototype]] of the resulting object to the value of whatever. This syntax does not invoke [[Put]] at all and has no relationship (beyond evocative similarity) to the property Object.prototype.__proto__, whether it has been deleted or not. This special status does not apply to obj.__proto__ = whatever, nor to object.__proto__. Those two do a simple [[Set]] (or, in ES5 terms, [[Put]]) and [[Get]] respectively, and so do depend on the existence and nature of an inherited __proto__ property.
Mark, as far as I could tell you are agreeing with the particulars I specified for {__proto__: whatever, ...}. Please check the spec. strawman I posted.
Where we may disagree is WRT whether it is reasonable for Dunder proto to be a regular accessor property. Note that other characteristics of it that have been proposed (delete behavior, "not reflected", get function that throws when directly invoked, etc.) seem to require the use of an exotic object for Object.prototype. Once you go that far, we might as well define Dunder proto behavior as part of the exotic [[Get]] and [[Set]] behavior of that object.
On Apr 21, 2013, at 3:27 PM, Mark S. Miller wrote:
Warning: The following is a sickening idea. I would really hate to see us do it. But I feel obliged to post it as it may in fact be the right thing to do.
Given: Web reality drives us towards recognizing {...., __proto__: ...., ....} as special syntax for initializing [[Prototype]].
Given: JSON demands that the "__proto__" in JSON.parse('{...., "__proto__": ...., ....}') not be treated as a special case, and causes just the normal [[DefineOwnProperty]].
Given: Web reality does not make demands on the meaning of {...., "__proto__": ...., ....}
Given: The ES5 JSON spec demands that JSON.parse('{...., __proto__: ...., ....}') be rejected as an error.
This suggests that, in JS as well, the "__proto__" in {...., "__proto__": ...., ....} not be treated as a special case. Quoting it turns off the special treatment.
I've seriously considered proposing this. It slightly complicates the specification but all-in-all I think it might be a good idea.
On Sun, Apr 21, 2013 at 6:11 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
On Apr 21, 2013, at 1:37 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
On Apr 21, 2013, at 12:31 PM, Brendan Eich wrote:
You don't want that to affect object literals evaluated in the same realm after such a deletion. Why not?
Why should it?
... because it did in ES5-conforming implementations that support __proto__ as a de-facto standard and allow delete Object.prototype.__proto__.
Was that intentional WRT object literals or an unintended consequence of using [[Put]] in processing that property? Prior to the ES6 discussions over the last year, was that even the case? For each major implementation, how long has deleting Object.prototype.__proto__ (if it was even possible) had that behavior?
The point is that I don't think there is any long-standing behavior in this regard relating to object literals and deleting Object.prototype.__proto__ that the web is dependent upon. We are free to specify a semantics that will make sense, now and for the long term.
We already used the existence of {__proto__: whatever} to get rid of <| as declarative syntax for defining an object literal with a [[Prototype]] other than Object.prototype. Making {__proto__: whatever} only work some of the time means it isn't a reliable declarative syntax.
What?
Mark insists on delete Object.prototype.__proto__ making the magic go away. (Summoning Mark.)
The major magic is foo.__proto__ = bar dynamically changing foo's [[Prototype]]. There is nothing magic about an object literal that uses some weird syntax to specify the [[Prototype]] that a newly allocated object will have.
Triangle died for several reasons, and I'm surprised to hear it here. Grinding an axe?
Not at all! You frequently have said that standardizing __proto__ takes the wind out of other features that support various use cases of defining/manipulating an object's [[Prototype]]. One of those features is using an object literal as a way to declaratively describe an object that inherits from something other than Object.prototype (including null).
So, if {__proto__: null} is a solution for that use case it should be specified with reasonable semantics and not be a conditional feature whose availability is tied to something else that many consider undesirable. There is no need for any magic here and no need to introduce additional imperative behavior by calling [[Set]]. It is just creating an object with a syntactically provided [[Prototype]] value.
Specifying it as doing a [[Set]] just opens the door for people inserting their own accessor.
In ES5 we explicitly changed object literals to be immune from such tampering, with apparently no ill effect. I don't see why we would want to re-open such an avenue.
I am embarrassed that I hadn't noticed that earlier. Allen is right. Having this literal syntax imply a [[Set]] re-introduces a security vulnerability that ES5 had fixed. This is a big deal. The object-literal semantics that it seems Allen and I agree on has no such vulnerability.
It certainly is necessary from a specification perspective, and it isn't going to have a big implementation impact, to do it right.
Why would we want to do that? There is arguably a good motivation for wanting to disable the ability to dynamically __proto__-modify arbitrary pre-existing objects. But what is the motivation for doing that on newly created objects?
I will tag Mark in here, but first make my own move:
Answer: because the clear way to implement this in ES5-conforming implementations that support __proto__ is to call [[Put]] not [[DefineOwnProperty]] if the name of the property being initialized in the object literal is '__proto__', and that's what engines implement.
I'd say that the clearest conforming way is to just create the new object with the [[Prototype]] value that was provided using the literal. [[Put]] isn't needed. [[SetInheritance]] also isn't really needed, but it is semantically much closer than doing an operation that invokes an arbitrary set accessor.
That makes a new de-facto standard, which you should not be wasting energy trying to break!
I don't think there currently is a de facto standard at these edges. Our job is to sort it out and make the best possible language for the long run.
SpiderMonkey at least goes out of its way to do [[Set]] (let's call it) not [[DefineOwnProperty]] for 'o = {__proto__: 42}', so why wouldn't [[Set]] create a fresh property, seeing nothing on Object.prototype named '__proto__' with a setter to run?
Because the semantics that says you can't use [[DefineOwnProperty]] may say to go out of the way to do something else.
Too late, ES5+reality happened. You are now proposing to break the web, at the limit. JS implementors will not go for that, so you are wasting your time.
No. Both FF and V8 apparently have been evolving in this area over the last year, and IE hasn't even entered the field yet. Show me any significant, browser-interoperable code on the web that is observably dependent upon object literals doing an observable [[Put]].
I'll go further with the challenge by dropping the "on the web". Show any significant legacy-browser interoperable code (IE aside since it hadn't implemented __proto__ historically) which would not be compatible with the literal semantics Allen proposes.
On Sun, Apr 21, 2013 at 6:21 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
On Apr 21, 2013, at 3:27 PM, Mark S. Miller wrote:
Warning: The following is a sickening idea. I would really hate to see us do it. But I feel obliged to post it as it may in fact be the right thing to do.
Given: Web reality drives us towards recognizing {...., __proto__: ...., ....} as special syntax for initializing [[Prototype]].
Given: JSON demands that the "__proto__" in JSON.parse('{...., "__proto__": ...., ....}') not be treated as a special case, and causes just the normal [[DefineOwnProperty]].
Given: Web reality does not make demands on the meaning of {...., "__proto__": ...., ....}
Given: The ES5 JSON spec demands that JSON.parse('{...., __proto__: ...., ....}') be rejected as an error.
This suggests that, in JS as well, the "__proto__" in {...., "__proto__": ...., ....} not be treated as a special case. Quoting it turns off the special treatment.
I've seriously considered proposing this. It slightly complicates the specification but all-in-all I think it might be a good idea.
I fear you may be right. But I don't have to like it ;)
On Sun, Apr 21, 2013 at 6:11 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
We are free to specify a semantics that will make sense, now and for the long term.
Then, for the long term, if all I understood about this thing is that it stinks for everybody, you should really consider giving developers the possibility to get rid of this property completely, if desired, instead of making it indestructible, IMHO.
Allen's immediately previous sentence was
"The point is that I don't think there is any long standing behavior in this regard relating to object literals and deleting Object.prototype.proto that the web is dependent upon."
This sets the context for understanding Allen's next sentence. We are constrained by cross-browser legacy. So long as IE was not planning to implement proto, we avoided standardizing it. In the current situation, TC39 is powerless to prevent proto becoming a cross-browser standard. Our only choices are
- We design and codify a spec that all these browsers can agree on, that does not break existing cross-browser (IE aside) web content, and that is as clean as possible within those constraints.
- We do not do so, in which case each browser gropes separately to be compatible enough with what the other browsers seem to be doing.
An example of the consequences of #2 is the wildly different and often bizarre semantics of block-nested functions. This was the consequence of omitting these from "official" JavaScript in a social/political context where all browsers felt compelled to implement them anyway. They groped towards compatibility without coordination and arrived at painfully incoherent results. (Fortunately, we were able to quarantine this bizarreness to sloppy mode.)
As a standards committee, we need to be realistic about when we can exercise normative power and when we can't. I'll even agree that, when we're uncertain, we should err on the side of cleaning things up. Until IE changed expectations, we were doing exactly that by avoiding __proto__. Today, we no longer have that happy uncertainty.
So as my last post said, if there's no observable difference in the field in switching from [[Put]] to [[SetInheritance]], then you and Mark are right, and we can do that in ES6 without fear of breaking any code.
This change is observable in ES6, though. That's why I summoned Mark: to make sure there's no SES issue with setting ab initio the [[Prototype]] (or now I guess it's called [[Inheritance]]) of a fresh object created via an object literal.
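A sketch of the observable difference under discussion, assuming Object.prototype.__proto__ is configurable as proposed:

delete Object.prototype.__proto__;
var p = {};
var o = { __proto__: p };
// ES5-style [[Put]]: with the accessor gone, the literal would create an own
// data property, so Object.getPrototypeOf(o) === Object.prototype.
// [[SetInheritance]] as special syntax: the literal still initializes the
// prototype, so Object.getPrototypeOf(o) === p.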
Allen Wirfs-Brock wrote:
Triangle died for several reasons, and I'm surprised to hear it here. Grinding an axe?
Not at all! You frequently have said that standardizing __proto__ takes the wind out of other features that support various use cases of defining/manipulating an object's [[Prototype]]. One of those features is using an object literal as a way to declaratively describe an object that inherits from something other than Object.prototype (including null).
The argument that code which deletes Object.prototype.__proto__ should not affect object literals wiring up from-birth inheritance is fair. I shouldn't have mentioned axes, sorry.
The only issue is the ES6 => SES one. If Mark's happy, then we are done
as far as this [[Put]] vs. [[SetInheritance]] argument goes.
The quoted vs. unquoted difference is horrifying to me, I threw up in my mouth a little. More on that elsewhere.
Andrea may be asking for less than the standard someday removing proto, if I read him right. He's asking not to keep it "indestructible", i.e., to make
delete Object.prototype.__proto__
remove all the magic, including for 'o = {__proto__: p}'.
But that seems to require using [[Put]] rather than [[SetInheritance]], and you said that's a security problem. Could you show how? I asked in my immediately preceding message how creating custom proto-chains for guaranteed fresh objects via literals makes trouble for SES.
Again, you're inventing something new. Always risky, ignoring the throw-up-in-mouth effect :-|.
Now the burden is on someone (who?) to find code on the web that uses the quoted form expecting the same results as the unquoted form, which is what all implementations I know of indeed provide.
Who will do that search? Finding such code would disprove the hypothesis of course, but usually we end up with absence of evidence, which is not evidence of absence.
On Sun, Apr 21, 2013 at 8:41 PM, Brendan Eich <brendan at mozilla.com> wrote:
So as my last post said, if there's no observable difference in the field in switching from [[Put]] to [[SetInheritance]], then you and Mark are right, and we can do that in ES6 without fear of breaking any code.
This change is observable in ES6, though. That's why I summoned Mark: to make sure there's no SES issue with setting ab initio the [[Prototype]] (or now I guess it's called [[Inheritance]]) of a fresh object created via an object literal.
Given that the programmer is aware of this syntactic special case, I don't think there's a problem. As someone else commented, it seems equivalent to what you express otherwise using Object.create. I see a few potential problems:
1. If a programmer is unaware of the syntax, he might innocently write this (implausible) or innocently read this and misinterpret its meaning (much more plausible). It adds yet another special case to a language already burdened with them.
2. If a code generating program generates an object literal without checking for the "__proto__" special case in its input, it can generate code that does not mean what it intends. This would be an encoding error that's structurally similar to some encoding errors that lead to XSS, but is much less likely to do so in this case.
3. If we extend object literals with the computed name expression square-bracket syntax, consider
{ [ e ]: .... }
where e is an expression that evaluates to the string "__proto__". This must not fall into the equivalent of the syntactic special case. To do so is to move the encoding error of #2 from code-generating programs to regular JS reflective programs.
My sense is that we can live with #1 and #2, but these are worth debating. If the computed case of #3 were similarly special-cased, though, my sense is we could not live with that.
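A hedged sketch of the encoding hazard in #2 -- toObjectLiteral is a made-up, deliberately naive serializer, not anything from the thread:

function toObjectLiteral(record) {
  // Emits source text like ({ key: <json>, ... }) with unquoted keys.
  return "({ " + Object.keys(record).map(function (k) {
    return k + ": " + JSON.stringify(record[k]); // no check for "__proto__"!
  }).join(", ") + " })";
}
// If record has a "__proto__" key, the generated literal sets the new object's
// [[Prototype]] instead of defining an own data property -- the encoding error
// Mark describes.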
Allen Wirfs-Brock wrote:
Triangle died for several reasons, and I'm surprised to hear it here.
Grinding an axe?
Not at all! You frequently have said that standardizing __proto__ takes the wind out of other features that support various use cases of defining/manipulating an object's [[Prototype]]. One of those features is using an object literal as a way to declaratively describe an object that inherits from something other than Object.prototype (including null).
The argument that code which deletes Object.prototype.__proto__ should not affect object literals wiring up from-birth inheritance is fair. I shouldn't have mentioned axes, sorry.
The only issue is the ES6 => SES one. If Mark's happy, then we are done as far as this [[Put]] vs. [[SetInheritance]] argument goes.
As long as we're agreed on #3 being exempt from that special case, I'm happy.
The quoted vs. unquoted difference is horrifying to me, I threw up in my mouth a little. More on that elsewhere.
Glad you share my misery over that one!
Fortunately, answering your previous question, I realized that we have a much safer escape hatch, even though we would lose the correspondence with JSON of the previous proposal:
{ [ "__proto__" ]: .... }
defines a normal property named "__proto__" with no special case. As I write above, I think we're forced into this anyway by other considerations.
yep, I was asking to make it 100% neutral, but that specific case, as written many replies before and not only by me, seems to be reasonable, as long as delete Object.prototype.__proto__ is possible and, if this is possible, hoping that Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set will be reusable instead of poisoned.
Ok, so (after pushing back based on recorded consensus, which I think is fair), I'm ok with
- Object.prototype.__proto__ is configurable.
- o = {__proto__: p} semantics changes from ES5's [[Put]] to ES6's [[SetInheritance]].
I'm not sure everyone agrees, but let's assume these two.
Why, given these, must Object.prototype.__proto__ not reflect at all via Object.getOwnPropertyDescriptor? What about Object.getOwnPropertyNames? We have to sink some cost somewhere, and TC39 had a January near-consensus of "accessor, with get and set functions that throw on cross-realm |this| vs. the accessor function". That seems the best solution to me still, and it should fall out of cross-window/frame wrapper policy hooks fairly cheaply in SpiderMonkey.
Re: Mark's
- If we extend object literals with the computed name expression square-bracket syntax, consider
{ [ e ]: .... }
where e is an expression that evaluates to the string "__proto__". This must not fall into the equivalent of the syntactic special case. To do so is to move the encoding error of #2 from code-generating programs to regular JS reflective programs.
I agree with Mark. This does indeed mean that
n = "proto"; o = { [n]: p }
will define a data property, not set inheritance. That's the only way quoting can matter, i.e.
o = { ['proto']: p }
defines a plain old data property. No need to mimic JSON and make a quoted vs. unquoted difference without the [] for computed property names. Making that change requires a search of the web to find disproof, which is never a complete search, which leaves us hanging and makes implementors unhappy to take unknown risk.
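Concretely, under the semantics Brendan describes (assuming ES6-style computed property names):

var p = { greet: "hi" };
var n = "__proto__";
var a = { __proto__: p };       // special form: a's [[Prototype]] is p
var b = { ["__proto__"]: p };   // computed name: own data property, no magic
var c = { [n]: p };             // same: own data property, no magic
Object.getPrototypeOf(a) === p;                 // true
b.hasOwnProperty("__proto__");                  // true
Object.getPrototypeOf(c) === Object.prototype;  // true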
On Sun, Apr 21, 2013 at 8:45 PM, Brendan Eich <brendan at mozilla.com> wrote:
Andrea may be asking for less than the standard someday removing proto, if I read him right. He's asking not to keep it "indestructible", i.e., to make
delete Object.prototype.__proto__
remove all the magic, including for 'o = {__proto__: p}'.
Andrea, my apologies. I jumped into this thread in the middle and misinterpreted. I still don't like that idea, but not on the grounds previously stated.
But that seems to require using [[Put]] rather than [[SetInheritance]], and you said that's a security problem. Could you show how? I asked in my immediately preceding message how creating custom proto-chains for guaranteed fresh objects via literals makes trouble for SES.
I truly hate to make any security argument whatsoever based on the Same Origin Policy (SOP). However, one of the legacy constraints of the web we must respect is not to break security that was successfully built on the SOP, even if the form of security at stake was a twisted miserable affair. I agree that we should generally respect this constraint.
An old argument about the safety of having <script> tags do cross-origin HTTP GETs without any special arrangement is that the requesting page could not read information contained in the script unless the script's behavior intends to reveal that information. I think this rationale was silly, as the script is being loaded into a potentially hostile environment that can already cause the script to behave in ways very different from its author's intentions. The counter-argument was that at least literals themselves are safe. The counter-counter argument, presented as a vulnerability demonstrated by an attack (citation anyone?), is that on browsers with getters and setters, the ES3 literal-invokes-[[Put]] behavior enables the requestor to steal even information within JS literals. The argument should have stopped there, with the defenders of the SOP conceding that script tags allow cross-origin reads with no further protection.
Instead, this continued to be seen as a meaningful vulnerability to be fixed. ES5 fixed this vulnerability regarding literals. In my opinion, this fix creates only confusion and a false sense of security. The script as a whole is still being loaded into a hostile environment, and it is not realistic to expect programmers to write interesting scripts that keep secrets when running in that environment. But I know of no way to get the advocates of the SOP to stop. Witness the recent agreement, without dissent, that a cross-origin module-source GET where the module loader is parameterized with the translate hook should require the special UMP/CORS permission, whereas a cross-origin module-source GET without a translate hook does not need this permission.
Given that we are stuck with this as the explanation of browser security constraints, we at least need a simple line between what is and is not directly protected. With ES5 the line is "literals". This is simpler to understand than "literals that don't use the special __proto__ syntax." If programmers rely on this simpler explanation, the following literal should still be able to protect its secrets when loaded into a hostile environment:
{ __proto__: { secret: "857234850349859234" }}
My preference is that we get the world of the web to admit that there is no meaningful protection of a script's secrets from its loading environment. But I am not hopeful. In the absence of this agreement, we must not break the supposed security vulnerability that the web thinks we fixed in ES5. And so long as programmers are encouraged to count on this protection, we must not make the explanation of the delicate property they are counting on more complex than they will accurately remember.
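For context, a hedged sketch of the attack shape being alluded to; the name exfiltrate is a placeholder, and ES3-era literal-invokes-[[Put]] behavior is assumed:

// Hostile loading page:
Object.prototype.__defineSetter__("secret", function (value) {
  exfiltrate(value); // hypothetical sink for the stolen value
});
// <script src="https://victim.example/data.js"></script> whose body contains
//   var config = { secret: "857234850349859234" };
// would trigger the setter above under [[Put]] semantics. ES5's switch to
// [[DefineOwnProperty]] for literal properties is the "fix" described here.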
The process of writing this down makes me even more aware of how convoluted it is. I retract my statement that "This is a big deal." I could probably be argued out of it if there was something significant to be gained. In this case, I don't think there is.
On Sun, Apr 21, 2013 at 9:42 PM, Brendan Eich <brendan at mozilla.com> wrote:
Ok, so (after pushing back based on recorded consensus, which I think is fair), I'm ok with
- Object.prototype.__proto__ is configurable.
- o = {__proto__: p} semantics changes from ES5's [[Put]] to ES6's [[SetInheritance]].
I'm not sure everyone agrees, but let's assume these two.
Why, given these, must Object.prototype.__proto__ not reflect at all via Object.getOwnPropertyDescriptor?
What? Where is that proposed? (And again, I am jumping into this thread in the middle, so apologies for lack of context.)
What about Object.getOwnPropertyNames? We have to sink some cost somewhere, and TC39 had a January near-consensus of "accessor, with get and set functions that throw on cross-realm |this| vs. the accessor function". That seems the best solution to me still, and it should fall out of cross-window/frame wrapper policy hooks fairly cheaply in SpiderMonkey.
That sounds good to me too. Where is the alternative explained?
if Object.getPrototypeOf() works there, how is that secret protected?
In any case, do you agree that since you can configure Object.prototype.__proto__ you could also poison it on your own instead of proposing an unusable poisoned setter as it is now in V8?
I am talking about this possibility (already in Firefox):
var set = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;
delete Object.prototype.__proto__;
together with this for, eventually, SES
(function (getOwnPropertyDescriptor) {
  Object.defineProperty(Object, 'getOwnPropertyDescriptor', {
    enumerable: false,
    configurable: false,
    writable: false,
    value: function (object, property) {
      var d = getOwnPropertyDescriptor(object, property);
      if (d && property === '__proto__') {
        d.set = function () { throw new Error('nope'); };
      }
      return d; // note: the original snippet omitted this return
    }
  });
}(Object.getOwnPropertyDescriptor));
So that whoever wants to delete it can reuse the same "magic" and work in a __proto__-free environment, the literal's special syntax apart (I don't care much about that; it's an equivalent shortcut for Object.create(), so it's OK)
What do you think?
On Sun, Apr 21, 2013 at 9:56 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:
if Object.getPrototypeOf() works there, how is that secret protected?
Assuming the script does not make the object that the literal evaluates to available to the hostile environment it is running in. This was always the delicate assumption of the old argument.
In any case, do you agree that since you can configure Object.prototype.__proto__ you could also poison it on your own instead of proposing an unusable poisoned setter as it is now in V8?
I am talking about this possibility (already in Firefox):
var set = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;
delete Object.prototype.__proto__;
together with this for, eventually, SES
(function (getOwnPropertyDescriptor) {
  Object.defineProperty(Object, 'getOwnPropertyDescriptor', {
    enumerable: false,
    configurable: false,
    writable: false,
    value: function (object, property) {
      var d = getOwnPropertyDescriptor(object, property);
      if (d && property === '__proto__') {
        d.set = function () { throw new Error('nope'); };
      }
      return d;
    }
  });
}(Object.getOwnPropertyDescriptor));
So that whoever wants to delete it can reuse the same "magic" and work in a __proto__-free environment, the literal's special syntax apart (I don't care much about that; it's an equivalent shortcut for Object.create(), so it's OK)
What do you think?
I'm sorry, I don't get the point. What are you trying to demonstrate?
right now this is the V8 situation: code.google.com/p/v8/issues/detail?id=2645
which is different from SpiderMonkey one where you can:
var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__');
delete Object.prototype.__proto__; // optionally
var a = {}, b = {};
protoSetter.call(a, b);
b.isPrototypeOf(a); // true
Which makes the "magic" of the proto available. Otherwise it does not make sense to have it configurable but unusable because in some very specific case that setter might be needed, as it is now, mostly to promote NodeList collections into something else (the Zepto case)
I don't think Zepto should change (apparently it won't in any case) but I don't think for that single, very specific, case, the whole environment should be exposed to proto.
In SES case, you could poison upfront getOwnpropertyDescriptor on SES side instead of expecting a poisoned thing from native V8 as proto descriptor and it's setter.
The patch I've linked and proposed makes that setter available, but this should, in my opinion, be the standard behavior, so that is possible to drop the proto and reuse in those very few little edge cases.
Is this any more clear? Thanks for your patience, it's clearly hard for me to express myself in your own terms.
oh dear ...
var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set; // <==== forgot the setter in the previous example
Mark Miller wrote:
On Sun, Apr 21, 2013 at 9:42 PM, Brendan Eich <brendan at mozilla.com <mailto:brendan at mozilla.com>> wrote:
Ok, so (after pushing back based on recorded consensus, which I think is fair), I'm ok with
- Object.prototype.__proto__ is configurable.
- o = {__proto__: p} semantics changes from ES5's [[Put]] to ES6's [[SetInheritance]].
I'm not sure everyone agrees, but let's assume these two. Why, given these, must Object.prototype.__proto__ not reflect at all via Object.getOwnPropertyDescriptor?
What? Where is that proposed? (And again, I am jumping into this thread in the middle, so apologies for lack of context.)
Up-thread, near the o.p.:
What about Object.getOwnPropertyNames? We have to sink some cost somewhere, and TC39 had a January near-consensus of "accessor, with get and set functions that throw on cross-realm |this| vs. the accessor function". That seems the best solution to me still, and it should fall out of cross-window/frame wrapper policy hooks fairly cheaply in SpiderMonkey.
That sounds good to me too. Where is the alternative explained?
esdiscuss/2013-April/029962 which links to a PDF.
I have difficulties catching up with everything that has been said in this thread, my apologies if I repeat parts of the discussion. Here is what I thought we agreed upon earlier:
1. __proto__ is an accessor property on Object.prototype
2. reflecting it via Object.getOwnPropertyDescriptor gives you a poisoned setter
3. __proto__ can be deleted from Object.prototype
4. __proto__ in object literals is a special piece of syntax independent from (1)
I think this all makes sense, and we are working towards making V8 implement this. (3) implies that Object.prototype.__proto__ is configurable, which should also imply that you can just remove the setter, not the getter, which I believe addresses Alex' initial request.
The main question I still had was what "poisoning" means. Currently, V8 simply returns an always-throw function. Others (e.g. Brendan) seemed to assume that we only poison sets for other realms.
However, in that case, I actually think that there is no need to have any special poisoning semantics when reflecting __proto__ -- mainly because the cross-realm check is already necessary in the unreflected case: you can construct an object o in realm A with an Object.prototype from another realm B on its proto chain. If you deleted __proto__ on realm A's Object.prototype, I don't think it should still be possible to assign to o.__proto__, should it? If that's so then there is absolutely nothing magic remaining about __proto__ as a property.
As for __proto__ in object literals, I definitely agree with Allen and Mark that this should be special syntax, and not dependent on Object.prototype.__proto__, for the reasons mentioned. Whether a difference should be made between quoted and unquoted I don't know, I must have missed the rationale for such a move.
Andreas Rossberg wrote:
I have difficulties catching up with everything that has been said in this thread, my apologies if I repeat parts of the discussion. Here is what I thought we agreed upon earlier:
- __proto__ is an accessor property on Object.prototype
- reflecting it via Object.getOwnPropertyDescriptor gives you a poisoned setter
Not poisoned for all calls, though -- see the meeting notes from January:
esdiscuss/2013-February/028631
(search for "__proto__" to find the start of the relevant notes).
- __proto__ can be deleted from Object.prototype
- __proto__ in object literals is a special piece of syntax independent from (1)
Should (4) be independent from (3)? Ah, you get to this later.
I think this all makes sense, and we are working towards making V8 implement this. (3) implies that Object.prototype.__proto__ is configurable, which should also imply that you can just remove the setter, not the getter, which I believe addresses Alex' initial request.
The main question I still had was what "poisoning" means. Currently, V8 simply returns an always-throw function. Others (e.g. Brendan) seemed to assume that we only poison sets for other realms.
That was discussed at the January meeting (I forget whether you were there then). Look for ARV in the notes, at the end of the proto discussion.
However, in that case, I actually think that there is no need to have any special poisoning semantics when reflecting __proto__ -- mainly because the cross-realm check is already necessary in the unreflected case: you can construct an object o in realm A with an Object.prototype from another realm B on its proto chain. If you deleted __proto__ on realm A's Object.prototype, I don't think it should still be possible to assign to o.__proto__, should it?
Why not, if in realm A we evaluate 'var o = Object.create(B.Object.prototype)'? You specified 'delete A.Object.prototype' happened, and A.Object.prototype is not on o's proto chain.
If that's so then there is absolutely nothing magic remaining about __proto__ as a property.
Did you get something switched between A & B above? Either that, or I still need coffee!
As for __proto__ in object literals, I definitely agree with Allen and Mark that this should be special syntax, and not dependent on Object.prototype.__proto__, for the reasons mentioned.
Ok, I can go along here. This is quite different from ES5 + de-facto __proto__ as implemented by real engines, but without configurable Object.prototype.__proto__ it has not thus far (AFAIK) been observable -- so no compatibility break.
Whether a difference should be made between quoted and unquoted I don't know, I must have missed the rationale for such a move.
I think we're not going to induce vomiting by making a quoted vs. unquoted distinction, in light of Mark's point about computed property names.
On 22 April 2013 15:49, Brendan Eich <brendan at mozilla.com> wrote:
However, in that case, I actually think that there is no need to have any special poisoning semantics when reflecting __proto__ -- mainly because the cross-realm check is already necessary in the unreflected case: you can construct an object o in realm A with an Object.prototype from another realm B on its proto chain. If you deleted __proto__ on realm A's Object.prototype, I don't think it should still be possible to assign to o.__proto__, should it?
Why not, if in realm A we evaluate var o = Object.create(B.Object.prototype)? You specified delete A.Object.prototype happened, and A.Object.prototype is not on o's proto chain.
My understanding of the motivation for poisoning was to enable the deletion of O.p.__proto__ when configuring a realm as a means for guaranteeing that no object from that realm can ever have its prototype mutated. Allowing the above case would seem to shoot a hole into that.
// Realm A
delete Object.prototype.__proto__ // no messing around
let other = getObjectFromSomewherePotentiallyAnotherRealmB()
let p1 = Object.create(other, {a: {value: 1}})
let o = Object.create(p1)
let p2 = Object.create({})
o.__proto__ = p2 // say what?
Whether a difference should be made between quoted and unquoted I don't know, I must have missed the rationale for such a move.
I think we're not going to induce vomiting by making a quoted vs. unquoted distinction, in light of Mark's point about computed property names.
OK, good. :)
We don't currently have the concept of an object "belonging" to a realm. Functions have a realm association, but non-function objects do not.
Object.create(parent); //we have no way to determine if parent "belongs" to the same realm as Object.create.
We also currently have no way to determine whether the caller of Object.create is in the same or a different realm as Object.create.
someObject.__proto__ = someParent; //the setter function from Object.prototype has no way to determine a realm association for someParent.
let protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set; //a proto setter from some realm
let x = {};
protoSetter.call(x, another); //no realm info to validate.
protoSetter is essentially a universal setPrototypeOf function.
The only way I see to tame any aspect (for example, not allowing Object.prototype.__proto__ = something) of __proto__ setting is via a [[Set]] over-ride on Object.prototype.
then what's the point of poisoning Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set if anyone can always use __proto__ to change the chain?
I don't understand this poisoning ... I don't see any advantage, only problems, if some library would like to drop __proto__ and use that setter for good in edge cases.
Thanks for clarifications
On Mon, Apr 22, 2013 at 1:15 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
We don't currently have the concept of an object "belonging" to a realm. Functions have a realm association, but non-function objects do not.
The current idea on how to solve the security issue with weak references (and AFAIK the only solution that has been suggested) assumes that objects are identified with the realm of their creation, so that we can distinguish intra-realm vs inter-realm pointing. I will let others speak of the implementation cost. Here, I'll see if we can define a simple semantics for an object's realm of origin.
- We agree that functions are identified with a realm. An object literal within a function should evaluate to an object in the same realm as that function.
- Various built-in functions, notably Object.create but also, e.g. Array.prototype.map, create objects. For each such "Chapter 15" builtin function, again the object should be created in the realm of the function that creates it.
What other cases are there? (I thought it would be a longer list ;).)
Ok, I have read more messages on this thread and looked at some of the supporting material that has been pointed at. The notes from the last meeting record a conversation before I arrived, and I'm not quite clear what it says was agreed on. In any case, I think the primary goals should be and seem to have been
- minimize magic
- maximize security
- codify something everyone can agree to implement
The first two goals generally align well anyway. I think this is best served by something that seems at least close to what was agreed on:
- The syntax that we've already agreed to on this thread: {__proto__: ....} is special syntax that initializes the [[Prototype]]. No need for anything even as mildly imperative as [[SetPrototype]].
- { [ "__proto__" ]: .... } is not special in any way, and creates a normal property named "__proto__".
- Every object with a potentially mutable [[Prototype]] must be identified with a realm of origin. (Practically this will be "any object", which is good because that is what Weak References will need anyway.)
- In the initial state of a normal realm, Object.prototype.__proto__ is an accessor property with the descriptor (making up names for the internal functions -- don't take the names seriously):
{ getter: [[ProtoGetter]], setter: [[ProtoSetter]], enumerable: false, configurable: true }
- In this initial state, Object.getOwnPropertyDescriptor(Object.prototype, '__proto__') returns the above descriptor. No magic.
- In this initial state, Object.getOwnPropertyNames(Object.prototype) returns a list which includes the string "__proto__". No magic.
- Likewise for all other reflective operations, including "in". No magic.
- The behavior of [[ProtoGetter]] is approximately
function [[ProtoGetter]] () { return Object.getPrototypeOf(this); }
except of course that it uses the internal function rather than the current binding of Object.getPrototypeOf. Just like Object.getPrototypeOf, this behavior is independent of Realm. It is also independent of whether [[ProtoGetter]] is invoked as an accessor or invoked otherwise, for example by using Function.prototype.call.
- The behavior of [[ProtoSetter]] is approximately
function [[ProtoSetter]] (newValue) {
  if ([[GetRealm]] !== [[GetRealm]]) { // i.e., the realm of |this| vs. the realm of this setter
    throw new TypeError(....); // or should this be RangeError ?
  }
  this.[[SetPrototype]](newValue);
}
This behavior is independent of whether [[ProtoSetter]] is invoked as an accessor or invoked otherwise, for example by using Function.prototype.call.
- Normal objects have a [[SetPrototype]] method like
function [[SetPrototype]] (newValue) {
  // normal checks for proto acceptability
  //   * either null or an object
  //   * would not create an inheritance cycle
  this.[[Prototype]] = newValue;
}
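A userland approximation of the acceptability checks in that last bullet, for illustration only (setPrototypeSketch is a made-up name; the real thing is an internal method):

function setPrototypeSketch(obj, newValue) {
  if (newValue !== null && typeof newValue !== "object" && typeof newValue !== "function") {
    throw new TypeError("prototype must be an object or null");
  }
  // would not create an inheritance cycle
  for (var p = newValue; p !== null; p = Object.getPrototypeOf(p)) {
    if (p === obj) { throw new TypeError("cyclic [[Prototype]] chain"); }
  }
  // obj.[[Prototype]] = newValue -- the assignment itself has no standard ES5 reflection
}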
======== Warning: The rest of this is half baked ============
- Direct proxies have a [[SetPrototype]] method that invokes the handler's "setPrototype" trap. It is the handler's responsibility, not the proxy's, to set the target's [[Prototype]] to newValue. Once the handler returns to the proxy, the proxy checks if target.[[Prototype]] === newValue. If not, it throws. This enforces that a handler can only reflect the mutation of [[Prototype]] transparently if it already has setter which is the capability to do so.
OMG. I omitted the most important constraint:
On Mon, Apr 22, 2013 at 8:11 PM, Mark S. Miller <erights at google.com> wrote: [...]
- Normal objects have a [[SetPrototype]] method like
function [[SetPrototype]] (newValue) {
  // normal checks for proto acceptability
  //   * either null or an object
  //   * would not create an inheritance cycle
  if (! this.[[Extensible]]) {
    throw new TypeError(....);
  }
  this.[[Prototype]] = newValue;
}
This indicates that the rest of my message should be read with salt. I wrote it with as much care and attention. Sigh.
At least this one is minor:
On Mon, Apr 22, 2013 at 8:11 PM, Mark S. Miller <erights at google.com> wrote: [...]
{ getter: [[ProtoGetter]], setter: [[ProtoSetter]], enumerable: false, configurable: true }
Should be
{ get: [[ProtoGetter]], set: [[ProtoSetter]], enumerable: false, configurable: true }
On 22 April 2013 22:15, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
We don't currently have the concept of an object "belonging" to a realm. Functions have a realm association, but non-function objects do not.
I thought the current plan of record is to require this check for reflected uses of protosetter.call? So I don't see how we can avoid introducing that concept (and to answer Mark's question re implementation: in V8 at least it has always been available -- it may be that the DOM already requires it, but I'm not sure).
My only new(?) observation was that we already need that check for unreflected uses if we are serious about maintaining the invariant that I thought we intended to maintain. And as a consequence, we don't actually need any magic for proto. You should like that. :)
On 23 April 2013 05:11, Mark S. Miller <erights at google.com> wrote:
The first two goals generally align well anyway. I think this is best served by something that seems at least close to what was agreed on:
The syntax that we've already agreed to on this thread: {__proto__: ....} is special syntax that initializes the [[Prototype]]. No need for anything even as mildly imperative as [[SetPrototype]].
{ [ "__proto__" ]: .... } is not special in any way, and creates a normal property named "__proto__".
Every object with a potentially mutable [[Prototype]] must be identified with a realm of origin. (Practically this will be "any object", which is good because that is what Weak References will need anyway.)
In the initial state of a normal realm, Object.prototype.__proto__ is an accessor property with the descriptor (making up names for the internal functions -- don't take the names seriously):
{ getter: [[ProtoGetter]], setter: [[ProtoSetter]], enumerable: false, configurable: true }
In this initial state, Object.getOwnPropertyDescriptor(Object.prototype, '__proto__') returns the above descriptor. No magic.
In this initial state, Object.getOwnPropertyNames(Object.prototype) returns a list which includes the string "__proto__". No magic.
Likewise for all other reflective operations, including "in". No magic.
The behavior of [[ProtoGetter]] is approximately
function [[ProtoGetter]] () { return Object.getPrototypeOf(this); }
except of course that it uses the internal function rather than the current binding of Object.getPrototypeOf. Just like Object.getPrototypeOf, this behavior is independent of Realm. It is also independent of whether [[ProtoGetter]] is invoked as an accessor or invoked otherwise, for example by using Function.prototype.call.
The behavior of [[ProtoSetter]] is approximately
function [[ProtoSetter]] (newValue) {
  if ([[GetRealm]] !== [[GetRealm]]) { // the realm of |this| vs. the realm of this setter
    throw new TypeError(....); // or should this be RangeError ?
  }
  this.[[SetPrototype]](newValue);
}
This behavior is independent of whether [[ProtoSetter]] is invoked as an accessor or invoked otherwise, for example by using Function.prototype.call.
Normal objects have a [[SetPrototype]] method like
function [[SetPrototype]] (newValue) {
  // normal checks for proto acceptability
  //   * either null or an object
  //   * would not create an inheritance cycle
  this.[[Prototype]] = newValue;
}
That matches my thinking exactly (modulo the fix in your follow-up).
On Apr 23, 2013, at 3:31 AM, Mark S. Miller wrote:
On Mon, Apr 22, 2013 at 1:15 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote: We don't currently have the concept of an object "belonging" to a realm. Functions have a realm association, but non-function objects do not.
The current idea on how to solve the security issue with weak references (and AFAIK the only solution that has been suggested) assumes that objects are identified with the realm of their creation, so that we can distinguish intra-realm vs inter-realm pointing. I will let others speak of the implementation cost. Here, I'll see if we can define a simple semantics for an object's realm of origin.
We agree that functions are identified with a realm. An object literal within a function should evaluate to an object in the same realm as that function.
Various built-in functions, notably Object.create but also, e.g. Array.prototype.map, create objects. For each such "Chapter 15" builtin function, again the object should be created in the realm of the function that creates it.
What other cases are there? (I thought it would be a longer list ;).)
There are many syntactic constructs that allocate objects (class definitions, template strings, comprehensions, anything that does a ToObject, etc.). Also, many ES6 chapter 15 functions (including all the @@create methods) allocate objects. In both cases, the actual allocations are usually indirected through abstract operations.
All ECMAScript functions and all chapter 15 built-ins belong to a realm, and when such functions are called their realm is recorded as part of a new execution context (in ES6, calling a built-in is specified to create a new execution context). By the time the ES6 spec. is completed, I'm sure that ECMAScript scripts will also belong to realms and have execution contexts. Calling abstract operations doesn't create execution contexts. So, every specified way to create an object occurs within the scope of an execution context that is associated with a specific realm. (However, host or implementation provided exotic function objects that don't use the ordinary [[Call]] are unspecified and might be implemented to do something that doesn't include creating a new execution context.)
So I'm not particularly concerned about the specification complexity of associating every object with a realm. However, I do think that potential implementation impacts should be studied very carefully.
Even if we had per-object realm associations, it isn't clear to me what exactly we are trying to block WRT cross-realm [[Prototype]] chains. Is the assertion that all objects in a [[Prototype]] chain must come from the same realm? Does that mean that we must block creating such chains via Object.create or class declarations or classic constructor functions? It isn't clear to me why such cross-realm chains are necessarily evil. It also isn't clear to me why __proto__ should be prevented from creating them if we don't also prevent all other ways of doing so. In that case, the appropriate place to put semantic restrictions on prototype chain construction/modification is in the semantics of the ordinary object [[SetInheritance]] internal method rather than in individual built-in functions and abstract operations that invoke [[SetInheritance]].
Aren't sandboxed natives a JS technique that actually relies on cross-realm prototype chains?
msdn.microsoft.com/en-us/magazine/gg278167.aspx
My understanding is that they create a separate origin in order to get their own copies of the natives so that they can fiddle with those natives' prototypes without polluting the current page's global namespace/natives. If you were to restrict prototype chains to only containing same-realm objects, that seems like it would break any application that currently uses a technique like this. Or is the intent that realms are a new complement to the cross origin policy, and all iframes in a HTML document would share a realm?
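A minimal sketch of the sandboxed-natives technique referenced above (browser-only; a same-origin iframe is assumed):

var iframe = document.createElement("iframe");
document.documentElement.appendChild(iframe);
var SandboxedArray = iframe.contentWindow.Array; // a different realm's Array
SandboxedArray.prototype.last = function () { return this[this.length - 1]; };
[1, 2, 3].last; // undefined -- the page's own Array.prototype is untouched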
On 23 April 2013 14:54, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Even if we had per-object realm associations, it isn't clear to me what exactly we are trying to block WRT cross-realm [[Prototype]] chains. Is the assertion that all objects in a [[Prototype]] chain must come from the same realm? Does that mean that we must block creating such chains via Object.create or class declarations or classic constructor functions? It isn't clear to me why such cross-realm chains are necessarily evil. It also isn't clear to me why __proto__ should be prevented from creating them if we don't also prevent all other ways of doing so. In that case, the appropriate place to put semantic restrictions on prototype chain construction/modification is in the semantics of the ordinary object [[SetInheritance]] internal method rather than in individual built-in functions and abstract operations that invoke [[SetInheritance]].
Preventing cross-realm prototype chains was not the intention. The intention was to prevent using a __proto__ setter from another realm to mutate a prototype chain in a realm where Object.prototype.__proto__ has been removed.
Here is my example again:
// Realm A
delete Object.prototype.__proto__ // no messing around
let other = getObjectFromSomewherePotentiallyAnotherRealmB()
let p1 = Object.create(other, {a: {value: 1}})
let o = Object.create(p1)
let p2 = Object.create({})
o.__proto__ = p2 // say what?
Everything is fine up to the last step. In particular, o has a cross-realm prototype chain just fine. The only problem is with the last line, which actually makes use of the __proto__ inherited from a different realm to mutate o's prototype. My understanding was that that is the kind of thing we wanted to prevent.
Agreed. Just to clarify though, I'm going to make a pedantic change to your wording ;).
On Tue, Apr 23, 2013 at 6:00 AM, Andreas Rossberg <rossberg at google.com>wrote: [...]
Preventing cross-realm prototype chains was not the intention. The intention was to prevent using a __proto__ setter from another realm to mutate the [[Prototype]] of an object in a realm where Object.prototype.__proto__ has been removed.
(In the case of a cross-realm prototype chain, the notion of "a prototype chain in a realm" is not well defined.)
Here is my example again:
// Realm A
delete Object.prototype.__proto__ // no messing around
let other = getObjectFromSomewherePotentiallyAnotherRealmB()
let p1 = Object.create(other, {a: {value: 1}})
For example, at this point in your scenario, we would have no problem with
other.__proto__ = {a: 2};
However, this is mutating at least one prototype chain that starts in Realm A.
Mark, below what you refer to as [[SetPrototype]] is essentially the [[SetInheritance]] MOP operation in the current spec. draft; there is also a [[GetInheritance]]. It is called Get/SetInheritance because it doesn't necessarily manipulate the [[Prototype]] of the object it is invoked upon (e.g., if it is a Proxy) and for exotic objects property inheritance isn't constrained to use [[Prototype]].
On Apr 23, 2013, at 5:11 AM, Mark S. Miller wrote:
Ok, I have read more messages on this thread and looked at some of the supporting material that has been pointed at. The notes from the last meeting record a conversation before I arrived, and I'm not quite clear what it says was agreed on. In any case, I think the primary goals should be and seem to have been
- minimize magic
- maximize security
- codify something everyone can agree to implement
The first two goals generally align well anyway. I think this is best served by something that seems at least close to what was agreed on:
- The syntax that we've already agreed to on this thread: {__proto__: ....} is special syntax that initializes the [[Prototype]]. No need for anything even as mildly imperative as [[SetPrototype]].
The semantics of the syntax still should be specified in terms of the MOP, as it is in the ordinary object MOP internal methods that we specify such semantics.
- { [ "__proto__" ]: .... } is not special in any way, and creates a normal property named "__proto__".
I don't believe this is legal. Didn't we agree to only support [ ] property keys that evaluate to symbols?
- Every object with a potentially mutable [[Prototype]] must be identified with a realm of origin. (Practically this will be "any object", which is good because that is what Weak References will need anyway.)
- In the initial state of a normal realm, Object.prototype.__proto__ is an accessor property with the descriptor (making up names for the internal functions -- don't take the names seriously):
{ getter: [[ProtoGetter]], setter: [[ProtoSetter]], enumerable: false, configurable: true }
In this initial state, Object.getOwnPropertyDescriptor(Object.prototype, '__proto__') returns the above descriptor. No magic.
In this initial state, Object.getOwnPropertyNames(Object.prototype) returns a list which includes the string "__proto__". No magic.
Likewise for all other reflective operations, including "in". No magic.
So, getOwnPropertyKeys(Object.prototype) is expected to yield "__proto__"
The behavior of [[ProtoGetter]] is approximately
function [[ProtoGetter]] () { return Object.getPrototypeOf(this); }
it would actually be specified in terms of the [[GetInheritance]] MOP operation
except of course that it uses the internal function rather than the current binding of Object.getPrototypeOf. Just like Object.getPrototypeOf, this behavior is independent of Realm. It is also independent of whether [[ProtoGetter]] is invoked as an accessor or invoked otherwise, for example by using Function.prototype.call.
The behavior of [[ProtoSetter]] is approximately
function [[ProtoSetter]] (newValue) {
  if ([[GetRealm]] !== [[GetRealm]]) { // the realm of |this| vs. the realm of this setter
    throw new TypeError(....); // or should this be RangeError ?
  }
  this.[[SetPrototype]](newValue);
}
In the past there were other restrictions that have been suggested. For example, not allowing: Object.prototype.__proto__ = notNull; to do what the above a=names suggest.
Regardless, what is so special about the [[ProtoSetter]] operation that it needs to be restricted in this way? It's just a capability and you know how to control access to capabilities. You also know how to protect objects from having their [[Prototype]] mutated. If I have any object, that inherits from a different realm's Object.prototype I can navigate to its constructor property which gives me access to that other realm's, Object.create, Object[[@@create], and all the other Object.* functions. Why isn't being able to find and apply some other realms Object.free just as scary as finding its [[ProtoSetter]]?
What is Object.free?
Andreas Rossberg wrote:
On 22 April 2013 22:15, Allen Wirfs-Brock<allen at wirfs-brock.com> wrote:
We don't currently have the concept of an object "belonging" to a realm. Functions have a realm association, but non-function objects do not.
I thought the current plan of record is to require this check for reflected uses of protosetter.call? So I don't see how we can avoid introducing that concept (and to answer Mark's question re implementation: in V8 at least it has always been available -- it may be that the DOM already requires it, but I'm not sure).
In practice, all engines must classify objects by realm, somehow. Usually by GC-page-cluster or "compartment".
My only new(?) observation was that we already need that check for unreflected uses if we are serious about maintaining the invariant that I thought we intended to maintain. And as a consequence, we don't actually need any magic for proto. You should like that. :)
Yes! The magic goes away (except that ({__proto__: ...}) is still a special form).
Le 23/04/2013 15:30, Allen Wirfs-Brock a écrit :
- { [ "proto" ]: .... } is not special in any way, and creates a normal property named "proto". I don't believe this is legal. Didn't we agree w to support [ ] property keys that evaluate to symbols.
I don't know what the agreement is, but that would be wise to forbid strings in [ ] propert keys given that ES6 introduces Maps which seem to be a better host for dynamically generated string keys.
Taking bite-sized pieces:
Allen Wirfs-Brock wrote:
- { [ "proto" ]: .... } is not special in any way, and creates a normal property named "proto".
I don't believe this is legal. Didn't we agree w to support [ ] property keys that evaluate to symbols.
No, [n] is good for any computed property name -- evaluating n and if symbol, using that, else (doing the equivalent, e.g., engines optimize indexes) converting to string -- Dave's ToPropertyName from the wiki, is all that's needed here.
I do not recall us ever agreeing that the bracketed property-name-in-literal syntax was only for symbols.
David Bruant wrote:
Le 23/04/2013 15:30, Allen Wirfs-Brock a écrit :
- { [ "proto" ]: .... } is not special in any way, and creates a normal property named "proto". I don't believe this is legal. Didn't we agree w to support [ ] property keys that evaluate to symbols. I don't know what the agreement is, but that would be wise to forbid strings in [ ] propert keys given that ES6 introduces Maps which seem to be a better host for dynamically generated string keys.
What?
Maps are for arbitrary values as keys -- any type.
Objects have string-equated keys so far. In ES6 we sum symbol | string as the property name type. There is no problem with the o = {[n]: v} syntax for computed property name n (an expression; we argued about whether /Expression/ or /AssignmentExpression/ but I forget the outcome) resulting in either a symbol, or else a value equated to a string as in object literals today.
On Apr 23, 2013, at 3:35 PM, Mark Miller wrote:
What is Object.free?
ugh, jet lag...
Object.freeze
On Apr 23, 2013, at 3:50 PM, Brendan Eich wrote:
Taking bite-sized pieces:
Allen Wirfs-Brock wrote:
- { [ "proto" ]: .... } is not special in any way, and creates a normal property named "proto".
I don't believe this is legal. Didn't we agree w to support [ ] property keys that evaluate to symbols.
No, [n] is good for any computed property name -- evaluating n and if symbol, using that, else (doing the equivalent, e.g., engines optimize indexes) converting to string -- Dave's ToPropertyName from the wiki, is all that's needed here.
[n] in object literals and classes has come, gone, and reappeared. It originally allowed strings. I think the last time it reappear (when at-names were dropped) it was only for symbols. I need to go notes digging.
Le 23/04/2013 15:53, Brendan Eich a écrit :
David Bruant wrote:
Le 23/04/2013 15:30, Allen Wirfs-Brock a écrit :
- { [ "proto" ]: .... } is not special in any way, and creates a normal property named "proto". I don't believe this is legal. Didn't we agree w to support [ ] property keys that evaluate to symbols. I don't know what the agreement is, but that would be wise to forbid strings in [ ] propert keys given that ES6 introduces Maps which seem to be a better host for dynamically generated string keys.
What?
Maps are for arbitrary values as keys -- any type.
Objects have string-equated keys so far. In ES6 we sum symbol | string as the property name type. There is no problem with the o = {[n]: v} syntax for computed property name n (an expression; we argued about whether /Expression/ or /AssignmentExpression/ but I forget the outcome) resulting in either a symbol, or else a value equated to a string as in object literals today.
From a language perspective, I agree o = {[n]: v} can work with n as a string. From a developer perspective, I wonder if this isn't confusing as it provides another way to do something that's already possible.
o = {[n]: v}
If n is a string literal, developers might as well get rid of the brackets. In all other cases, the following works already: o = {}; o[n] = v;
If someone feels like doing: o = { [a]: v1, [b]: v2, [c]: v3, [d]: v4, }
Maybe what they want is a Map (dynamically computed strings) or an array (fixed length).
I don't see a use case where dynamic key strings in object literals is a good idea.
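For comparison, the styles being contrasted here, assuming ES6 computed property names and Map (computeKey is a stand-in for some dynamically computed string):

var key = computeKey();
var viaLiteral = { [key]: "value" };                   // computed property name in a literal
var viaAssignment = {}; viaAssignment[key] = "value";  // what already works today
var viaMap = new Map([[key, "value"]]);                // Map: arbitrary keys, no prototype collisions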
On Tue, Apr 23, 2013 at 6:30 AM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
Mark, below what you refer to as [[SetPrototype]] is essentially the [[SetInheritance]] MOP operation in the current spec. draft; there is also a [[GetInheritance]]. It is called Get/SetInheritance because it doesn't necessarily manipulate the [[Prototype]] of the object it is invoked upon (e.g., if it is a Proxy) and for exotic objects property inheritance isn't constrained to use [[Prototype]].
I knew I was using the wrong names and being sloppy about how this relates to the actual MOP. I accept all those corrections. Thanks.
On Apr 23, 2013, at 5:11 AM, Mark S. Miller wrote:
[...]
In this initial state, Object.getOwnPropertyNames(Object.prototype) returns a list which includes the string "__proto__". No magic.
Likewise for all other reflective operations, including "in". No magic.
So, getOwnPropertyKeys(Object.prototype) is expected to yield "__proto__"
Sure. If Object.getOwnPropertyNames returns "__proto__" wouldn't it be bizarre for getOwnPropertyKeys not to?
[...]
The behavior of [[ProtoSetter]] is approximately
function [[ProtoSetter]] (newValue) {
  if ([[GetRealm]] !== [[GetRealm]]) { // the realm of |this| vs. the realm of this setter
    throw new TypeError(....); // or should this be RangeError ?
  }
  this.[[SetPrototype]](newValue);
}
In the past there were other restrictions that have been suggested. For example, not allowing: Object.prototype.__proto__ = notNull; to do what the above a=names suggest.
What is "a=names"?
Regardless, what is so special about the [[ProtoSetter]] operation that it needs to be restricted in this way? It's just a capability and you know how to control access to capabilities. You also know how to protect objects from having their [[Prototype]] mutated. If I have any object, that inherits from a different realm's Object.prototype I can navigate to its constructor property which gives me access to that other realm's, Object.create, Object[[@@create], and all the other Object.* functions. Why isn't being able to find and apply some other realms Object.free[ze] just as scary as finding its [[ProtoSetter]]?
SES includes Object.freeze in its set of universally available primordials. Thus, an Object.freeze from a foreign non-SES-secured realm is not a threat since Object.freeze is already universally available to confined ("untrusted") code within the local SES-secured realm.
I expect SES initialization will delete Object.prototype.__proto__ or at least remove its setter. It is conceivable SES will hold the setter off on the side for some special use. But regardless, it will probably[*] deny these powers to confined code within that SES-secured realm. Thus, SES code, including confined code, should be able to safely assume that the [[Prototype]] of its own objects won't be mutated.
When SES code interacts only with SES code, whether intra or inter realm, then this is the end of the story and the cross-realm threat is not an issue. But SES objects must be able to maintain their own integrity even when exposed to non-SES objects from other frames. That was impossible for ES5/3, the Caja translator from ES5 to ES3, since Object.freeze was emulated and thus only enforced for translated code. As of ES5, Object.freeze protects unconditionally (which was why bugzilla.mozilla.org/show_bug.cgi?id=674195 gave Caja so much trouble).
So if SES code should generally be able to assume that its own [[Prototype]]s are stable, we need to preserve the safety of that assumption when objects from a SES-secured realm are exposed to objects from a non-SES-secured realm.
[*] I say "probably" to hedge my bets. The hard constraint we absolutely require is already guaranteed by ES5: That the [[Prototype]] of a non-extensible object cannot be mutated. Given that, it is possible (though unlikely) that SES will choose to make the setter universally available, in which case you are correct and the inter-realm checks I'm insisting on are for naught.
On 23 April 2013 17:10, Mark S. Miller <erights at google.com> wrote:
[*] I say "probably" to hedge my bets. The hard constraint we absolutely require is already guaranteed by ES5: That the [[Prototype]] of a non-extensible object cannot be mutated.
I'm confused now. How does ES5 guarantee that?
On Apr 23, 2013, at 5:10 PM, Mark S. Miller wrote:
On Tue, Apr 23, 2013 at 6:30 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote: Mark, below what you refer to as [[SetPrototype]] is essentially the [[SetInheritance]] MOP operation in the current spec. draft; there is also a [[GetInheritance]]. It is called Get/SetInheritance because it doesn't necessarily manipulate the [[Prototype]] of the object it is invoked upon (e.g., if it is a Proxy) and for exotic objects property inheritance isn't constrained to use [[Prototype]].
In the past there were other restrictions that have been suggested. For example, not allowing: Object.prototype.__proto__ = notNull; to do what the above a=names suggest.
What is "a=names"?
just "names". In other words is it ok set the [[Prototype]] of Object.prototype to something other than null.
Regardless, what is so special about the [[ProtoSetter]] operation that it needs to be restricted in this way? It's just a capability and you know how to control access to capabilities. You also know how to protect objects from having their [[Prototype]] mutated. If I have any object, that inherits from a different realm's Object.prototype I can navigate to its constructor property which gives me access to that other realm's, Object.create, Object[[@@create], and all the other Object.* functions. Why isn't being able to find and apply some other realms Object.free[ze] just as scary as finding its [[ProtoSetter]]?
SES includes Object.freeze in its set of universally available primordials. Thus, an Object.freeze from a foreign non-SES-secured realm is not a threat since Object.freeze is already universally available to confined ("untrusted") code within the local SES-secured realm.
I expect SES initialization will delete Object.prototype.__proto__ or at least remove its setter. It is conceivable SES will hold the setter off on the side for some special use. But regardless, it will probably[*] deny these powers to confined code within that SES-secured realm. Thus, SES code, including confined code, should be able to safely assume that the [[Prototype]] of its own objects won't be mutated.
When SES code interacts only with SES code, whether intra or inter realm, then this is the end of the story and the cross realm threat is not an issue. But SES objects must be able to maintain their own integrity even when exposed to non-SES objects from other frames. That was impossible for ES5/3, the Caja translator from ES5 to ES3, since Object.freeze was emulated, and thus only enforced for translated code. As of ES5, Object.freeze protects unconditionally (which was why bugzilla.mozilla.org/show_bug.cgi?id=674195 gave Caja so much trouble).
So if SES code should generally be able to assume that its own [[Prototype]]s are stable, we need to preserve the safety of that assumption when objects from a SES-secured realm are exposed to objects from a non-SES-secured realm.
Which you would just do by making your objects non-extensible, right? That does more than just limit setting of [[Prototype]], but it does that job too. I've previously proposed we have an independent per-object immutablePrototype state which would be more targeted, but that didn't find any support.
[*] I say "probably" to hedge my bets. The hard constraint we absolutely require is already guaranteed by ES5: That the [[Prototype]] of a non-extensible object cannot be mutated. Given that, it is possible (though unlikely) that SES will choose to make the setter universally available, in which case you are correct and the inter-realm checks I'm insisting on are for naught.
Since you've decided it's ok to make Object.freeze, defineProperty, etc. universally available why wouldn't you also make setPrototypeOf (ie, [[ProtoSetter]]) universally available and just preventExtensions on object you need to protect from that. It it just the general distaste for proto that most of us share with you?
You haven't yet convinced me that there is actually a need for these realm restrictions on [[ProtoSetter]], and for something seemingly so arbitrary we should really have a strong case for why it is important. The ES world would be simpler and cleaner without the restrictions and there would be no particular reason for not making Object.setPrototypeOf available as an alternative API for those that prefer it.
On Apr 23, 2013, at 5:18 PM, Andreas Rossberg wrote:
On 23 April 2013 17:10, Mark S. Miller <erights at google.com> wrote:
[*] I say "probably" to hedge my bets. The hard constraint we absolutely require is already guaranteed by ES5: That the [[Prototype]] of a non-extensible object cannot be mutated.
I'm confused now. How does ES5 guarantee that?
See ecma-international.org/ecma-262/5.1/#sec-8.6.2 third paragraph beyond table 8
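[Editor's note: a minimal sketch of the invariant being cited. The TypeError here comes from a throwing __proto__ setter, which is what engines of the time shipped de facto; exact error wording varies by engine.]

"use strict";
var obj = {};
Object.preventExtensions(obj);

try {
  obj.__proto__ = Array.prototype; // [[Prototype]] of a non-extensible object can't be changed
} catch (e) {
  console.log(e instanceof TypeError); // true
}

console.log(Object.getPrototypeOf(obj) === Object.prototype); // still true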
Discussion oriented to SES again; I hope this won't be spec'd blindly after some SES requirement that might be very different from, let's say, Node.js requirements, where the concept of security is not about evaluating unknown code at runtime ... right? :-)
I keep being amazed by how many problems inheritance is causing in specs.
Meanwhile, in a parallel ES3 like Universe:
delete Object.prototype.__proto__;
function AnotherObject(){} AnotherObject.prototype = AnotherProto = frames[0].Object.prototype;
var o = new AnotherObject; o.__proto__ = whatever;
"be careful what you wish" ... if it's about making things that hard on server side JS too.
V8 apparently won't accept even a flag for this, regardless of the fact that whatever decision is made has zero side effects on the web. code.google.com/p/v8/issues/detail?id=2645
This is bad, IMHO!
You are filing the wrong bug, asking for the wrong thing.
What V8 implemented was based on a misunderstanding of the January tentative consensus. The setter should not always throw.
We're working through the details to re-establish consensus here, in advance of the mid-May meeting. In the mean time, I suggest you not file v8 issues.
On 04/21/2013 03:27 PM, Mark S. Miller wrote:
Warning: The following is a sickening idea. I would really hate to see us do it. But I feel obliged to post it as it may in fact be the right thing to do.
This suggests that, in JS as well, the "__proto__" in {...., "__proto__": ...., ....} not be treated as a special case. Quoting it turns off the special treatment.
For the lulz, what do these print in engines?
print(eval('[{"proto": 17}]')[0].hasOwnProperty("proto")); print(eval('[{"proto":0x17}]')[0].hasOwnProperty("proto"));
And considering what motivated real-world engine behaviors here, what constraints might SunSpider possibly imply? (Conceivably none, to be sure, although I have my doubts a strstr could be eaten here. Or maybe just another mode for the parser. Ha, ha, ha. Lulz, I told you, lulz!)
Then is this one even more "Ha, ha, ha. Lulz, I told you, lulz!"?
'__proto__' in {__proto__:null,"__proto__":null}
To clarify, since I was perhaps "somewhat" terse here. :-)
print(eval('[{"proto": 17}]')[0].hasOwnProperty("proto")); print(eval('[{"proto":0x17}]')[0].hasOwnProperty("proto"));
SunSpider uses eval() on JSONish input, so engines have to make that fast. Most/all engines for potential JSON-looking input (wrapped in '()' or '[]') attempt to JSON-parse the string before doing a full parse. If the JSON-parse fails (probably quickly, if it does), fall back to a full parse. If it succeeds, yay, you probably saved a bunch of time, return the resulting value.
(It's no longer that simple, of course. |"use strict"; eval('({"x":2,"x":4})');| must throw a SyntaxError for duplicate property, so the hack can't be used if the caller's strict. And then timelessrepo.com/json-isnt-a-javascript-subset observed that JSON allows U+2028 and U+2029 where JS doesn't. So at least SpiderMonkey does a linear search for them in the string [post-JSON-parse] and falls back to the main JS parser if either's found.)
The weird behavior is because JSON-parsing treats __proto__ as a regular old property. (The second case isn't JSON: hex's forbidden.) If the engine hacks JSON parsing into eval, and implements JSON.parse('{"__proto__":2}').hasOwnProperty("__proto__") correctly, you get this oddity.
The obvious workaround is to search for __proto__ in the eval-string and not JSON-parse if it's found. I have my doubts the SunSpider-score hit for the strstr over the entire input string can be eaten, but I could be mistaken.
Summary: horrible pile of hacks for a, er, venerable benchmark.
The reason for mentioning this here/now is that if quotes-means-[[DefineOwnProperty]] were standard, using the JSON parser on eval input would be fine here, and all complexity of this quirk would disappear. Or we could just add yet more modes of JSON-looking parsing. Or something. Blargh.
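[Editor's note: a small sketch of the split Jeff describes, assuming an engine with spec-conformant JSON.parse plus the de facto __proto__ literal behavior; JSON.parse goes through [[DefineOwnProperty]], while the evaluated literal hits the special form.]

// JSON.parse always defines an own property and leaves [[Prototype]] alone.
var fromJson = JSON.parse('{"__proto__": 2}');
console.log(fromJson.hasOwnProperty("__proto__"));                 // true
console.log(Object.getPrototypeOf(fromJson) === Object.prototype); // true

// An object literal evaluated as code hits the __proto__ special form.
var fromLiteral = { __proto__: null };
console.log(Object.getPrototypeOf(fromLiteral)); // null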
Le 23/04/2013 17:52, Allen Wirfs-Brock a écrit :
On Apr 23, 2013, at 5:10 PM, Mark S. Miller wrote:
[*] I say "probably" to hedge my bets. The hard constraint we absolutely require is already guaranteed by ES5: That the [[Prototype]] of a non-extensible object cannot be mutated. Given that, it is possible (though unlikely) that SES will choose to make the setter universally available, in which case you are correct and the inter-realm checks I'm insisting on are for naught.
Since you've decided it's ok to make Object.freeze, defineProperty, etc. universally available, why wouldn't you also make setPrototypeOf (i.e., [[ProtoSetter]]) universally available and just preventExtensions on objects you need to protect from that? Is it just the general distaste for __proto__ that most of us share with you?
You haven't yet convinced me that there is actually a need for these realm restrictions on [[ProtoSetter]], and for something seemingly so arbitrary we should really have a strong case for why it is important. The ES world would be simpler and cleaner without the restrictions and there would be no particular reason for not making Object.setPrototypeOf available as an alternative API for those that prefer it.
This certainly begs for a longer and more educated answer than I can provide, but I believe one issue is that as far as web browsers are concerned, new realms can be created dynamically (document.createElement('iframe')). So if you have a piece of code in which you assume [[ProtoSetter]] is generally not available (because you deleted it or kept it for yourself), then this assumption can be broken by creating a new realm, extracting its [[ProtoSetter]] and using it in ways you didn't expect.
Now, this all relies on the ability for a partially trusted third party to create a new realm. If it is assumed that any code can create new realms without restrictions, then a realm check in [[ProtoSetter]] is necessary. However, if the ability to create new realms is kept under control, then the realm check in [[ProtoSetter]] should not be necessary.
For the browser, it may be possible to confine code, list all capabilities that create new realms, and prohibit access to the [[ProtoSetter]] of any new confined realm. I think it should be possible, but same-origin iframes are a headache. Basically, document.createElement('iframe').contentWindow does not give access to just one new realm. A new realm is created anytime the iframe is navigated [1] to a new resource (iframe.src is set to a new value, or whatever inside the iframe caused it to be navigated, etc.). In that case, is it always possible for the confiner to access the new realm's [[ProtoSetter]] before anyone else? ("iframe confinement question" hereafter) I don't have the answer to this question. I'm particularly worried about scenarios where an iframe would open, run some code unconfined, "hide" [[ProtoSetter]] somewhere, and pass the capability to supposedly confined code later down the road. Here, the important question is: can same-origin iframe code run unconfined?
If the answer to the "iframe confinement question" is yes, then new realms can be confined and [[ProtoSetter]] is under control and the [[ProtoSetter]] realm check isn't necessary. If the answer is no, then the [[ProtoSetter]] realm check is necessary.
David
[1] www.whatwg.org/specs/web-apps/current-work/multipage/history.html#navigate
Earlier private correspondence with Allen about a line of reasoning I promised to write up and post to es-discuss. I still haven't found the time to write this up, so simply posting this correspondence in the meantime.
[with a slight bit of editing]
---------- Forwarded message ---------- From: Mark S. Miller <erights at google.com>
Date: Sun, Apr 28, 2013 at 10:52 AM Subject: Re: B.3.1 The __proto__ pseudo property To: Allen Wirfs-Brock <allen at wirfs-brock.com>
On Sun, Apr 28, 2013 at 10:41 AM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
(private)
Do you think we can come to some sort of agreement, as discussed below, that [[ProtoSetter]] doesn't need to be realm restricted? Such an agreement would let us write the simplest possible specification of __proto__.
Very timely question. I've discussed this with the other Cajadores and the answer is yes. While the realm restriction helps security in some ways, it doesn't help a lot, and it actually hurts in other ways. Such simplicity itself is of benefit to security, and weighs in the overall tradeoff. On balance we're better off without it. I'll be posting publicly on this soon.
It would eliminate having to introduce general object/realm associations into the spec, including whatever unanticipated complications come with them.
We will still need this in ES7 to support weak references, but it is good we can postpone till ES7.
It would also remove all obstacles to having Object.setPrototypeOf, which a number of vocal community members would really prefer to have built-in and available rather than having to use __proto__ ugliness.
Yes. All objections to this disappear. And likewise for having proxies handle trapping __proto__ changes differently from their handling of other changes.
I'd really like to get resolution on this so we can get it out of the way once and for all and concentrate on higher leverage spec. work.
What do you think?
Happiness all around !
Allen
(PS, if you don't think [[PreventExtensions]] gives SES fine-grained enough control on [[ProtoSetter]] then I'd vastly prefer having a per-object [[PreventProtoTampering]] over some sort of realm-based control.)
That would help some. But the pressure for it from our new understanding of the security implications isn't high, and so probably not high enough to cause it to happen. I'm fine without this extra bit.
Do you think we can come to some sort of agreement, as discussed below, that [[ProtoSetter]] doesn't need to be realm restricted? Such an agreement would let us write the simplest possible specification of __proto__.

Very timely question. I've discussed this with the other Cajadores and the answer is yes. While the realm restriction helps security in some ways, it doesn't help a lot, and it actually hurts in other ways. Such simplicity itself is of benefit to security, and weighs in the overall tradeoff. On balance we're better off without it. I'll be posting publicly on this soon.
...
It would also remove all obstacles to having Object.setPrototypeOf, which a number of vocal community members would really prefer to have built-in and available rather than having to use __proto__ ugliness.
Yes. All objections to this disappear. And likewise for having proxies handle trapping proto changes differently from their handling of other changes.
If I didn't misinterpret, this sounds like a very, very welcome discussion -- one for which I would like to restate that I have a real use-case which is not 100% solvable with realm-confined __proto__ [1].
I would like to add that, should setPrototypeOf be admitted, it should work on objects which don't inherit from Object.prototype in order to settle my use-case (and also from a purist's point of view of how the language should behave). If setPrototypeOf is not admitted, I would hope that at least __proto__ will be a setter which can be retrieved with getOwnPropertyDescriptor and applied to objects which don't inherit from Object.prototype.
Please keep up the discussions around this issue!
On Tue, May 7, 2013 at 11:09 AM, Nathan Wall <nathan.wall at live.com> wrote:
If I didn't misinterpret, this sounds like a very, very welcome discussion -- one for which I would like to restate that I have a real use-case which is not 100% solvable with realm-confined __proto__ [1]. I would like to add that, should setPrototypeOf be admitted, it should work on objects which don't inherit from Object.prototype in order to settle my use-case (and also from a purist's point of view of how the language should behave). If setPrototypeOf is not admitted, I would hope that at least __proto__ will be a setter which can be retrieved with getOwnPropertyDescriptor and applied to objects which don't inherit from Object.prototype.
Agreed on both. The only restriction we need is the one that ES5 already gives us: You can't change the [[Prototype]] of a non-extensible object.
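[Editor's note: a minimal sketch of that single restriction, expressed with the Object.setPrototypeOf API being proposed in this thread, which is how it later shipped: an extensible target can have its [[Prototype]] changed, a non-extensible one cannot.]

var extensible = {};
Object.setPrototypeOf(extensible, Array.prototype); // allowed
console.log(extensible instanceof Array);           // true

var locked = Object.preventExtensions({});
try {
  Object.setPrototypeOf(locked, Array.prototype);   // rejected
} catch (e) {
  console.log(e instanceof TypeError);              // true
}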
Fine for non-extensible objects, but you might desire to keep a dictionary a dictionary, allowing property extensions while avoiding hot-swapped inheritance.
2 options in my mind:
- Object.freezeInheritance(generic), setting a [[MutablePrototype]] internal property to false (true by default)
- a new native constructor such as Dict/Dictionary with an immutable [[Prototype]] as the exception
I'd rather prefer the first option, so that the second constructor is easy to implement and any other object can still be used and marked as safe in terms of inheritance.
This would be really nice, IMHO!
So here is the plan that I'll review at the next TC39 meeting:
- Add Object.setPrototypeOf(obj, proto). obj must be extensible in order to change its [[Prototype]]. There are no realm restrictions. It's just like all the other Object.* methods in operating on any object, independent of realm association.

- Object.prototype.__proto__ is moved back to Annex B. It is defined as an accessor property with attributes {enumerable: true, configurable: true}. The get and set functions are defined equivalently to Object.getPrototypeOf and Object.setPrototypeOf. No realm restrictions. No reflection restrictions. Object.getOwnPropertyNames(Object.prototype) includes "__proto__".

- __proto__ as a property key in an object literal (but not a class definition) is syntax with special semantics of setting the literal object's [[Prototype]] when it is created. It is a clause 11 feature and is not tied to the presence of Object.prototype.__proto__.

- Both Object.setPrototypeOf and Object.prototype.__proto__ are defined in terms of the [[SetInheritance]]/[[GetInheritance]] MOP operations (the names can still change). There are corresponding Proxy traps. There are no exceptional restrictions placed on the handlers. Just the normal invariants. In particular, if the target is non-extensible then the [[SetInheritance]] Proxy handler can't change the observable [[GetInheritance]] result for the proxy object.
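[Editor's note: for concreteness, a hedged sketch of how the third item behaves under the eventual ES6 semantics; the computed-key contrast is an illustration, not something spelled out in Allen's plan.]

var proto = { kind: "proto" };

// Special form: sets the new object's [[Prototype]] at creation time.
var a = { __proto__: proto };
console.log(Object.getPrototypeOf(a) === proto); // true
console.log(a.hasOwnProperty("__proto__"));      // false

// Computed key (ES6): plain [[DefineOwnProperty]], so it creates an own property instead.
var b = { ["__proto__"]: proto };
console.log(Object.getPrototypeOf(b) === proto); // false
console.log(b.hasOwnProperty("__proto__"));      // true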
On Tue, May 7, 2013 at 1:52 PM, Allen Wirfs-Brock <allen.wirfsbrock at gmail.com> wrote:
So here is the plan that I'll review at the next TC39 meeting:
- Add Object.setPrototypeOf(obj, proto). obj must be extensible in order to change its [[Prototype]]. There are no realm restrictions. It's just like all the other Object.* methods in operating on any object, independent of realm association.
+1
Object.prototype.__proto__ is moved back to Annex B.

Since __proto__, unlike __defineGetter__, provides functionality that is otherwise unavailable, all JS platforms will treat it as mandatory whether we put it into Appendix B or the main text. At this point, I think moving this back to Appendix B would be an obviously meaningless gesture.
It is defined as an accessor property with attributes {enumerable: true, configurable: true}. The get and set functions are defined equivalently to Object.getPrototypeOf and Object.setPrototypeOf. No realm restrictions. No reflection restrictions. Object.getOwnPropertyNames(Object.prototype) includes "__proto__".
+1
__proto__ as a property key in an object literal (but not a class definition) is syntax with special semantics of setting the literal object's [[Prototype]] when it is created. It is a clause 11 feature and is not tied to the presence of Object.prototype.__proto__.
I hadn't thought about this irregularity if it appears within a class definition. That aside, +1.
- Both Object.setPrototypeOf and Object.prototype.__proto__ are defined in terms of the [[SetInheritance]]/[[GetInheritance]] MOP operations (the names can still change). There are corresponding Proxy traps. There are no exceptional restrictions placed on the handlers. Just the normal invariants. In particular, if the target is non-extensible then the [[SetInheritance]] Proxy handler can't change the observable [[GetInheritance]] result for the proxy object.
+1. Excellent!
On Tue, May 7, 2013 at 1:59 PM, Mark S. Miller <erights at google.com> wrote:
On Tue, May 7, 2013 at 1:52 PM, Allen Wirfs-Brock <allen.wirfsbrock at gmail.com> wrote:
Object.prototype.__proto__ is moved back to Annex B.

Since __proto__, unlike __defineGetter__, provides functionality that is otherwise unavailable, all JS platforms will treat it as mandatory whether we put it into Appendix B or the main text. At this point, I think moving this back to Appendix B would be an obviously meaningless gesture.

My "since" is incorrect, as the functionality is available via Object.setPrototypeOf. Nevertheless, I still think this would be a meaningless gesture. OTOH, since it is meaningless, it is also mostly harmless.
Looks like a very clean solution. The only thing I’m not entirely convinced about is Object.setPrototypeOf()...
... given how one is normally discouraged from using such functionality (= __proto__ as a setter) and ... given that the most frequent use case goes away in ES6 (thanks to it allowing one to subclass built-ins).
Hence, honest question: Does it make sense to expose a new API for something that is mainly used for hacks? If you really needed it, you could retrieve it like this:
const mySetPrototypeOf = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;
// mySetPrototypeOf.call(obj, aProto);
// Alternatively: mySetPrototypeOf = Function.prototype.call.bind(...)
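[Editor's note: a hedged sketch of how the retrieved setter could be used, assuming Object.prototype.__proto__ is exposed as the accessor described in Allen's plan; the bound form is roughly what a built-in Object.setPrototypeOf would provide.]

// Grab the setter off Object.prototype.__proto__ once.
const protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;

// Apply it to a specific object.
const obj = {};
protoSetter.call(obj, Array.prototype);
console.log(obj instanceof Array); // true

// Or bind it into a standalone two-argument function.
const mySetPrototypeOf = Function.prototype.call.bind(protoSetter);
mySetPrototypeOf(obj, null);
console.log(Object.getPrototypeOf(obj)); // null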
Mark S. Miller wrote:
Object.prototype.__proto__ is moved back to Annex B.

Since __proto__, unlike __defineGetter__, provides functionality that is otherwise unavailable, all JS platforms will treat it as mandatory whether we put it into Appendix B or the main text. At this point, I think moving this back to Appendix B would be an obviously meaningless gesture.

My "since" is incorrect, as the functionality is available via Object.setPrototypeOf. Nevertheless, I still think this would be a meaningless gesture. OTOH, since it is meaningless, it is also mostly harmless.

Having __proto__ in the main spec be a special form when used as a property name in an object literal, but relegating Object.prototype.__proto__ to Annex B, seems inconsistent just on that basis, too. One place or the other -- main spec or Annex B -- but not both.
The special syntax can't go into Annex B; it must remain in the main text. Allen's message agrees with this. I agree that consistency suggests that the property go in the main text, but doesn't demand it. What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win.
On 8 May 2013 07:10, Mark Miller <erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win.
JavaScript implementations in new or existing ecosystems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody.
On May 8, 2013, at 12:01 AM, Andreas Rossberg wrote:
On 8 May 2013 07:10, Mark Miller <erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win.
JavaScript implementations in new or existing eco systems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody.
+1
The object literal special form could go either place. I think the O.p properties should be in Annex B and I'm fine with also placing the object literal __proto__ special form there too, based on a consistency argument.
What about your triangle argument?
On May 8, 2013, at 8:31 AM, Mark Miller wrote:
What about your triangle argument?
There is another way:
let obj = Object.setPrototypeOf({x:0, y:0}, pointProto);
Let's keep {__proto__: foo} in the slightly disrespectable Annex B box. That keeps it together with O.p.__proto__ and leaves room for future, more elegant object literal syntax extensions if we decide we really need them (and we probably won't).
On 8 May 2013 17:41, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
On May 8, 2013, at 8:31 AM, Mark Miller wrote:
What about your triangle argument?
There is another way:
let obj = Object.setPrototypeOf({x:0, y:0}, pointProto);
Let's keep {__proto__: foo} in the slightly disrespectable Annex B box. That keeps it together with O.p.__proto__ and leaves room for future, more elegant object literal syntax extensions if we decide we really need them (and we probably won't).
Isn't Object.create the proper alternative to both {__proto__: } and triangle for objects? What has setPrototypeOf got to do with it? (And why is that on the table all of a sudden?)
Le 08/05/2013 16:46, Andreas Rossberg a écrit :

Isn't Object.create the proper alternative to both {__proto__: } and triangle for objects? What has setPrototypeOf got to do with it? (And why is that on the table all of a sudden?)

Object.create only creates "normal" objects, not arrays, functions, dates, etc.
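[Editor's note: a short sketch of David's point, using arrays as the example: Object.create can put Array.prototype on the prototype chain, but it still produces an ordinary object, not an array exotic object.]

// Right prototype chain, but none of the exotic array behavior.
var arrayish = Object.create(Array.prototype);
arrayish[0] = "a";
arrayish[1] = "b";
console.log(arrayish instanceof Array); // true
console.log(Array.isArray(arrayish));   // false
console.log(arrayish.length);           // 0 -- no automatic length tracking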
Le 08/05/2013 08:01, Andreas Rossberg a écrit :
On 8 May 2013 07:10, Mark Miller <erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win. JavaScript implementations in new or existing eco systems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody.
What are you saying? V8 releases versions where annoying and ugly de facto standards are out so that software built on top of Node.js, MongoDB and other embedders only use a cleaner JS? Awesome! ;-)
On 5/8/2013 9:00 AM, David Bruant wrote:
Le 08/05/2013 16:46, Andreas Rossberg a écrit :
Isn't Object.create the proper alternative to both {proto: } and triangle for objects? What has setPrototypeOf got to do with it? (And why is that on the table all of a sudden?) Object.create only creates "normal" objects, not arrays, functions, dates, etc.
Same is true of __proto__ in object literals.

The benefit of __proto__ in literals is the succinctness of it. Using Object.create to create new objects requires using descriptors which are horribly verbose.
Circle.prototype = {
__proto__: Shape,
constructor: Circle,
/* etc. */
};
This becomes much less important with classes.
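[Editor's note: for contrast, a sketch of the Object.create form Brandon is alluding to, mirroring the literal above; Shape and Circle are hypothetical stand-ins so the snippet runs on its own. Every property needs a full descriptor, which is what makes it verbose.]

// Hypothetical stand-ins for the identifiers used in the literal above.
function Shape() {}
function Circle() {}

// Same result as the literal, but spelled with descriptors.
Circle.prototype = Object.create(Shape, {
  constructor: {
    value: Circle,
    writable: true,
    enumerable: true,
    configurable: true
  }
  /* etc. */
});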
On 8 May 2013 18:06, David Bruant <bruant.d at gmail.com> wrote:
Le 08/05/2013 08:01, Andreas Rossberg a écrit :
On 8 May 2013 07:10, Mark Miller <erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win.
JavaScript implementations in new or existing eco systems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody.
What are you saying? V8 releases versions where annoying and ugly de facto standards are out so that software built on top of Node.js, MongoDB and other embedders only use a cleaner JS? Awesome! ;-)
That would be an option -- I'd very much like to move some of these things behind a flag.
I proposed a flag for a reusable setter; they told me they have no interest in fragmenting the language behind these kinds of flags ...
To all: a new API is also more suitable for shims/polyfills, something a broken/partial implementation of the __proto__ setter descriptor cannot replace, so as a direction it is cleaner and easier to adopt/shim:
(function (O, p) {
  O[p] || (O[p] = function (o, proto) {
    o.__proto__ = proto;
    return o;
  });
}(Object, 'setPrototypeOf'));
Anything any web page could put in without problems (IE10 and lower needs a reference swap and a manual implementation of __proto__ through new and a loop over getOwnPropertyNames descriptors, but this is another story; plus this is still easier to shim via an Object.* function instead of a magic property in Object.prototype).
Otherwise, try to deal with a broken implementation of Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set with try/catch to see if it's poisoned or not; as a migration path or cleaner way to do the same thing, that fails in simplicity and portability.
Side note: I still would like to see, in any debugger, tons of warnings when anything not yet standard, or marked as deprecated on MDN or anywhere else in the specs, is used.
Last, but not least, I am very happy about this direction, you all know that, so thanks
On May 8, 2013, at 8:46 AM, Andreas Rossberg wrote:
Isn't Object.create the proper alternative to both {__proto__: } and triangle for objects? What has setPrototypeOf got to do with it? (And why is that on the table all of a sudden?)
I think that Brandon Benvie adequately addressed Object.create.

Regarding setPrototypeOf, once Mark agreed that the [[ProtoSetter]] function did not need to be Realm restricted, it essentially became a publicly available API for modifying the [[Prototype]] of arbitrary objects:
Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set.call(obj, proto)
There is a vocal part of the JS community who would prefer that the core language also offer Object.setPrototypeOf as the preferred alternative to the above:

Object.setPrototypeOf(obj, proto)
This is only a cosmetic difference. But I agree that it is good cosmetics. Dynamic prototype modification seems to have won as a required feature of the language. Since that is the case, consistency suggests that we should treat it cosmetically just like all the dynamic reflection operations defined on Object.
Andreas Rossberg wrote:
On 8 May 2013 18:06, David Bruant<bruant.d at gmail.com> wrote:
Le 08/05/2013 08:01, Andreas Rossberg a écrit :
On 8 May 2013 07:10, Mark Miller<erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win. JavaScript implementations in new or existing eco systems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody. What are you saying? V8 releases versions where annoying and ugly de facto standards are out so that software built on top of Node.js, MongoDB and other embedders only use a cleaner JS? Awesome! ;-)
That would be an option -- I'd very much like to move some of these things behind a flag.
But not __proto__ -- dream on if you think that is going away any time soon!
I see two problems:
- Dumping stuff into Annex B to show disdain. This is pride, bad for the soul.

- More important: people port code from the web. In what future super-web will we start fresh?
Let's do right by implementations and users, and not pretend mandatory stuff is optional. Let's not try to polish a turd, and actively be prideful about the result -- especially if our officially better polished form is as verbose as Object.setPrototypeOf or Object.create in practice.
Having Object.setPrototypeOf to match Object.getPrototypeOf is nice, better for proxies (with necessary changes to them), and polyfillable.

Take my last note as an attitude adjustment, though. So long as __proto__ endures, its brevity and legacy uses will tend to propagate its use into the future.

In that light, pushing everything but the object literal __proto__ special form into Annex B still rubs me the wrong way. I'd do both O.p.__proto__ and the special form in the main spec, or both in Annex B (to make Andreas happy ;-). Not split them up.
Call me crazy but I can picture a world where you have to explicitly shim in __proto__ (using Object.setPrototypeOf) if you really need it. Not anytime soon, sure, but maybe one day. Maybe...
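[Editor's note: a hedged sketch of what such a shim might look like, assuming an environment that exposes Object.getPrototypeOf/Object.setPrototypeOf but not the legacy accessor.]

// Only install the accessor if it is genuinely missing.
if (!Object.getOwnPropertyDescriptor(Object.prototype, '__proto__')) {
  Object.defineProperty(Object.prototype, '__proto__', {
    configurable: true,
    enumerable: false,
    get: function () { return Object.getPrototypeOf(this); },
    set: function (proto) { Object.setPrototypeOf(this, proto); }
  });
}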
Dean Landolt wrote:
Call me crazy but I can picture a world where you have to explicitly shim in __proto__ (using Object.setPrototypeOf) if you really need it. Not anytime soon, sure, but maybe one day. Maybe...

Who can say? It's fruitless to speculate idly. Want to bet?

But aside from wagers, in the foreseeable future, we need to spec __proto__ somewhere.
On 05/08/2013 01:58 PM, Brendan Eich wrote:
- Dumping stuff into Annex B to show disdain. This is pride, bad for the soul.
"Pride" doesn't seem like a reason one way or the other, to me. The reason would be to cordon off functionality whose mis-performance developers will not intuitively understand, so that they're less likely to use it. Some will, even still, perhaps just out of obstinacy ("pride", even, that they hacked their way to the tiniest solution :-) ). But some will take a second look, learn the reasons it's undesirable, and not use it.
- More important: people port code from the web. In what future super-web will we start fresh?
How much code gets ported from the web? Most libraries I can think of are pretty intricately tied to the event loop, the DOM, browser-isms like window.atob/btoa, and any number of other things. The true reason is that new environments may spin up that don't care about code ported from the web. SpiderMonkey even supports this with the __proto__ feature-disabling macro. Supposing V8 didn't have __proto__, would Node have been less successful? I can't see __proto__ as a dealbreaker for the success or failure of JS embeddings. Or Object.setPrototypeOf, either, for that matter.
I'd Annex-B the whole lot of this, if I were putting it anywhere in the spec. (Probably even the object-literal and [[SetInheritance]] bits of it, too, although these would be somewhat awkward out of line like that.)
On May 8, 2013, at 2:05 PM, Brendan Eich wrote:
Having Object.setPrototypeOf to match Object.getPrototypeOf is nice, better for proxies (with necessary changes to them), and polyfillable.
Take my last note as an attitude adjustment, though. So long as __proto__ endures, its brevity and legacy uses will tend to propagate its use into the future.

In that light, pushing everything but the object literal __proto__ special form into Annex B still rubs me the wrong way. I'd do both O.p.__proto__ and the special form in the main spec, or both in Annex B (to make Andreas happy ;-). Not split them up.
Putting them both in Annex B is my proposal, as of my 8:41 AM message.
Jeff Walden wrote:
On 05/08/2013 01:58 PM, Brendan Eich wrote:
- Dumping stuff into Annex B to show disdain. This is pride, bad for the soul.
"Pride" doesn't seem like a reason one way or the other, to me.
Good.
The reason would be to cordon off functionality whose mis-performance
Why would Object.setPrototypeOf have any better perf?
developers will not intuitively understand, so that they're less likely to use it. Some will, even still, perhaps just out of obstinacy ("pride",
I think you missed that that was directed at TC39ers, not developers.
even, that they hacked their way to the tiniest solution :-) ). But some will take a second look, learn the reasons it's undesirable, and not use it.
And not use Object.setPrototypeOf?
Meanwhile, if V8 devs would like to play with the "who knows if/where to put it" proposal, I've landed a patch that removes the __proto__ poisoning from V8 and adds, with all the tests I could think of, a native Object.setPrototypeOf: code.google.com/p/v8/issues/attachmentText?id=2675&aid=26750000000&name=set-prototype-of.patch&token=EgU0cdPaKMv9s2o9u0j57Mgu53A%3A1368055395105
premature, I know, but I could not resist :D
On 05/08/2013 04:10 PM, Brendan Eich wrote:
Why would Object.setPrototypeOf have any better perf?
It wouldn't.
developers will not intuitively understand, so that they're less likely to use it. Some will, even still, perhaps just out of obstinacy ("pride",
I think you missed that that was directed at TC39ers, not developers.
Some developers look at language specs, so spec position does provide meager influence that way. Documentation authors are the likelier target. They're going to look at specs to figure out what the methods do, to a much greater extent. Positioning in Annex B and not in the main flow sends a small message that something's different. MDN documentation of octal syntax, for example, is only found in a deprecated/obsolete features page.
even, that they hacked their way to the tiniest solution :-) ). But some will take a second look, learn the reasons it's undesirable, and not use it.
And not use Object.setPrototypeOf?
Yup. Everyone writing for the public non-mobile web has to do it now, it's not so bad.
It took 8 years to teach JS developers not to pollute Object.prototype. I understand your concern, and I understand the possibility of dropping enumerability that could (and will) be proposed by someone.
At the same time, it would be a stubborn move aimed at fixing some deprecated, old, no-longer-maintained library, so ... hopefully that won't hurt the community much.
On 8 May 2013 22:58, Brendan Eich <brendan at mozilla.com> wrote:
Andreas Rossberg wrote:
On 8 May 2013 18:06, David Bruant<bruant.d at gmail.com> wrote:
Le 08/05/2013 08:01, Andreas Rossberg a écrit :
On 8 May 2013 07:10, Mark Miller<erights at gmail.com> wrote:
What would be gained by moving the property alone to Annex B? If nothing, then I think this consistency should win.
JavaScript implementations in new or existing eco systems that are not poisoned by web legacy wouldn't be obliged to support it. It's the difference between acknowledging web reality and forcing web reality on everybody.
What are you saying? V8 releases versions where annoying and ugly de facto standards are out so that software built on top of Node.js, MongoDB and other embedders only use a cleaner JS? Awesome! ;-)
That would be an option -- I'd very much like to move some of these things behind a flag.
But not __proto__ -- dream on if you think that is going away any time soon!
I was thinking about V8 embedders other than browsers who could toggle that flag. And I absolutely do think that should be an option supported by the spec. To Annex B with it! (And let's bury setPrototypeOf quickly.)
I would rather bury __proto__ sooner through a --no-black-magic-in-object-prototype V8 flag, but again, I proposed a flag and V8 said they don't want to go in this direction ... actually it was you saying that:
code.google.com/p/v8/issues/detail?id=2645#c3
"We have no interest in fragmenting the language space via ad-hoc flags"
?
Jeff Walden wrote:
On 05/08/2013 04:10 PM, Brendan Eich wrote:
Why would Object.setPrototypeOf have any better perf?
It wouldn't.
Then I don't know why you wrote "The reason would be to cordon off functionality whose mis-performance developers will not intuitively understand, so that they're less likely to use it" as a reason to put __proto__ in Annex B. Adding an equivalent to the main spec does not cordon off the mis-performing (non-performant?) functionality. Seems like one step forward, one step backward -- and a bigger spec.
developers will not intuitively understand, so that they're less likely to use it. Some will, even still, perhaps just out of obstinacy ("pride", I think you missed that that was directed at TC39ers, not developers.
Some developers look at language specs, so spec position does provide meager influence that way. Documentation authors are the likelier target. They're going to look at specs to figure out what the methods do, to a much greater extent. Positioning in Annex B and not in main flow sends a small message that something's different. MDN documentation of octal syntax, for example, is only found in a deprecated/obsolete features page, for example.
I'm with you here, but then putting Object.setPrototypeOf in the main spec is an implicit endorsement.
even, that they hacked their way to the tiniest solution :-) ). But some will take a second look, learn the reasons it's undesirable, and not use it. And not use Object.setPrototypeOf?
Yup.
Not use Object.setPrototypeOf and do what instead? If people need to make ad-hoc inheritance relations and Object.create doesn't fit, then what?
We're adding Object.setPrototypeOf (and __proto__, however disdained) for a reason. There's a use case. Its future frequency, which Object.setPrototypeOf might satisfy, won't go down just by putting __proto__ in Annex B.
Everyone writing for the public non-mobile web has to do it now, it's not so bad.
The non-mobile web is being eclipsed by mobile device growth. New content is written for smaller screens; old content lives on or dies, a flat to declining proposition. In the next ten years there'll be a lot of JS written for the web, AKA the mobile web. What should people use to make ad-hoc inheritance structures where Object.create does not suffice?
Andreas Rossberg wrote:
But not __proto__ -- dream on if you think that is going away any time soon!

I was thinking about V8 embedders other than browsers who could toggle that flag.
Node won't, if I recall correctly. Any other embeddings of note?
And I absolutely do think that should be an option supported by the spec. To Annex B with it!
Ok already!
(And let's bury setPrototypeOf quickly.)

Bury how? IIUC this goes in the main spec alongside ES5's Object.getPrototypeOf, requires a proxy trap, etc.
On 05/09/2013 10:12 AM, Brendan Eich wrote:
Adding an equivalent to the main spec does not cordon off the mis-performing (non-performant?) functionality.
I may have misread, but I had thought there was an argument to put Object.setPrototypeOf in Annex B as well. If it's added, that seems like the right place to me. There are somewhat orthogonal concerns here, for __proto__ and an Object.* method. Special-form badness is only in __proto__ the syntax. Prototype mutation after creation, with its erratic performance destabilization, and the impact upon proxies and [[SetInheritance]], is in __proto__ the property and an Object.* method. Both aspects raise concerns of varying degree for developers.
Not use Object.setPrototypeOf and do what instead? If people need to make ad-hoc inheritance relations and Object.create doesn't fit, then what?
The non-mobile web is being eclipsed by mobile device growth. New content is written for smaller screens; old content lives on or dies, a flat to declining proposition. In the next ten years there'll be a lot of JS written for the web, AKA the mobile web. What should people use to make ad-hoc inheritance structures where Object.create does not suffice?
Simply this: don't make ad-hoc inheritance relations where Object.create doesn't fit. Prototype mutation is extra expressiveness. But it is not necessary expressiveness, to write useful programs. There's an incredibly rich set of programs that can be written (have been written) without prototype mutation. I don't think it adds so much value to mandate it for every single embedding, considering its issues. Obviously you disagree to some degree. I suspect we'll have to leave it at that.
It's worth reiterating that Annex B would be better for all of this, and not because of the mobile or non-mobile web (although I don't think there's anything different about mobile and non-mobile with respect to __proto__'s utility -- its utility on mobile is solely a matter of mobile being dominated by engines with __proto__). It's to not penalize embeddings working on a blank slate.
Jeff Walden wrote:
On 05/09/2013 10:12 AM, Brendan Eich wrote:
Adding an equivalent to the main spec does not cordon off the mis-performing (non-performant?) functionality.
I may have misread, but I had thought there was argument to put Object.setPrototypeOf in Annex B as well.
No, main spec is all that has been discussed and that's the only way to relegate __proto__ to Annex B, per past consensus.
Simply this: don't make ad-hoc inheritance relations where Object.create doesn't fit.
No, as Allen said, such proto-setting is part of the de-facto standard language now.
Le 09/05/2013 18:14, Brendan Eich a écrit :
Andreas Rossberg wrote:
But not __proto__ -- dream on if you think that is going away any time soon!
I was thinking about V8 embedders other than browsers who could toggle that flag.
Node won't, if I recall correctly. Any other embeddings of note?
MongoDB [1]. As far as I know, it only uses JS for Map-Reduce operations, so no big need for __proto__, etc. They might consider removing weak language constructs if they have the option.
David
2013/5/9 Brendan Eich <brendan at mozilla.com>
Andreas Rossberg wrote:
(And let's bury setPrototypeOf quickly.)
Bury how? IIUC this goes in main spec alongside ES5's Object.getPrototypeOf, requires a proxy trap, etc.
Here's one potential alternative: add Reflect.setPrototypeOf but not Object.setPrototypeOf.
Rationale:
- Under Allen's proposal upstream of this thread, proxies will require a setPrototypeOf trap, regardless of whether we expose Object.setPrototypeOf (proxies will need to intercept protosetter.call)
- every trap in the Proxy API has a corresponding method with matching signature in the reflection module to make it easy for proxies to forward intercepted ops.
- it follows that we'll have Reflect.setPrototypeOf in the reflection module
The question then becomes whether we additionally want to expose an Object.setPrototypeOf alias for this method.
Putting the setPrototypeOf method only in the reflection API but not on Object could be a way of telling developers that setting the prototype of an object is a reflective operation, to be used with some care, not a general utility function to be used routinely.
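[Editor's note: a sketch of the forwarding pattern Tom describes, using the trap and method names as they eventually shipped in ES6; the handler's trap simply delegates to the matching Reflect method to get the default behavior.]

var target = {};
var handler = {
  setPrototypeOf: function (t, proto) {
    console.log('setPrototypeOf trapped');
    return Reflect.setPrototypeOf(t, proto); // default behavior
  }
};
var p = new Proxy(target, handler);

Object.setPrototypeOf(p, Array.prototype); // logs: setPrototypeOf trapped
console.log(Object.getPrototypeOf(target) === Array.prototype); // true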
Does Proxy trap Object.getPrototypeOf somehow?

If yes, why do you think having two namespaces for the prototype operation is better? If not, why do you think that is not needed in the case of getting the prototype?

In any case, how does Object.setPrototypeOf differ compared to how __proto__ was supposed to be retrieved or set before Allen's proposal (other than being cleaner, less obtrusive, and more elegant, plus it works consistently with Object.create(null) objects too)?

Wasn't __proto__ demanding some trap too on a generic Proxy?
I personally think that having getPrototypeOf in Object and setPrototypeOf in Reflect is inconsistent, and developers should be aware of what they are doing regardless of the chosen namespace.

Thanks for any extra clarification on my questions.
2013/5/20 Andrea Giammarchi <andrea.giammarchi at gmail.com>
Does Proxy trap Object.getPrototypeOf somehow? If yes, why do you think having two namespaces for the prototype operation is better? If not, why do you think that is not needed in the case of getting the prototype?

Yes, there's a getPrototypeOf trap.

I'm not claiming that putting getPrototypeOf on Object and in the reflection module, while putting setPrototypeOf only in the reflection module, is necessarily better. It is a bit inconsistent, but Object.getPrototypeOf existed before proxies and the reflection module, while setPrototypeOf is new.
The ES6 reflection module is the obvious best place to put functionality like this. Arguably, if we'd had a module like this earlier, it would have housed most of the static methods currently defined on Object. It is unfortunate that the reflection module must duplicate a lot of the Object statics from ES5. Going forward, however, there is nothing forcing us to continue this pollution of Object (except of course that the reflection module depends on modules, hence ES6 syntax, while adding a new static to Object does not. That would be a pragmatic reason to still add Object.setPrototypeOf.)
In any case, how does Object.setPrototypeOf differ compared to how __proto__ was supposed to be retrieved or set before Allen's proposal (other than being cleaner, less obtrusive, and more elegant, plus it works consistently with Object.create(null) objects too)? Wasn't __proto__ demanding some trap too on a generic Proxy?

To date, proxies didn't specify how to interact with __proto__, since __proto__ fell outside the spec proper.
For direct proxies, one does not necessarily need a trap. Indeed, I just tested on Firefox 21 and if you extract the proto setter and apply it to a proxy, it will set the prototype of its target object without trapping.
I personally think that having getPrototypeOf in Object and setPrototypeOf in Reflect is inconsistent, and developers should be aware of what they are doing regardless of the chosen namespace.
Indeed, it's inconsistent with ES5. But considering the broader ES6 and beyond time frame, now might be a good time to stop and consider whether we want to keep "polluting" the Object built-in with these methods. Admittedly it has worked fine in practice, and there is precedent even in ES6 to continue doing it (e.g. Object.getOwnPropertyKeys).
I believe having a counterpart in Object, following the natural expectation that where you've got a get you've got a set, is just fine, but surely Reflect should have its own "reflection power" apart.
I see Reflect more like an introspection tool able to understand things and not necessarily mutate them (yes, similar to ReflectionClass or ReflectionMethod in PHP; that worked there, and still you cannot change an object's class).
Reflect is a good place to put an fn.caller equivalent and not to set one, so I don't see setPrototypeOf as a good fit for that namespace.

If it is, talking about graceful migration, setPrototypeOf could be on both globals, with the Object version warning in the console about being deprecated as soon as the next specs are out, where there won't be anything in Object anymore -- but this sounds quite unrealistic, so I'd rather use what worked 'till now, the global Object.

Just my 2 cents
On Mon, May 20, 2013 at 9:54 AM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:
For direct proxies, one does not necessarily need a trap. Indeed, I just tested on Firefox 21 and if you extract the proto setter and apply it to a proxy, it will set the prototype of its target object without trapping.
if you want to know my opinion, this is scary and undesired :-)
On 5/20/2013 10:55 AM, Andrea Giammarchi wrote:
I believe having a counterpart in Object, following the natural expectation that where you've got a get you've got a set, is just fine, but surely Reflect should have its own "reflection power" apart.

I see Reflect more like an introspection tool able to understand things and not necessarily mutate them (yes, similar to ReflectionClass or ReflectionMethod in PHP; that worked there, and still you cannot change an object's class).

Reflect is a good place to put an fn.caller equivalent and not to set one, so I don't see setPrototypeOf as a good fit for that namespace.
One of the primary purposes of the Reflect module is to serve as support for Proxy handlers. For every type of trap that Proxy supports, there is a corresponding function in Reflect that does the default behavior for that trap. Given mutable [[Prototype]], a Proxy trap for setPrototype needs to exist, and by extension Reflect.setPrototype[Of] needs to exist.
That's fine with what I am thinking/saying ... it's used as reflection, to intercept, or to trap, and not used to "do the action" of setting the prototype, so Object is, and you confirmed this, a better place for setPrototypeOf.
Andrea Giammarchi wrote:
That's fine with what I am thinking/saying ... it's used as reflection, to intercept, or to trap, and not used to "do the action" of setting the prototype, so Object is, and you confirmed this, a better place for setPrototypeOf.
No, Brandon wrote:
"[for every meta-level operation], there is a corresponding function in Reflect that does the default behavior for that [meta-level operation]".
Don't confuse proxy handler traps with Reflect.* methods. This is why Tom asked whether we really need Object.setPrototypeOf, given the identical (in behavior, could even be the same function object) Reflect.setPrototypeOf.
This all needs to be discussed at this week's TC39 meeting. Allen, could you please add it to the agenda.
Can I ask when the next TC39 meeting is?
Can I also suggest analyzing this piece of code, if there's still any doubt left about a method vs. a property, and if it hasn't been highlighted before?
Behavior in Safari and Firefox Nightly (V8 still on its own here):
var obj = JSON.parse('{"__proto__":[]}');
console.log(obj instanceof Array); // false
for(var key in obj) console.log(key); // logs: __proto__
obj[key || "__proto__"] = {};
console.log(obj instanceof Array); // false
// immutable via string
var obj = {"__proto__":[]};
console.log(obj instanceof Array); // true
for(var key in obj) alert(key); // never happens
obj[key || "__proto__"] = {};
console.log(obj instanceof Array); // false
// changed via string
Those look like two different kinds of Object instance. Nothing like this would ever happen with an explicit method instead, and there would be less trouble in the specs for JSON too (together with for/in loops and property behavior with null-[[Prototype]] and non-null-[[Prototype]] objects).
Thanks and Best
On 5/20/2013 5:58 PM, Andrea Giammarchi wrote:
can I ask when is next TC39 meeting?
Starts tomorrow.
Andrea Giammarchi wrote:
can I also suggest to analyze, if there's still any doubt left on a method VS a property yet, this piece of code if not highlighted before?
I do not understand what you mean here.
Behavior in Safari and FirefoxNightly (V8 still by its own here)
var obj = JSON.parse('{"__proto__":[]}');
console.log(obj instanceof Array); // false
for(var key in obj) console.log(key); // logs: __proto__
obj[key || "__proto__"] = {};
console.log(obj instanceof Array); // false
// immutable via string

var obj = {"__proto__":[]};
console.log(obj instanceof Array); // true
for(var key in obj) alert(key); // never happens
obj[key || "__proto__"] = {};
console.log(obj instanceof Array); // false
// changed via string
Those look like two different kind of Object instance.
JSON is not JS.
I'm not sure what this has to do with anything discussed recently in this thread.
We are not going to go around the "don't standardize __proto__" barn again. Anyone trying to get web share with a new browser (which these days must include "the mobile web") needs to implement __proto__. That's why it is going into Annex B at least.
2013/5/20 Andrea Giammarchi <andrea.giammarchi at gmail.com>
I believe having a counterpart in the Object, following a natural expectation where for a get you've got a set, is just fine but surely Reflect should have its own "reflection power" a part.
Yeah, given the existence of Object.getPrototypeOf, I agree it would be awkward to have Reflect.setPrototypeOf but not Object.setPrototypeOf.
I see Reflect more like an introspection tool able to understand things and not necessarily mutate them ( yes, similar to what is ReflectionClass or ReflectionMethod in PHP, that worked there, still you cannot change an object class ).
Reflect is a good place to put an fn.caller equivalent and not to set one, so I don't see setPrototypeOf as a good fit for that namespace.
Nit: I think you got it backwards: the term "reflection" was originally used to mean that you could both observe and mutate parts of a program. Observation-only reflection was historically called "introspection".
On Tue, May 21, 2013 at 12:56 AM, Brendan Eich <brendan at mozilla.com> wrote:
Andrea Giammarchi wrote:
can I also suggest to analyze, if there's still any doubt left on a method VS a property yet, this piece of code if not highlighted before?
I do not understand what you mean here.
I mean that JSON, as part of the specs, needs to consider that "magic" property case, resulting in an instanceof Object with an enumerable property that will show up in a for/in loop but is not able to mutate the object.
var obj = JSON.parse('{"__proto__":[]}');
alert(obj instanceof Array); // false
alert(obj["__proto__"] instanceof Array); // true
obj["__proto__"] = obj["__proto__"];
// or
for (var key in obj) {
obj[key] = obj[key];
// could be a generic
// clone operation
}
alert(obj instanceof Array); // false
alert(obj instanceof Object); // true
The above kind of object is "not perfectly described in current specs" and is different from any other where __proto__ is the inherited and not an own property.

This is what developers should be aware of: __proto__ might not be what they think it is, while Object.setPrototypeOf(obj, proto):obj will always work as expected, as well as Object.getPrototypeOf(obj):proto.

All I am saying is that I understood the reasons __proto__ is there, but I hope there won't be any step backward on Object.setPrototypeOf.

All the best, and looking forward to reading the notes.
By "always work" I meant as long as the object is not sealed/frozen, as discussed a while ago.
On 5/21/2013 9:43 AM, Andrea Giammarchi wrote:
On Tue, May 21, 2013 at 12:56 AM, Brendan Eich <brendan at mozilla.com <mailto:brendan at mozilla.com>> wrote:
Andrea Giammarchi wrote: can I also suggest to analyze, if there's still any doubt left on a method VS a property yet, this piece of code if not highlighted before? I do not understand what you mean here.
I mean that JSON, as part of the specs, needs to consider that "magic" property case, resulting in an instanceof Object with an enumerable property that will show up in a for/in loop but is not able to mutate the object.

var obj = JSON.parse('{"__proto__":[]}');
alert(obj instanceof Array); // false
alert(obj["__proto__"] instanceof Array); // true
obj["__proto__"] = obj["__proto__"];
// or
for (var key in obj) {
  obj[key] = obj[key];
  // could be a generic
  // clone operation
}
alert(obj instanceof Array); // false
alert(obj instanceof Object); // true

The above kind of object is "not perfectly described in current specs" and is different from any other where __proto__ is the inherited and not an own property.
JSON is not a subset of JS [1] already. There's no reason why it has to follow a newly specified JS syntax rule.
Consider this then -- the same thing JSON is doing now in FF and Safari:
var obj = Object.defineProperty({}, '__proto__', {
enumerable: true,
writable: true,
configurable: true,
value: []
});
console.log(obj instanceof Array); // false
obj.__proto__ = Array.prototype;
console.log(obj instanceof Array); // false
for(var key in obj) console.log(key); // "__proto__"
console.log(JSON.stringify(obj)); // {"__proto__":[]}
console.log(JSON.parse('{"__proto__":[]}') instanceof Array); // false
The above example would not even exist in a world where such a magic property is not present, so, once again, I understood a while ago the reasons it is there but, once again, I believe Object.setPrototypeOf is much more needed than such a property for real-world cases, where you test:

if (Object.isExtensible(obj)) { /* setPrototypeOf */ }

instead of:

if (!Object.prototype.hasOwnProperty.call(obj, "__proto__") || !(obj instanceof Object) && Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set) { try { setter.call(obj, proto) } catch (o_O) { alert("setter was poisoned") } }
Best