The global object as the "global scope instance object"

# Andreas Rossberg (13 years ago)

From the TC39 notes:

DaveH: What if there are duplicate top-level bindings?
MarkM: Top-level let should behave as much as possible the same as concatenated scripts.

It's my claim that it is impossible to reconcile all of the following three goals in a way that is even remotely sane:

  1. Top-level bindings are aliased as data properties of the global object.
  2. Top-level bindings allow redefinition of existing properties of the global object.
  3. Top-level let/const have a temporal dead-zone.

Abandoning (3) essentially means abandoning a reasonable "const" semantics for the toplevel (and is future-hostile to guards), so we don't want to do that. (For V8, we had long struggles with defining and implementing a const for the (current notion of) top-level that actually makes any sense. We failed.)

Hence, I would like to suggest another, more moderate idea for a global scope reform: treat the global object similarly to module instance objects. That is:

  • Top-level bindings (other than var) are reflected as accessor properties on the global object.
  • They are defined on the global object before the script starts executing.
  • Their getters/setters simply delegate to the bound variables.
  • For const bindings, there are no setters.
  • These properties are non-configurable.
  • Unlike a module instance object, the global object is extensible (multiple scripts!).

This disallows lexical bindings from redefining existing non-configurable properties of the global object (including ones reflecting bindings from earlier scripts), which is checked before the script starts executing. Likewise, let/const-defined properties cannot later be deleted from the global object.

Apart from that, not much should change in practical terms. You should pretty much be able to replace var with let. At the same time, static scoping actually is maintained, and there is no issue with temporal dead zones. Moreover, concatenating scripts would not affect the meaning or validity of bindings, AFAICS.
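To make the mechanics concrete, here is a rough sketch (not part of the proposal text itself) of how a single top-level `let x = 42;` might be reflected under this scheme, assuming a classic script where `this` is the global object; the `__x` and `__x_initialized` names are invented for the example and stand in for the engine's internal environment record:

// Backing storage for the lexical binding (really an internal environment slot).
let __x;
let __x_initialized = false;  // temporal dead zone flag

// Defined on the global object *before* the script body starts executing:
Object.defineProperty(this, "x", {
  get: function () {
    if (!__x_initialized) throw new ReferenceError("x is not initialized");
    return __x;
  },
  set: function (v) {
    if (!__x_initialized) throw new ReferenceError("x is not initialized");
    __x = v;  // for a top-level const, no setter would be defined at all
  },
  enumerable: true,
  configurable: false
});

// ...later, when execution reaches the declaration, the binding is initialized:
__x = 42;
__x_initialized = true;

Reads and writes of `x` within the script itself would go straight to the lexical binding; only access through the global object goes through the accessor.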

Regarding scope pollution, I think this is best solved by making the global object be a fresh empty object that has the object containing all primordials as its prototype. Overriding is allowed according to the standard defineProperty semantics. No special rules necessary.
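A minimal sketch of that structure (the names below are invented for illustration; nothing here is agreed semantics):

// Hypothetical setup: the primordials live on a separate object, and the
// global object proper starts out as a fresh, empty object inheriting from it.
var primordials = { Object: Object, Array: Array, Math: Math, parseInt: parseInt /* , ... */ };
var globalObject = Object.create(primordials);

globalObject.parseInt("7");  // 7 -- found on the prototype

// Scripts may shadow a primordial with plain defineProperty/assignment semantics:
Object.defineProperty(globalObject, "parseInt", {
  value: function (s) { return Number(s); },
  writable: true, enumerable: true, configurable: true
});
globalObject.parseInt("7");  // now uses the shadowing own property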

# Mark S. Miller (13 years ago)

On Fri, Jan 20, 2012 at 5:55 AM, Andreas Rossberg <rossberg at google.com>wrote: [...]

Regarding scope pollution, I think this is best solved by making the global object be a fresh empty object that has the object containing all primordials as its prototype.

[...]

<very pedantic -- you've been warned>

Just a pedantic terminology point for now: Presumably you mean "global primordials other than the global object itself". Object.prototype.toString is a primordial, but presumably there is no direct binding from your proposed object to Object.prototype.toString. (An alternate interpretation of your "containing" might be "reachable", in which case it would apply to Object.prototype.toString, but IMO that's a confusing way to specify things.)

I would classify Object as a global primordial, because it is initially bound, according to ES specification, to the global variable named "Object".

The global object itself is bound to top level "this", which isn't exactly a global variable, so it is unclear whether the unqualified phrase "global primordials" should include it. Hence the qualification.

Although the global object is often also bound to a global variable like "window", this does not make it a global primordial, since that binding derives from the hosting environment rather than any ES spec.

Were we to adopt your proposal, this new object would itself be a new primordial, reachable directly from the global object but not named by any global variable. Indeed, it would be one of the very few primordials not named by a naming path rooted in a global variable. (Another example is the ThrowTypeError object of 13.2.3. Are there any others? I can't think of any.) </very pedantic -- you've been warned>

Substantive comments later after I've mulled over your very interesting suggestion.

# Allen Wirfs-Brock (13 years ago)

On Jan 20, 2012, at 5:55 AM, Andreas Rossberg wrote:

From the TC39 notes:

DaveH: What if there are duplicate top-level bindings?
MarkM: Top-level let should behave as much as possible the same as concatenated scripts.

It's my claim that it is impossible to reconcile all of the following three goals in a way that is even remotely sane:

  1. Top-level bindings are aliased as data properties of the global object.
  2. Top-level bindings allow redefinition of existing properties of the global object.

The rules are more complex than this; see ecmascript#78. There are important distinctions between own and inherited properties of the global object, and between properties created by var/function declarations (all we have in ES5) and arbitrary properties created directly by the user or host.

  3. Top-level let/const have a temporal dead-zone.

Abandoning (3) essentially means abandoning a reasonable "const" semantics for the toplevel (and is future-hostile to guards), so we don't want to do that. (For V8, we had long struggles with defining and implementing a const for the (current notion of) top-level that actually makes any sense. We failed.)

Temporal dead-zone initialization tracking requires extra state and checking (except for the common cases within functions where it can be statically determined that there are no possible references within the dead zone). This additional requirement is reflected in the specification in the ES5 definition of the CreateImmutableBinding and InitializeImmutableBinding environment record methods. In ES5 this mechanism (and state) was only needed within DeclarativeEnvironmentRecords for (the rarely used) immutable bindings. For the ES6 spec this has been generalized to also apply to ObjectEnvironmentRecords and to track initialization of both mutable and immutable bindings.

The specification doesn't address how an implementation might represent that state, but it needs to be somewhere. Closures capturing the state in getter/setter functions may be one way to do this for the global scope. I imagine that there are other ways. For example, global object properties could have an implementation-specific "not yet initialized" bit associated with them. Or you might "shadow" the entire global object with a special kind of environment record that censors access to instantiated but not yet initialized global bindings (add an entry to the censor environment when the binding is instantiated, and only create a property in the global object when it is initialized).

However, I agree that there are still significant issues with rationally integrating global object property access semantics and the ES global declaration semantics, both for ES5 and ES6.

Hence, I would like to suggest another, more moderate idea for a global scope reform: treat the global object similarly to module instance objects. That is:

  • Top-level bindings (other than var) are reflected as accessor properties on the global object.
  • They are defined on the global object before the script starts executing.
  • Their getters/setters simply delegate to the bound variables.

whose access presumably performs the necessary checks to enforce the temporal dead zone.

  • For const bindings, there are no setters.
  • These properties are non-configurable.

It is already the case (for ES5) that all properties created for global declarations are non-configurable (unless created by an eval).

  • Unlike a module instance object, the global object is extensible (multiple scripts!).

This disallows lexical bindings to redefine existing non-configurable properties of the global object (including ones reflecting bindings from earlier scripts), which is checked before the script starts executing. Likewise, let/const-defined properties cannot later be deleted from the global object.

I'm not yet convinced that you need to publicly expose such global object properties as accessor properties. I think it is possible to hide the mechanisms.

Apart from that, not much should change in practical terms. You should pretty much be able to replace var with let. At the same time, static scoping actually is maintained, and there is no issue with temporal dead zones. Moreover, concatenating scripts would not affect the meaning or validity of bindings, AFAICS.

Independent of implementation schemes, I think we need to decide on the meaning of the following independent scripts (in different documents)

<script> "do not use strict"; //window.foo does not exist at this point. foo = 10 const foo=5; //or let print(foo); </script>

<script>
//window.foo does not exist at this point.
window.foo = 10
const foo=5; //or let
print(foo);
</script>

<script>
//window.foo does not exist at this point.
print(foo);
const foo=5; //or let
print(foo);
</script>

...

Regarding scope pollution, I think this is best solved by making the global object be a fresh empty object that has the object containing all primordials as its prototype. Overriding is allowed according to the standard defineProperty semantics. No special rules necessary.

In that case, why is it an object at all? Why not simply a DeclarativeEnvironmentRecord in an environment whose outer is a GlobalEnvironmentRecord whose backing object is the global object? I'm actually thinking along some similar lines, but I think this direction raises interesting questions, going forward, about some current fundamental assumptions:

What is the role of the global object? Must all ES global declarations reify as properties on the global object? If not, which global declarations must reify as such? (what are the actual web dependencies). Must all built-in globally accessible bindings reify as global object properties?

I have some ideas. I'll see if they hold together as I try to fill them out.

# Andreas Rossberg (13 years ago)

On 21 January 2012 01:23, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Temporal dead-zone initialization tracking requires extra state and checking (except for the common cases within functions where it can be statically determined that there are no possible references within the dead zone). This additional requirement is reflected in the specification in the ES5 definition of the CreateImmutableBinding and InitializeImmutableBinding environment record methods. In ES5 this mechanism (and state) was only needed within DeclarativeEnvironmentRecords for (the rarely used) immutable bindings. For the ES6 spec this has been generalized to also apply to ObjectEnvironmentRecords and to track initialization of both mutable and immutable bindings.

In my proposal, I think you wouldn't need this generalisation. Uninitialised state needs to be tracked for environment bindings only, not for objects. The indirection through accessors makes the latter unnecessary.

Hence, I would like to suggest another, more moderate idea for a global scope reform: treat the global object similarly to module instance objects. That is:

  • Top-level bindings (other than var) are reflected as accessor properties on the global object.
  • They are defined on the global object before the script starts executing.
  • Their getters/setters simply delegate to the bound variables.

whose access presumably performs the necessary checks to enforce the temporal dead zone.

Yes.

  • For const bindings, there are no setters.
  • These properties are non-configurable.

It is already the case (for ES5) that all properties created for global declarations are non-configurable (unless created by an eval).

  • Unlike a module instance object, the global object is extensible (multiple scripts!).

This disallows lexical bindings to redefine existing non-configurable properties of the global object (including ones reflecting bindings from earlier scripts), which is checked before the script starts executing. Likewise, let/const-defined properties cannot later be deleted from the global object.

I'm not yet convinced that you need to publicly expose such global object properties as accessor properties. I think it is possible to hide the mechanisms.

I don't think so, at least not as cleanly. With accessors, you avoid the whole notion of uninitialised object property, which is painful and pretty intrusive.

In any case, we should do the same for the toplevel and for modules.

Independent of implementation schemes, I think we need to decide on the meaning of the following independent scripts (in different documents)

Those are the kinds of examples I was alluding to. And the main point of my proposal, and the use of accessors, is that those examples get a natural and coherent meaning, without requiring any special rules. Namely:

<script> "do not use strict"; //window.foo does not exist at this point. foo = 10 const foo=5;  //or let print(foo); </script>

The effect of this will be the same as in any other scope, e.g.:

{
  foo = 10
  const foo=5;  //or let
  print(foo);
}

I.e., the first assignment is an (early) error, because you can't assign to a const. If it was a let, it would still be a temporal dead zone violation. The global object doesn't enter the picture at all.

<script>
//window.foo does not exist at this point.
window.foo = 10
const foo=5; //or let
print(foo);
</script>

This behaves the same as:

module W {
  W.foo = 10
  const foo=5;
  print(foo);
}

which in turn behaves like:

Object.defineProperty(window, "foo", {get: function() { return foo }})
window.foo = 10
const foo = 5

I.e., the assignment again is an error, this time because there is no setter for foo. If it was a let, then you'd have a temporal dead zone violation in the setter.

<script>
//window.foo does not exist at this point.
print(foo);
const foo=5; //or let
print(foo);
</script>

Again, same as in any other scope: temporal dead zone violation on the first print.

The idea is that the top level is just another declarative scope, and all rules regarding its reflection in the global object mirror the rules for module instance objects. Less rules, less pain. :)

Regarding scope pollution, I think this is best solved by making the global object be a fresh empty object that has the object containing all primordials as its prototype. Overriding is allowed according to the standard defineProperty semantics. No special rules necessary.

In that case, why is it an object at all?  Why not simply a DeclarativeEnvironmentRecord in an environment whose outer is a GlobalEnvironmentRecord whose backing object is the global object?

That's more or less what I am thinking, if, by "global object" in that sentence, you mean the object carrying the (other) primordials (as opposed to the "toplevel instance object").

The reflection of the toplevel scope as its own object becomes secondary, and does not affect the environment semantics. The object is only used as a value for toplevel `this'.

The only complication for this scheme might be toplevel var, which currently does not shadow (all) external global bindings. That is perhaps worth changing for extended mode.

I'm actually thinking along some similar lines, but I think this direction raises interesting questions, going forward, about some current fundamental assumptions:

What is the role of the global object?

That's what I'm trying to answer with my proposal: the new global object is merely the "top-level instance object", in analogy to module instance objects. Only needed as a value for `this'.

Must all ES global declarations reify as properties on the global object? If not, which global declarations must reify as such?  (what are the actual web dependencies). Must all built-in globally accessible bindings reify as global object properties?

I only see two sensible answers: all or none. I was assuming the former.

Of course, you could go even further down the road of module-like behaviour, and let the programmer decide using export declarations. But that seems like overkill, and isn't backwards compatible either, so I wouldn't propose it.

# Allen Wirfs-Brock (13 years ago)

On Jan 21, 2012, at 12:25 AM, Andreas Rossberg wrote:

On 21 January 2012 01:23, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Temporal dead-zone initialization tracking requires extra state and checking (except for the common cases within functions where it can be statically determined that there are no possible references within the dead zone). This additional requirement is reflected in the specification in the ES5 definition of the CreateImmutableBinding and InitializeImmutableBinding environment record methods. In ES5 this mechanism (and state) was only needed within DeclarativeEnvironmentRecords for (the rarely used) immutable bindings. For the ES6 spec this has been generalized to also apply to ObjectEnvironmentRecords and to track initialization of both mutable and immutable bindings.

In my proposal, I think you wouldn't need this generalisation. Uninitialised state needs to be tracked for environment bindings only, not for objects. The indirection through accessors makes the latter unnecessary.

Yes, but you still have the state because you are creating an actual environment binding for the global variable.

BTW, I tried to express your approach (as I understand it) as a desugaring. Here it is:

<script>
print(foo);
let foo="initialized";
</script>

desugars into something like:

<script>
{  //need to wrap a block around entire top level
  // declaration instantiation for global let and const binding in this program
  const __getter_foo = function() {return __foo};
  const __setter_foo = function(v) {__foo = v};
  Object.defineProperty(this, 'foo', {get: __getter_foo, set: __setter_foo, enumerable: true, configurable: false});
  //end of declaration instantiation
  print(foo);  //this.foo get accessor, but __foo will be uninitialized so it will actually throw a reference error (temporal dead zone)
  let __foo = "initialized";  //declare and initialize backing store for global variable foo
}
</script>

... This disallows lexical bindings to redefine existing non-configurable properties of the global object (including ones reflecting bindings from earlier scripts), which is checked before the script starts executing. Likewise, let/const-defined properties cannot later be deleted from the global object.

I'm not yet convinced that you need to publicly expose such global object properties accessor properties. I think it is possible to hide the mechanisms.

I don't think so, at least not as cleanly. With accessors, you avoid the whole notion of uninitialised object property, which is painful and pretty intrusive.

It sounds to me like you are making an ease of implementation argument. I'm saying that an implementor could hide your proposed mechanism, or some other semantically equivalent mechanism, rather than just exposing the mechanism. No doubt that is true, but it isn't clear to me that hiding the mechanism would be impossible or even excessively burdensome. From a user perspective, it would certainly be better to have it be hidden.

In any case, we should do the same for the toplevel and for modules.

I believe that the current plan intentionally requires different semantics for the top level and for the top interior level of modules. I agree that a scheme that minimizes the difference is desirable. It isn't clear to me that eliminating all differences is possible given legacy top level semantics.

Independent of implementation schemes, I think we need to decide on the meaning of the following independent scripts (in different documents)

Those are the kinds of examples I was alluding to. And the main point of my proposal, and the use of accessors, is that those examples get a natural and coherent meaning, without requiring any special rules. Namely:

I just want to make sure we know what we want these to do, independent of implementation strategy. I don't think we should be using accessors or any other implementation tricks in deciding the user facing semantics.

<script> "do not use strict"; //window.foo does not exist at this point. foo = 10 const foo=5; //or let print(foo); </script>

The effect of this will be the same as in any other scope, e.g.:

{
  foo = 10
  const foo=5;  //or let
  print(foo);
}

I.e., the first assignment is an (early) error, because you can't assign to a const. If it was a let, it would still be a temporal dead zone violation. The global object doesn't enter the picture at all.

I might agree, but first we also would have to consider what the semantics are if window.foo already existed prior to starting the script block, and whether or not it is acceptable that breaking the script into two scripts (split after the foo=10 statement) produces a different result.

<script>
//window.foo does not exist at this point.
window.foo = 10
const foo=5; //or let
print(foo);
</script>

This behaves the same as:

module W {
  W.foo = 10

Is the above legal in the module system? The implication is that every module is reified as an object bound within its body, and hence the user can use property definitions (including assignment to non-existent properties) instead of explicit exports to define the module exports?

  const foo=5;
  print(foo);
}

which in turn behaves like:

Object.defineProperty(window, "foo", {get: function() { return foo }})
window.foo = 10
const foo = 5

I.e., the assignment again is an error, this time because there is no setter for foo. If it was a let, then you'd have a temporal dead zone violation in the setter.

<script>
//window.foo does not exist at this point.
print(foo);
const foo=5; //or let
print(foo);
</script>

Again, same as in any other scope: temporal dead zone violation on the first print.

The idea is that the top level is just another declarative scope, and all rules regarding its reflection in the global object mirror the rules for module instance objects. Less rules, less pain. :)

I'm not at all sure that it would be wise to think of the global object as a module instance object. I do agree that there is still some benefit and possibilities for using declarative scope semantics at the global level.

# Gavin Barraclough (13 years ago)

On Jan 20, 2012, at 4:23 PM, Allen Wirfs-Brock wrote:

I'm not yet convinced that you need to publicly expose such global object properties as accessor properties. I think it is possible to hide the mechanisms.

I'm strongly inclined to agree with this sentiment – engine implementors might choose internally to utilize accessors in their implementations, but from a user perspective it seems to make much more sense that these remain as regular data properties. It is worth bearing in mind that 'const' is already available outside of strict mode and we need to be considerate of any breaking changes we make to existing semantics. (We could define incompatible semantics for strict-mode only, but this would likely lead to a confusing divergence in semantics for users – it could be horribly confusing if const variables in non-strict code were to be implemented as data properties whilst those in strict code were defined as accessor properties).

alert('value' in Object.getOwnPropertyDescriptor(this, 'x'));
alert(Object.getOwnPropertyDescriptor(this, 'x').writable === true);
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 1;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
const x = 2;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 3;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);

This script tests the presence and writability of a property on the global object, and the effect of attempting to assign to it in non-strict code. It demonstrates a non-strict const behaving as a regular non-writable property in non-strict code.

On FireFox the above script outputs "true, false, undefined, undefined, 2, 2", which to my mind makes a lot of sense. It perfectly matches, as far as one can test, any other regular non-writable property that you could create on an object (of course there is not normally the opportunity to split creation and initialization of a non-writable property).

This behaviour is also sensibly consistent with that for let and var. Running the above script, changing 'const' to 'var' outputs "true, true, undefined, 1, 2, 3" (indicating the property is writable, and all assignments succeed), and for 'let' currently also outputs "true, true, undefined, 1, 2, 3" on FireFox. To my mind this behaviour should probably change slightly. With a temporal dead zone implemented I would expect an attempt to [[DefineOwnProperty]] to an uninitialized value to be rejected (though I would not expect an exception to be thrown from non-strict code), so I would expect the output for 'let' to be "true, true, undefined, undefined, 2, 3" (the first assignment silently fails, leaving the value unchanged).

I would suggest that specifying this as a piece of additional hidden internal state on a data descriptor (rejecting attempts to define or access the value prior to initialization, throwing in strict-mode & silently ignoring in non-strict) makes most sense on a number of grounds:

– compatibility with existing non-strict const implementations.
– consistency with appearance of var properties on the global object.
– consistency with appearance of regular non-writable properties on other objects.
– consistency with behaviour of access to regular non-writable properties on other objects.
– data properties typically have higher performance in implementations; specifying all global let & const properties to be accessors may introduce unnecessary overhead.
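For orientation, here is a rough side-by-side (not from the thread; the attribute values are illustrative assumptions rather than agreed spec text) of how an already-initialized top-level `const x = 2;` would reflect through the property descriptor under the two approaches being discussed:

// Hypothetical: after a top-level `const x = 2;` has been initialized.
var desc = Object.getOwnPropertyDescriptor(this, 'x');

// Data-property approach (with hidden internal "uninitialized" state):
//   desc -> { value: 2, writable: false, enumerable: true, configurable: false }

// Accessor approach (as in Allen's desugaring of Andreas's proposal):
//   desc -> { get: [Function], set: undefined, enumerable: true, configurable: false }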

# Andreas Rossberg (13 years ago)

On 21 January 2012 20:36, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Jan 21, 2012, at 12:25 AM, Andreas Rossberg wrote:

On 21 January 2012 01:23, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Temporal dead-zone initialization tracking requires extra state and checking (except for the common cases within functions where it can be statically determined that there are no possible references within the dead zone). This additional requirement is reflected in the specification in the ES5 definition of the CreateImmutableBinding and InitializeImmutableBinding environment record methods. In ES5 this mechanism (and state) was only needed within DeclarativeEnvironmentRecords for (the rarely used) immutable bindings. For the ES6 spec this has been generalized to also apply to ObjectEnvironmentRecords and to track initialization of both mutable and immutable bindings.

In my proposal, I think you wouldn't need this generalisation. Uninitialised state needs to be tracked for environment bindings only, not for objects. The indirection through accessors makes the latter unnecessary.

Yes, but you still have the state because you are creating an actual environment binding for the global variable.

Yet, it would be nicely encapsulated in the semantics of temporal dead zone for variables, i.e. environment bindings. You don't need to proliferate it into the semantics of objects and object properties, where it does not belong. I think this is important (see also my reply to Gavin).

BTW, I tried to express your approach (as I understand it) as a desugaring. Here it is:

<script>
print(foo);
let foo="initialized";
</script>

desugars into something like:

<script>
{  //need to wrap a block around entire top level
  // declaration instantiation for global let and const binding in this program
  const __getter_foo = function() {return __foo};
  const __setter_foo = function(v) {__foo = v};
  Object.defineProperty(this, 'foo', {get: __getter_foo, set: __setter_foo, enumerable: true, configurable: false});
  //end of declaration instantiation
  print(foo);  //this.foo get accessor, but __foo will be uninitialized so it will actually throw a reference error (temporal dead zone)
  let __foo = "initialized";  //declare and initialize backing store for global variable foo
}
</script>

Yes, except that the print would simply amount to print(__foo) directly, not involving the global object at all. Just as in any other scope.

And of course, any decent implementation would optimize these special accessors, just as it will have to for modules, which introduce the same kind of structure.

It sounds to me like you are making an ease of implementation argument. I'm saying that an implementor could hide your proposed mechanism, or some other semantically equivalent mechanism, rather than just exposing the mechanism. No doubt that is true, but it isn't clear to me that hiding the mechanism would be impossible or even excessively burdensome. From a user perspective, it would certainly be better to have it be hidden.

Actually, I'm not making an implementation argument at all! Implementations will want to optimize the accessors, which clearly is work (though, again, work that has to be done for modules anyway).

I'm making a semantics argument. I claim that the accessor approach is simpler and more regular. Simpler to spec, simpler to explain (at least when it comes to the details).

In any case, we should do the same for the toplevel and for modules.

I believe that the current plan intentionally requires different semantics for the top level and for the top interior level of modules. I agree that a scheme that minimizes the difference is desirable. It isn't clear to me that eliminating all differences is possible given legacy top level semantics.

That may be true, but we should try. (Not sure about the "intentionally", though, other than that modules "intentionally" avoid the issues with the current toplevel semantics.)

<script> "do not use strict"; //window.foo does not exist at this point. foo = 10 const foo=5;  //or let print(foo); </script>

The effect of this will be the same as in any other scope, e.g.:

{
  foo = 10
  const foo=5;  //or let
  print(foo);
}

I.e., the first assignment is an (early) error, because you can't assign to a const. If it was a let, it would still be a temporal dead zone violation. The global object doesn't enter the picture at all.

I might agree, but first we also would have to consider what the semantics are if window.foo already existed prior to starting the script block, and whether or not it is acceptable that breaking the script into two scripts (split after the foo=10 statement) produces a different result.

The properties on the global object would have to be defined before the script starts. If they cannot be defined, that is an early error.

Splitting the script as you describe does not make sense in strict mode, because the first half would not compile (early reference error).

In classic mode (assuming we really take the pain of backporting const), I assume it would work, because foo is still configurable when you reach the second script.
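For concreteness, here is a sketch of the split being discussed, with the behaviour as I read the answer above for classic mode (an illustration of the argument, not agreed semantics):

<script>
  // classic mode: the sloppy assignment creates an ordinary, configurable
  // data property window.foo
  foo = 10;
</script>

<script>
  // before this body runs, the top-level const is reflected on the global
  // object; redefining the still-configurable window.foo as a non-configurable
  // accessor succeeds, so this second script works (unlike the unsplit script,
  // where the assignment itself is the error)
  const foo = 5;
  print(foo);  // 5
</script>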

<script>
//window.foo does not exist at this point.
window.foo = 10
const foo=5; //or let
print(foo);
</script>

This behaves the same as:

module W {
  W.foo = 10
  const foo=5;
  print(foo);
}

Is the above legal in the module system? The implication is that every module is reified as an object bound within its body, and hence the user can use property definitions (including assignment to non-existent properties) instead of explicit exports to define the module exports?

Yes, every module is reified to an instance object, and since modules are recursive, this is available in its own body.

But no, the example is not legal, in the sense that it will throw a runtime error, for the reason I described (there is no setter for foo; with let, you'd get a deadzone violation).

Nor would it work to assign another property `bar' to W, for that matter, because module instance objects are not extensible. As I said, in that respect, they would differ from the toplevel object.

I'm not at all sure that it would be wise to think of the global object as a module instance object.

You did not really tell why you think so. :)

# Andreas Rossberg (13 years ago)

On 22 January 2012 23:46, Gavin Barraclough <barraclough at apple.com> wrote:

On Jan 20, 2012, at 4:23 PM, Allen Wirfs-Brock wrote:

I'm not yet convinced that you need to publicly expose such global object properties as accessor properties. I think it is possible to hide the mechanisms.

I'm strongly inclined to agree with this sentiment – engine implementors might choose internally to utilize accessors in their implementations, but from a user perspective it seems to make much more sense that these remain as regular data properties.  It is worth bearing in mind that 'const' is already available outside of strict mode and we need to be considerate of any breaking changes we make to existing semantics.

Sigh. This may be exactly the reason why I am deeply sceptical that backporting ES6 features to classic mode is a wise move. Now we don't just have to stay backwards compatible with broken ES5 features, we even have to stay backwards compatible with broken ES5 non-features? Even in strict mode?

Harmony was supposed to be based on strict mode for a reason.

(We could define incompatible semantics for strict-mode only, but this would likely lead to a confusing divergence in semantics for users – it could be horribly confusing if const variables in non-strict code were to be implemented as data properties whilst those in strict code were defined as accessor properties).

alert('value' in Object.getOwnPropertyDescriptor(this, 'x'));
alert(Object.getOwnPropertyDescriptor(this, 'x').writable === true);
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 1;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
const x = 2;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 3;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);

This script tests the presence and writability of a property on the global object, and the effect of attempting to assign to it in non-strict code.  It demonstrates a non-const var behaving as a regular non-writable property in non-strict code.

On FireFox the above script outputs "true, false, undefined, undefined, 2, 2", which to my mind makes a lot of sense. It perfectly matches, as far as one can test, any other regular non-writable property that you could create on an object (of course there is not normally the opportunity to split creation and initialization of a non-writable property).

I disagree completely. This is the current V8 semantics as well, but it does not make sense.

You observe that the value of a const is changing in the middle of your program. That fundamentally violates const semantics, and is incompatible with the idea of guards on let as well.

Furthermore, the property descriptor tells you that the property is a writable data property, but you cannot write it (e.g. replacing "x = 1" with "this.x = 1"). That violates the object model.

In other words, the current behaviour breaks both scoping and objects semantics.

This behaviour is also sensibly consistent with that for let and var.  Running the above script, changing 'const' to 'var' outputs "true, true, undefined, 1, 2, 3" (indicating the property is writable, and all assignments succeed), and for 'let' currently also outputs "true, true, undefined, 1, 2, 3" on FireFox.  To my mind this behaviour should probably change slightly.  With a temporal dead zone implemented I would expect an attempt to [[DefineOwnProperty]] to an uninitialized value to be rejected (though I would not expect an exception to be thrown from non-strict code) so I would expect the output for 'let' to be "true, true, undefined, undefined, 2, 3" (the assignment of 'a' silently fails leaving the value unchanged).

More importantly, an attempt to [[Get]] an uninitialized property also has to be rejected. Both require proliferating the entire notion of being uninitialized into the object model. To be coherent, you'd also have to be able to reflect on it through property descriptors, make it available to proxies, and so on. I see no good reason why we should go there.
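A tiny example of the consequence being pointed out here (hypothetical, assuming the data-property-with-hidden-state model under discussion):

// Under a data-property model with hidden "uninitialized" state, reads through
// the global object would have to be censored too, not just writes:
print(this.x);  // must not yield a value -- x is declared below but not yet initialized
const x = 1;
print(this.x);  // 1, once the declaration has initialized the binding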

I would suggest that specifying this as a piece of additional hidden internal state on a data descriptor (rejecting attempts to define or access the value prior to initialization, throwing in strict-mode & silently ignoring in non-strict)

Hidden internal state? That seems to undermine the very idea of having property descriptors to reflect the state of a property.

makes most sense on a number of grounds:

– compatibility with existing non-strict const implementations.

Yeah, I think there is no reasonable way we can achieve that. Existing const implementations aren't even compatible with each other. Const was ruled out in strict mode for a reason.

– consistency with appearance of var properties on the global object.

Var is not lexically scoped, and has a very different semantics from let. Breaking "consistency" with var is the whole point of lexical scoping.

– consistency with appearance of regular non-writable properties on other objects.  – consistency with behaviour of access to regular non-writable properties on other objects.

It's not consistent, otherwise you couldn't observe writable === true, and wouldn't need "hidden internal state".

– data properties typically have higher performance in implementations, specifying all global let & const properties to be accessors may introduce unnecessary overhead.

Actually, as a data point, performance of the toplevel is currently poor in V8, because every variable is reflected simply as a data property directly, and hence has to be read and written through immediately. The most effective way to optimize that is implementing the global object using internal accessors (that look like data properties). So in terms of efficient implementation of the toplevel, both approaches are equivalent, and as an implementer, I don't care much either way.

Also, note that for ES6 modules, we will have to optimize these kinds of accessors anyway.

One thing I noticed with your comments is that they consider the global scope almost exclusively from the object perspective, i.e., as special syntax for putting properties on the global object. But first and foremost, it is a scope, so coherent scoping semantics is at least as important. If it is incoherent with other scopes, that is confusing, and a serious refactoring hazard. With static scoping,

# Andreas Rossberg (13 years ago)

On 23 January 2012 14:54, Andreas Rossberg <rossberg at google.com> wrote:

One thing I noticed with your comments is that they consider the global scope almost exclusively from the object perspective, i.e., as special syntax for putting properties on the global object. But first and foremost, it is a scope, so coherent scoping semantics is at least as important. If it is incoherent with other scopes, that is confusing, and a serious refactoring hazard. With static scoping,

Oops, vicious Send button... :)

... With static scoping and modules, we have to be more careful about the scope aspect of the toplevel.

# Sam Tobin-Hochstadt (13 years ago)

On Mon, Jan 23, 2012 at 8:54 AM, Andreas Rossberg <rossberg at google.com> wrote:

One thing I noticed with your comments is that they consider the global scope almost exclusively from the object perspective, i.e., as special syntax for putting properties on the global object. But first and foremost, it is a scope, so coherent scoping semantics is at least as important. If it is incoherent with other scopes, that is confusing, and a serious refactoring hazard.

One point about this that came up at the face-to-face meeting:

Putting properties on the global object is an important use case. We don't want people to need to reach for |var| when they want to do this (see t-shirt discussion). Therefore, we may well need to compromise on some cleanups we might otherwise make to the top level, because of these two constraints.

# Andreas Rossberg (13 years ago)

On 23 January 2012 15:58, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

Putting properties on the global object is an important use case. We don't want people to need to reach for |var| when they want to do this (see t-shirt discussion). Therefore, we may well need to compromise on some cleanups we might otherwise make to the top level, because of these two constraints.

Granted. The proposal is an attempt to address this use case without violating other goals.

# Brendan Eich (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 23, 2012 5:54 AM On 22 January 2012 23:46, Gavin Barraclough<barraclough at apple.com> wrote:

On Jan 20, 2012, at 4:23 PM, Allen Wirfs-Brock wrote:

I'm not yet convinced that you need to publicly expose such global object properties as accessor properties. I think it is possible to hide the mechanisms. I'm strongly inclined to agree with this sentiment – engine implementors might choose internally to utilize accessors in their implementations, but from a user perspective it seems to make much more sense that these remain as regular data properties. It is worth bearing in mind that 'const' is already available outside of strict mode and we need to be considerate of any breaking changes we make to existing semantics.

Sigh. This may be exactly the reason why I am deeply sceptical that backporting ES6 features to classic mode is a wise move. Now we don't just have to stay backwards compatible with broken ES5 features, we even have to stay backwards compatible with broken ES5 non-features? Even in strict mode?

Harmony was supposed to be based on strict mode for a reason.

Hang on -- I think Gavin may have stated things in a way that set the cat among pigeons unnecessarily. We agreed (at least Gavin and I) at the f2f to try to bring our const and function-in-block implementations in line with non-strict ES6 in the version-free opt-in model toward which we're striving.

In this light I do not think we should preemptively concede anything to breaking changes of existing implementation semantics.

OTOH the rest of Gavin's mail invoked existing implementations (Firefox in particular) more for intuitive non-strict appeal than as binding precedent, if I read Gavin right.

On how to implement the suggested semantics, I do see disagreement. Indeed I hope we do not need to introduce not-yet-defined as a value state in the object model. But the user-facing non-strict semantics should be separable.

(We could define incompatible semantics for strict-mode only, but this would likely lead to a confusing divergence in semantics for users – it could be horribly confusing if const variables in non-strict code were to be implemented as data properties whilst those in strict code were defined as accessor properties).

alert('value' in Object.getOwnPropertyDescriptor(this, 'x'));
alert(Object.getOwnPropertyDescriptor(this, 'x').writable === true);
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 1;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
const x = 2;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);
x = 3;
alert(Object.getOwnPropertyDescriptor(this, 'x').value);

This script tests the presence and writability of a property on the global object, and the effect of attempting to assign to it in non-strict code. It demonstrates a non-const var behaving as a regular non-writable property in non-strict code.

On FireFox the above script outputs "true, false, undefined, undefined, 2, 2", which to my mind makes a lot of sense. It perfectly matches, as far as one can test, any other regular non-writable property that you could create on an object (of course there is not normally the opportunity to split creation and initialization of a non-writable property).

I disagree completely. This is the current V8 semantics as well, but it does not make sense.

You observe that the value of a const is changing in the middle of your program. That fundamentally violates const semantics, and is incompatible with the idea of guards on let as well.

Yes, we agree to all that for strict code (ES5 or above), but this is non-strict code. In non-strict code, implementations have already violated const semantics, and we have not added guards yet.

Furthermore, the property descriptor tells you that the property is a writable data property,

No -- false is the second value alerted.

but you cannot write it (e.g. replacing "x = 1" with "this.x = 1"). That violates the object model.

In other words, the current behaviour breaks both scoping and objects semantics.

Non-strict or really pre-strict implementations broke scoping semantics -- no temporal dead zone. Whether we can retroactively force the new semantics on any JS that worked with the old scoping semantics remains to be seen.

Object semantics, I think you misread the alert output. Or did I?

This behaviour is also sensibly consistent with that for let and var. Running the above script, changing 'const' to 'var' outputs "true, true, undefined, 1, 2, 3" (indicating the property is writable, and all assignments succeed), and for 'let' currently also outputs "true, true, undefined, 1, 2, 3" on FireFox. To my mind this behaviour should probably change slightly. With a temporal dead zone implemented I would expect an attempt to [[DefineOwnProperty]] to an uninitialized value to be rejected (though I would not expect an exception to be thrown from non-strict code) so I would expect the output for 'let' to be "true, true, undefined, undefined, 2, 3" (the assignment of 'a' silently fails leaving the value unchanged).

More importantly, an attempt to [[Get]] an uninitialized property also has to be rejected. Both requires proliferating the entire notion of being uninitialized into the object model. To be coherent, you'd also have to be able to reflect on it through property descriptors, make it available to proxies, and so on. I see no good reason why we should go there.

Agreed. And for non-strict code if we choose to keep the non-throwing behavior, we do not have to. We do however therefore have to allow a const to be read before initialized, with value undefined -- but only for non-strict code.

This is future-hostile to guards, or really, we kick the can down the road a bit and evangelize strict mode. If and when we add guards, we can either break scoping semantics backward compatibility, or require strict opt in.

# Andreas Rossberg (13 years ago)

On 23 January 2012 18:39, Brendan Eich <brendan at mozilla.org> wrote:

Andreas Rossberg <mailto:rossberg at google.com>

In other words, the current behaviour breaks both scoping and objects semantics.

Non-strict or really pre-strict implementations broke scoping semantics -- no temporal dead zone. Whether we can retroactively force the new semantics on any JS that worked with the old scoping semantics remains to be seen.

Object semantics, I think you misread the alert output. Or did I?

Big oops, you are right. How embarrassing! I apologize to Gavin for misrepresenting!

The scoping issue remains, though. Existing implementations are seriously broken there. For example, for some compatibility reasons, V8 currently allows

var w = 1; w = 2; print(w); const w = 3

which will output 2. The idea most likely was that const should behave like var. This, and other similar examples, clearly have to break should const become official in classic mode, so the compatibility argument may not carry far.

This is future-hostile to guards, or really, we kick the can down the road a bit and evangelize strict mode. If and when we add guards, we can either break scoping semantics backward compatibility, or require strict opt in.

Doesn't sound pleasant to me... :(

# Brendan Eich (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 23, 2012 10:15 AM On 23 January 2012 18:39, Brendan Eich<brendan at mozilla.org> wrote:

Andreas Rossberg<mailto:rossberg at google.com>

In other words, the current behaviour breaks both scoping and objects semantics. Non-strict or really pre-strict implementations broke scoping semantics -- no temporal dead zone. Whether we can retroactively force the new semantics on any JS that worked with the old scoping semantics remains to be seen.

Object semantics, I think you misread the alert output. Or did I?

Big oops, you are right. How embarrassing! I apologize to Gavin for misrepresenting!

The scoping issue remains, though. Existing implementations are seriously broken there. For example, for some compatibility reasons, V8 currently allows

var w = 1; w = 2; print(w); const w = 3

which will output 2. The idea most likely was that const should behave like var. This, and other similar examples, clearly have to break should const become official in classic mode, so the compatibility argument may not carry far.

SpiderMonkey:

js> var w = 1; w = 2; print(w); const w = 3

typein:19: TypeError: redeclaration of var w:
typein:19: var w = 1; w = 2; print(w); const w = 3
typein:19: ....................................^

js> var w = 1

js> const w = 3

typein:22: TypeError: redeclaration of var w

Why does V8 do something else, do you know the impetus for diverging?

This is future-hostile to guards, or really, we kick the can down the road a bit and evangelize strict mode. If and when we add guards, we can either break scoping semantics backward compatibility, or require strict opt in.

Doesn't sound pleasant to me... :(

Politicians do kick the can all the time. Oh wait.

We can pay the piper now. I will try to get non-strict const in SpiderMonkey into alignment, see what breaks. May be hard to find the thing we can't afford to break quickly, of course.

# Andreas Rossberg (13 years ago)

On 23 January 2012 19:25, Brendan Eich <brendan at mozilla.org> wrote:

Andreas Rossberg<mailto:rossberg at google.com>:

V8 currently allows

var w = 1; w = 2; print(w); const w = 3

which will output 2. The idea most likely was that const should behave like var. This, and other similar examples, clearly have to break should const become official in classic mode, so the compatibility argument may not carry far.

SpiderMonkey:

js> var w = 1; w = 2; print(w); const w = 3
typein:19: TypeError: redeclaration of var w:
typein:19: var w = 1; w = 2; print(w); const w = 3
typein:19: ....................................^

js> var w = 1
js> const w = 3
typein:22: TypeError: redeclaration of var w

However:

js> const w = 3; var w = 1; print(w);

3

Why does V8 do something else, do you know the impetus for diverging?

No, unfortunately I don't know the full history there. I can try to find it out.

But just to be obnoxious, with Firefox:

print(c);
Object.defineProperty(this, 'c', {value: 1});
print(c);
const c = 2;
print(c);

=> undefined 1 2

Or even:

print(c);
Object.defineProperty(this, 'c', {value: 1, writable: true});
print(c);
c = 2;
print(c);
const c = 3;
print(c);

=> undefined 1 2 3

Apparently, FF avoids breaking the object model only for the price of keeping the non-writable const property configurable until the actual initialization -- effectively breaking const completely.

If the property wasn't configurable, then the initialization would need to modify a non-writable, non-configurable data property, which is a clear violation of the JS object model. It is what V8 does, though. So in V8, the above examples are rejected, but instead, objects break.

Consequently, I maintain my claim that it is impossible to reconcile sane let/const semantics with the idea of having toplevel bindings represented as data properties on the global object. If there is some trick for "hiding" accessor semantics that doesn't break the object model then I'd be really interested in seeing it. :)

# Brendan Eich (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 24, 2012 2:37 AM On 23 January 2012 19:25, Brendan Eich<brendan at mozilla.org> wrote:

Andreas Rossberg<mailto:rossberg at google.com>:

V8 currently allows

var w = 1; w = 2; print(w); const w = 3

which will output 2. The idea most likely was that const should behave like var. This, and other similar examples, clearly have to break should const become official in classic mode, so the compatibility argument may not carry far. SpiderMonkey:

js> var w = 1; w = 2; print(w); const w = 3
typein:19: TypeError: redeclaration of var w:
typein:19: var w = 1; w = 2; print(w); const w = 3
typein:19: ....................................^

js> var w = 1
js> const w = 3
typein:22: TypeError: redeclaration of var w

However:

js> const w = 3; var w = 1; print(w);
3

This follows from var not disturbing a pre-existing binding. Or so we thought long ago when adding const, but that was years before const as initialize-only-and-at-most-once let.

Why does V8 do something else, do you know the impetus for diverging?

No, unfortunately I don't know the full history there. I can try to find it out.

But just to be obnoxious, with Firefox:

print(c);
Object.defineProperty(this, 'c', {value: 1});
print(c);
const c = 2;
print(c);

=> undefined 1 2

You bet -- const does redefine the global property, unlike var. This is not something we propose for standardization, of course!

Or even:

print(c);
Object.defineProperty(this, 'c', {value: 1, writable: true});
print(c);
c = 2;
print(c);
const c = 3;
print(c);

=> undefined 1 2 3

Apparently, FF avoids breaking the object model only for the price of keeping the non-writable const property configurable until the actual initialization -- effectively breaking const completely.

More fun:

js> function f(k) { for (var i = 0; i < k; i++) { const j = i*i; a.push(j); } }
js> a = []
[]
js> f(10)
js> a
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

SpiderMonkey const has never been initialize-at-most-once. Some people claim to like this, but we are going to try breaking it in favor of ES6.

Consequently, I maintain my claim that it is impossible to reconcile sane let/const semantics with the idea of having toplevel bindings represented as data properties on the global object.

I agree. For ES4 we equated let to var at top-level, a mistake. For ES6 IIRC you have proposed an implicit block scope at top level (of functions at least, I don't see why not programs as well), whereby let and const bind lexically. Only var and function make global object properties.

If there is some trick for "hiding" accessor semantics that doesn't break the object model then I'd be really interested in seeing it. :)

Me too.

# Brendan Eich (13 years ago)

Brendan Eich <mailto:brendan at mozilla.org> January 24, 2012 9:49 AM More fun:

js> function f(k) { for (var i = 0; i < k; i++) { const j = i*i; a.push(j); } }
js> a = []
[]
js> f(10)
js> a
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Better example:

js> function f(k) { print(j); for (var i=0; i<k; i++) { const j = i*i; a.push(j); b.push(function(){return j}); } }
js> a = [], b = []
[]
js> f(10)
undefined
js> a
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
js> b[0]()
81

Again, none of this is for standardization. Quirky as heck, but we hope sane users treat SpiderMonkey const like top level ES6 const and won't be broken by an incompatible change to conform.

# Andreas Rossberg (13 years ago)

On 24 January 2012 18:49, Brendan Eich <brendan at mozilla.org> wrote:

Andreas Rossberg <mailto:rossberg at google.com>

Consequently, I maintain my claim that it is impossible to reconcile sane let/const semantics with the idea of having toplevel bindings represented as data properties on the global object.

I agree. For ES4 we equated let to var at top-level, a mistake. For ES6 IIRC you have proposed an implicit block scope at top level (of functions at least, I don't see why not programs as well), whereby let and const bind lexically. Only var and function make global object properties.

Taken literally, an implicit block scope at toplevel would imply that let/const-bound properties won't show up on the global object at all. That seems to devalue the global object and/or let.

(Part of) what I proposed in the OP of this thread is that for the toplevel, we can still safely reflect let/const bindings on the global object, but through accessor properties -- just like with module instance objects. Solves all the problems we just discussed, while also keeping let useful as the new var.

Var would keep its current semantics, aliasing a data property. For functions, it doesn't matter much either way, but for consistency with other static bindings I'd move to accessor-based semantics in ES6.
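Pulling these pieces together, a compact summary of the reflection being proposed here (illustrative only, not spec text):

// How top-level declarations would show up on the global object:
//   var v = 1;        // own data property, as today: { value: 1, writable: true, ... }
//   let l = 2;        // accessor delegating to the lexical binding (getter and setter)
//   const c = 3;      // accessor with a getter only (no setter)
//   function f() {}   // accessor-based as well, per the last paragraph above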

# Brendan Eich (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 24, 2012 10:32 AM

Taken literally, an implicit block scope at toplevel would imply that let/const-bound properties won't show up on the global object at all. That seems to devalue the global object and/or let.

I prefer to think of it as saving 'let' from the degradations of the global object ;-).

I am skeptical we will banish var, ever. Consider the object detection pattern:

if (!this.foo) {
  var foo = function () {
    ...
  };
}

No way to make this work with let.

(Part of) what I proposed in the OP of this thread is that for the toplevel, we can still safely reflect let/const bindings on the global object, but through accessor properties -- just like with module instance objects. Solves all the problems we just discussed, while also keeping let useful as the new var.

I saw, clever approach -- haven't had time to evaluate it for compatibility. Seems optimizable.

# Andreas Rossberg (13 years ago)

On 24 January 2012 20:24, Brendan Eich <brendan at mozilla.org> wrote:

I am skeptical we will banish var, ever. Consider the object detection pattern:

if (!this.foo) {
  var foo = function () {
    ...
  };
}

No way to make this work with let.

Actually, it is possible if this is a check for a primordial, and you move them to the prototype of global this and allow shadowing them. Then you'd be able to write:

let foo = Object.getPrototypeOf(this).foo || function() { ... }

(I think we had something like this on the board once.) Or, in the browser, if window was actually bound to that prototype, this simplifies to:

let foo = window.foo || function() { ... }

I don't think compatibility can afford binding window to something different than global this, though. (One could still pre-bind the prototype to a different name, however.)

Of course, I sincerely hope that the module system will provide a more pleasant replacement for this pattern. :)

# Brendan Eich (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 24, 2012 12:22 PM

Actually, it is possible if this is a check for a primordial,

This isn't about built-ins. But also I don't believe we should move the core built-ins to a prototype of the global object. That's an observable change since ES3. Maybe not breaking, who knows?

and you move them to the prototype of global this and allow shadowing them. Then you'd be able to write:

let foo = Object.getPrototypeOf(this).foo || function() { ... }

(I think we had something like this on the board once.) Or, in the browser, if window was actually bound to that prototype,

window === this -- it must be the WindowProxy for the Window object that is the global scope.

this simplifies to:

let foo = window.foo || function() { ... }

This will not work if the detection is coping with prior scripts that bound let bindings, since we agreed that you cannot redeclare a given identifier using let.

If each script's lexical scope is isolated, then of course there's no problem in "redeclaring", but detection breaks. Detection is based on objects, not bindings, in this kind of code (there are other detection patterns).

I don't think compatibility can afford binding window to something different than global this, though.

Whew! I was wondering.

(One could still pre-bind the prototype to a different name, however.)

(Do not want.)

Of course, I sincerely hope that the module system will provide a more pleasant replacement for this pattern. :)

But what? If it's let-based I'd like to know how it could work.

# liorean (13 years ago)

Andreas Rossberg <mailto:rossberg at google.com> January 24, 2012 12:22 PM Of course, I sincerely hope that the module system will provide a more pleasant replacement for this pattern. :)

On 24 January 2012 21:59, Brendan Eich <brendan at mozilla.org> wrote:

But what? If it's let-based I'd like to know how it could work.

Since let is new syntax anyway, why not add a syntactical way of doing the definition conditionally?

let varname if absent = value;