names [Was: Approach of new Object methods in ES5]

# P T Withington (15 years ago)

On 2010-04-16, at 13:07, Brendan Eich wrote:

Another Harmony idea: strawman:names for unforgeable property names not equated to any string. These cannot collide, and with sugar to let them be used with . (not only in computed property accesses using []), we may have a complete solution for injecting new "names" into standard prototypes without breaking existing code.

Comments welcome on the names proposal. There are open issues at the bottom, and the "private" keyword syntax is straw for sure, although we don't have a better proposal AFAIK.

Name sounds like a stripped-down uninterned symbol (bit.ly/bY3Jkg) to me.

It's an object with a magic attribute that says that, unlike any other object you might try to use as a property name, it is not coerced into a string first. And it is compared by identity when looked up. And it is invisible to (all?) enumerations of property names.

I have to wonder if it would be a worthwhile generalization to be able to confer these magical attributes on arbitrary objects? This might allow more experimentation with namespace ideas.

# David Herman (15 years ago)

Name sounds like a stripped-down uninterned symbol (bit.ly/bY3Jkg) to me.

Yup.

It's an object with a magic attribute that says that, unlike any other object you might try to use as a property name, it is not coerced into a string first. And it is compared by identity when looked up. And it is invisible to (all?) enumerations of property names.

Yup again. Basically it entails a slight generalization of the property lookup semantics; instead of ToString there would be a ToPropertyName meta-operation, which for existing ES objects would just delegate to ToString, but for the new class of things would be the identity.
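A minimal sketch of the shape of that meta-operation, treating the proposed Name class as a straw host type (illustration, not spec text):

// ToPropertyName generalizes today's implicit ToString on property keys.
function ToPropertyName(key) {
    if (key instanceof Name) return key; // identity: no coercion at all
    return String(key);                  // existing ToString behavior
}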

I have to wonder if it would be a worthwhile generalization to be able to confer these magical attributes on arbitrary objects? This might allow more experimentation with namespace ideas.

This is worth exploring. I'd worry about unintended consequences of an attribute that can be turned on/off at will. But even if it were fairly restricted -- e.g., you could turn it on but you couldn't turn it back off again -- it might be more powerful.

Tucker: if the "property-nameness" attribute weren't transferrable but names were objects with property tables, do you think that would be powerful enough? Or would you want the ability to define custom constructors, e.g.:

function MyCustomKindOfNamespace() {
    Object.becomePropertyName(this);
    // ...
}

Dave

PS Still, I have my doubts about using any such mechanisms for versioning. Incidentally, ROC was just talking about versioning and metadata on the web:

http://weblogs.mozillazine.org/roc/images/APIDesignForTheMasses.pdf

He wasn't talking about JS API design, but some of the lessons still apply.

# P T Withington (15 years ago)

On 2010-04-16, at 14:31, David Herman wrote:

Tucker: if the "property-nameness" attribute weren't transferrable but names were objects with property tables, do you think that would be powerful enough? Or would you want the ability to define custom constructors, e.g.:

function MyCustomKindOfNamespace() {
    Object.becomePropertyName(this);
    // ...
}

I was thinking that for exploratory purposes, you might want a custom constructor so you could, say, enumerate all the names in your custom namespace, or test for a name being in your namespace. But I could do that with just properties by (something like):

private customnames = [];
private custom;

function CustomName (pretty) {
    let name = new Name;
    name[custom] = pretty;
    name.toString = function () { return "custom::" + this[custom]; };
    customnames.push(name);
    return name;
}

function isCustom(name) { return name.hasOwnProperty(custom); }

etc.

A conundrum is that you don't want private names to be revealed by property enumeration in general, but IWBN if they could be enumerated by someone with access to the namespace. Using the idea of names having properties themselves, if there were a way to say "enumerate the properties that have some property", you could use a private property as a capability to get at private names. I'm sure we can have hours of fun suggesting how to extend the for syntax to handle that. :)
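Short of new for-in syntax, the registry in the sketch above already gives a capability-gated enumeration; a sketch, where namesWith is a hypothetical helper and filter assumes the ES5 array extras:

function namesWith(capability) {
    return customnames.filter(function (name) {
        return name.hasOwnProperty(capability);
    });
}

namesWith(custom); // "my" names; without the custom name, entries stay opaque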

# Peter van der Zee (15 years ago)

On Fri, Apr 16, 2010 at 9:48 PM, P T Withington <ptw at pobox.com> wrote:

On 2010-04-16, at 14:31, David Herman wrote:

Tucker: if the "property-nameness" attribute weren't transferrable but names were objects with property tables, do you think that would be powerful enough? Or would you want the ability to define custom constructors, e.g.:

function MyCustomKindOfNamespace() {
    Object.becomePropertyName(this);
    // ...
}

I was thinking that for exploratory purposes, you might want a custom constructor so you could, say, enumerate all the names in your custom namespace, or test for a name being in your namespace. But I could do that with just properties by (something like):

private customnames = [];
private custom;

function CustomName (pretty) {
    let name = new Name;
    name[custom] = pretty;
    name.toString = function () { return "custom::" + this[custom]; };
    customnames.push(name);
    return name;
}

function isCustom(name) { return name.hasOwnProperty(custom); }

etc.

A conundrum is that you don't want private names to be revealed by property enumeration in general, but IWBN if they could be enumerated by someone with access to the namespace. Using the idea of names having properties themselves, if there were a way to say "enumerate the properties that have some property", you could use a private property as a capability to get at private names. I'm sure we can have hours of fun suggesting how to extend the for syntax to handle that. :)

The straw seems awkward and unnecessarily complex compared to the .toPropertyName suggestion, which could just return true or false to determine whether the identity or .toString should be used as the property reference -- including for the proposed hidden identity-referenced properties. I wouldn't "sacrifice" the private keyword for this.

# Dmitry A. Soshnikov (15 years ago)

Hello David,

Friday, April 16, 2010, 10:31:07 PM, you wrote:

Name sounds like a stripped-down uninterned symbol (bit.ly/bY3Jkg) to me.

Yup.

It's an object with a magic attribute that says that, unlike any other object you might try to use as a property name, it is not coerced into a string first. And it is compared by identity when looked up. And it is invisible to (all?) enumerations of property names.

Yup again. Basically it entails a slight generalization of the property lookup semantics; instead of ToString there would be a ToPropertyName meta-operation, which for existing ES objects would just delegate to ToString, but for the new class of things would be the identity.

Still, a naming convention (the thing which ES doesn't use -- and in vain) is a good and elegant approach. That is what Python (or even Ruby) uses.

JS already borrowed a lot from Python. So, its naming convention of _protected and __private is a good approach. Although maybe not in exactly that form (with the leading underscore), but nevertheless.

As Brendan already mentioned, unfortunately, it's too late to talk about naming conventions for this purpose because of backward compatibility. So the leading underscore doesn't fit.

But, there are many other interesting and elegant symbols.

Ruby e.g. uses a leading colon for symbols -- :symbol. But for JS it would look a bit odd, because JS already uses the colon to separate a property's value from its name:

var foo = {x: 10, :y: 20} // although...

where :y is your "private" "Name" symbol.

Choose any:

{x: 10, _y: 20}, // habitual

{x: 10, -y: 20}, // interesting

{x: 10, !y: 20} // be careful, it's !private

(although, in Ruby exclamation mark is used for "dangerous" methods, which modifies the argument instead of returning the new one)

{x: 10, .y: 20} // also interesting, like a "hidden file"

{x: 10, ~y: 20} // a bit ugly, but acceptable

{x: 10, <y>: 20} // too many different braces, doubtful.

and so on, there are many interesting naming conventions (which are not yet "borrowed" by backward compatibility).

It may look a bit odd to e.g. Java programmers who are used to writing those long, long lines, but -- it's just a variant.

Although the variant with the leading dot is interesting, it will look odd when used with the corresponding property accessor:

let obj = {
    .foo: 42,
    getFoo: function () this..foo // ? too odd
};

(Although we already have something similar: 1..toString() -- "1" -- where the first dot is parsed as the number's decimal point.)

How would a naming convention help? -- at any position in the code we could say precisely what access level a property has. We would not need special keywords.

But that's just an abstract suggestion, which at the current step (and moment) is incompatible with the JS nature.

Meanwhile, the "classical" approach for access levels is also elegant.

Leaving naming conventions aside (if you will), it is also possible to use the "private" keyword in this form:

Declarative:

let obj = {

    private:
        foo: 42,
        bar: 41

    public:
        baz: 43,
        getFoo: lambda() this.baz + this.bar;

};

obj.foo; // undefined
obj.bar; // undefined

obj.baz; // 43
obj.getFoo(); // 84

obj.bar = 44; // Error, "bar" is private ?

where "private:" keyword with the colon will fill "obj" with "Name" symbols followed by it; "public:" then is just to separate the section. Actually, this quit standard approach. Of course, lexer/parser rules should be changed and "private" and "protected" then cannot be used as normal property names (which ES5 seems allows, e.g. this.super()).

Special "Name" constructor for this aim seems superfluous. There should be the way to create them fast and elegant (and with minumum, but still elegant of code). Although, name Symbol" for this constructor is acceptable.

Imperative:

obj[private: "x" + dynamicName] = 100;

obj.private: x = 100;

obj[new Symbol("x")] = 100; obj[Symbol("x")] = 100;

// or again a naming convention
obj[.x] = 100;

trusted(obj, .x);
trusted(obj, private:x);
trusted(obj, Symbol("x")); // odd

// with an intermediate action
let x = new Symbol;
trusted(obj, x);

Also -- just a variant of a suggestion. But it shouldn't be too complex, requiring several intermediate actions.

P.S.: by the way, having access to the activation object in Rhino, it is possible to manage objects, increasing encapsulation and providing higher abstraction -- and again -- using the leading-underscore convention: gist.github.com/363056 So from this viewpoint it is also a pity that the spec officially doesn't allow access to the AO (activation object) -- it could make for good metaprogramming solutions, like locals() or vars() in Python. E.g. in that example I can use only function declaration forms and omit the "this." prefix each time.

But JS of course isn't Python; although it's very similar, JS has its own ideology.

Dmitry.

# David Herman (15 years ago)

Still, a naming convention (the thing which ES doesn't use -- and in vain) is a good and elegant approach. That is what Python (or even Ruby) uses.

Brendan already described to you some of the problems with name-mangling techniques, and we could argue over whether naming conventions are indeed "elegant" programming techniques. But they are useless as language features for information hiding. If people want to use naming conventions to suggest privacy they can already do so today, with no additional language extensions.

var foo = {x: 10, :y: 20} // although...

where :y - is your "private" "Name" symbol.

Choose any: ...

and so on, there are many interesting naming conventions (which are not yet "borrowed" by backward compatibility).

Any of these would merely suggest privacy, without enforcing it. This comes with no guarantees whatsoever. The purpose of the private names proposal is to provide a mechanism for creating guaranteed private object properties. If the creator of the object does not share the name value, then no one else can get to the property. What you have suggested is just a convention.
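A minimal sketch of that guarantee, assuming the strawman's Name constructor:

function makeCounter() {
    var count = new Name(); // fresh, unforgeable key, never shared
    var counter = {};
    counter.increment = function () {
        counter[count] = (counter[count] || 0) + 1;
    };
    counter.value = function () {
        return counter[count] || 0;
    };
    return counter;
}

// Callers holding only the returned object cannot reach counter[count]:
// the string "count" names a different, ordinary property, and only the
// two closures above ever see the Name value.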

How would a naming convention help? -- at any position in the code we could say precisely what access level a property has.

No, you can't. All you've done is change the syntax. There's no behavioral difference that ensures that one part of the code can access the "private" data and another can't.

Leaving naming conventions (if you will) it is also possible to use this "private" keyword in this form:

The private names proposal already includes a declarative syntax for creating private names.

http://wiki.ecmascript.org/doku.php?id=strawman:names

let obj = {

    private:
        foo: 42,
        bar: 41

    public:
        baz: 43,
        getFoo: lambda() this.baz + this.bar;

};

...

obj.bar = 44; // Error, "bar" is private ?

If "private" is an attribute of the slot but the name is still public, then you have not entirely hidden your data representation. You still end up with name contention (no one can use the property name "bar" now) and you don't get full information hiding (clients can see that you have a property called "bar" and change their behavior because of it).

You also haven't said anything about what parts of the code do have access to the private properties and what parts of the code don't. If you don't see why this is important, then trust me -- it's tricky in a language where objects and functions can be arbitrarily lexically nested and functions can be dynamically added and removed from objects.

These issues are addressed by the names proposal -- please do take a look.

Thanks,

# Brendan Eich (15 years ago)

On Apr 16, 2010, at 2:31 PM, David Herman wrote:

function MyCustomKindOfNamespace() {
    Object.becomePropertyName(this);
    // ...
}

If arbitrary objects can be property names, are these objects weakly
referenced? Or is the property an Ephemeron (key-value pair where the
value is strong so long as the key is alive via other references,
otherwise the value is weak and the property is liable to be removed
along with the value being GC'ed)? This matters to avoid leaks and
bloat involving cycles.

A Name object that is required to be a leaf in the live object graph
has the advantage that it can be strongly referenced by the
implementation when used as a property name (key), without reference
cycles being possible. Implementations would be able to count on this
property. We could choose to specify Names this way, and the current
spec seems to lean this way.

Mark points out that if you treat Name objects as EphemeronTables,
then for a given name object n, o[n] is n.get(o) (EphemeronTables
can't be enumerated). I've argued the spec should not dictate this
implementation, since other plausible ones exist that have different
trade-offs (Name "objects" could be UUIDs, e.g.). Also, we don't want
to couple proposals at this point if there is no semantic win in doing
so.
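To sketch Mark's observation, with EphemeronTable standing in as a
hypothetical type (essentially the weak-keyed table a Name could be
specified as):

var n = new EphemeronTable(); // plays the role of a private name

function getProp(o)    { return n.get(o); } // what o[n] would mean
function setProp(o, v) { n.set(o, v); }     // what o[n] = v would mean

// The table cannot be enumerated, so the "property" is invisible to
// for-in, and each value lives only as long as both o and n are live.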

There's an open question of what (typeof n) should be.

PS Still, I have my doubts about using any such mechanisms for
versioning.

The topic is not versioning in full, rather hiding properties added to
built-in prototypes.

# David Herman (15 years ago)

A Name object that is required to be a leaf in the live object graph has the advantage that it can be strongly referenced by the implementation when used as a property name (key), without reference cycles being possible. Implementations would be able to count on this property. We could choose to specify Names this way, and the current spec seems to lean this way.

Good points. Correct me if I'm wrong, but it also seems an implementation could be free to GC object slots for unreachable leaf-name-keys. IOW, being a property-key does not imply reachability. (For strings this is obviously not true, since they are forgeable.)

Also, we don't want to couple proposals at this point if there is no semantic win in doing so.

PS Still, I have my doubts about using any such mechanisms for versioning.

The topic is not versioning in full, rather hiding properties added to built-in prototypes.

I had the impression Tucker was thinking about versioning, but I may have imagined it. I guess I'm not clear on what desiderata the names proposal as-is doesn't address. Tucker mentions enumeration, but I'm not sure how important that is. It doesn't seem like a common need, but might be an interesting reflective operation.

# Dmitry A. Soshnikov (15 years ago)

Hello David,

Saturday, April 17, 2010, 2:01:09 AM, you wrote:

var foo = {x: 10, :y: 20} // although...

where :y - is your "private" "Name" symbol.

Choose any: ...

and so on, there are many interesting naming conventions (which are not yet "borrowed" by backward compatibility).

Any of these would merely suggest privacy, without enforcing it. This comes with no guarantees whatsoever. The purpose of the private names proposal is to provide a mechanism for creating guaranteed private object properties. If the creator of the object does not share the name value, then no one else can get to the property. What you have suggested is just a convention.

Well, it was just a quick suggestion (right now, in place), although as I mentioned myself, it is debatable whether it (a naming convention) fits the current ES design. All those ".", "_" and the others look odd, as I said myself.

But I meant not only a naming convention, but that by this naming convention these properties (symbols) would be hidden -- just like in Python, where "_" and "__" properties become unavailable outside by their names (although Python just renames them by the special rule "_ClassName__property_name" -- but that doesn't matter).

So, of course it is not a question of one letter, and there should be deep analysis here to provide the most applicable design.

let obj = {

    private:
        foo: 42,
        bar: 41

    public:
        baz: 43,
        getFoo: lambda() this.baz + this.bar;

};

...

obj.bar = 44; // Error, "bar" is private ?

If "private" is an attribute of the slot but the name is still public, then you have not entirely hidden your data representation. You still end up with name contention (no one can use the property name "bar" now) and you don't get full information hiding (clients can see that you have a property called "bar" and change their behavior because of it).

Then I have to see more examples. And nevertheless, encapsulation's main purpose is increasing abstraction. But you're already talking about hiding for security. Then of course a different approach is needed. I'd be glad to see more examples of your "Name"s proposal -- how one part of the code can get access to hidden data while another cannot.

You also haven't said anything about what parts of the code do have access to the private properties and what parts of the code don't. If you don't see why this is important, then trust me -- it's tricky in a language where objects and functions can be arbitrarily lexically nested and functions can be dynamically added and removed from objects.

These issues are addressed by the names proposal-- please do take a look.

Thank you, I will of course take a detailed look at it, and would be glad to see more examples of this proposal. Then maybe we can also suggest something interesting, once we completely understand how it should look and work.

Dmitry.

# Brendan Eich (15 years ago)

On Apr 17, 2010, at 12:06 AM, David Herman wrote:

A Name object that is required to be a leaf in the live object
graph has the advantage that it can be strongly referenced by the
implementation when used as a property name (key), without
reference cycles being possible. Implementations would be able to
count on this property. We could choose to specify Names this way,
and the current spec seems to lean this way.

Good points. Correct me if I'm wrong, but it also seems an implementation could be free to GC object slots for unreachable leaf-name-keys.

Yes, this would make for a superior implementation ;-). But is it the
spec's business to mandate it, or specify much at all about GC? I
think not, obviously; you may agree. Just calling out the issue of
under-specification here.

IOW, being a property-key does not imply reachability. (For strings
this is obviously not true, since they are forgeable.)

Right, which is why Mark's EphemeronTable implementation works for
Names.

# David Herman (15 years ago)

But I meant not only a naming convention, but that by this naming convention these properties (symbols) would be hidden -- just like in Python, where "_" and "__" properties become unavailable outside...

You still haven't specified what "outside" means. What does get to see a hidden name and what doesn't?

Then I have to see more examples.

  1. Publishing private names ruins abstraction

Let's say you create a library and share it with some people. Then in version 2 of your library, you introduce a new feature, which uses an internal private property called "count". One of your clients figures out that you used this private name and writes a blog post saying "hey, if you want to figure out whether the library is greater than version 1, just look for a private member variable called 'count'!"

Now you have 100,000 clients depending on the fact that you have a private property called "count." You decide for version 3 that you'd rather call it "elementCount" but you can't get rid of the private name because your customers have already relied on it.

  2. Publishing private names creates namespace pollution

Library A adds a private "count" property to some shared object.

Library B also adds a private "count" property to the same object.

They are both developed separately.

Now Client C wants to use both Library A and Library B. Let's arbitrarily say it adds A first, then B. Library B fails with an error because it tries to use the private "count" property, which it doesn't have access to because Library A already claimed it.
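Under the names proposal the collision disappears, because each library mints its own key; a minimal sketch with the straw Name constructor:

var libACount = new Name(); // minted inside Library A
var libBCount = new Name(); // minted inside Library B

var shared = {};
shared[libACount] = 1;   // Library A's "count"
shared[libBCount] = 100; // Library B's "count" -- no contention

// The keys are distinct by identity, and neither is reachable via the
// string "count", so neither library can clobber the other.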

And nevertheless, encapsulation's main purpose is increasing abstraction. But you're already talking about hiding for security.

Absolutely not. What I'm talking about is abstraction, not security. The purpose of abstraction is to support modularity, i.e., to eliminate dependencies between separate units of modularity so that their implementations are free to change. If you publish your private names, you create a point of dependency between modules and make it harder to change code. None of this is talking about security.

Of course, publishing private names is bad for security as well!

# Brendan Eich (15 years ago)

On Apr 16, 2010, at 2:31 PM, David Herman wrote:

PS Still, I have my doubts about using any such mechanisms for
versioning. Incidentally, ROC was just talking about versioning and
metadata on the web:

weblogs.mozillazine.org/roc/images/APIDesignForTheMasses.pdf

Rob's blog post: weblogs.mozillazine.org/roc/archives/2010/04/api_design_for.html

He wasn't talking about JS API design, but some of the lessons still
apply.

Old WHATWG co-conspirators like me obviously agree on the principles
roc presents, but they do not work so well in JS compared to HTML or
even CSS. Consider HTML markup:

<video ...>
    <object ...></object>
</video>

A new HTML5 video tag with an object tag as fallback, to use a plugin
to present the video for pre-HTML5 browsers. There are text-y examples
that work too, even if the degradation is not as graceful as you might
get with a plugin (plugins can lack grace too :-/).

CSS has standardized error correction from day one, although as noted
in comments on roc's blog it lacks "feature detection". But graceful
degradation seems to work as well with CSS as with HTML, if not better.

With JS, new syntax is going to make old browsers throw SyntaxErrors.
There's no SGML-ish container-tag/point-tag model on which to build
fallback handling. One could use big strings and eval, or XHR or
generated scripts to source different versions of the JS content --
but who wants to write multiple versions of JS content in the first
place.

The "find the closing brace" error correction idea founders on the
need to fully lex, which is (a) costly and (b) future-hostile.
Allowing new syntax in the main grammar only, not in the lexical
grammar, seems too restrictive even if we never extend the lexical
grammar -- we might fix important bugs or adjust the spec to match de-facto lexical standards, as we did for IE's tolerance of the /[/]/
regexp literal.

So API object detection with fallback written in JS random logic works
(for some very practical if not theoretically pretty definitions of
"works") for the non-syntactic extensions coming in Harmony, assuming
we can dodge the name collision bullets. But for new Harmony syntax,
some kind of opt-in versioning seems required.
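The kind of detection-with-fallback meant here, sketched against a real
ES5 addition (ownKeys is a hypothetical wrapper name):

function ownKeys(obj) {
    if (Object.keys) {  // uprev engines: use the new API
        return Object.keys(obj);
    }
    var props = [];     // downrev engines: emulate it
    for (var p in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, p)) props.push(p);
    }
    return props;
}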

We survived this in the old days moving from JS1.0 to JS1.2 and then
ES3. One could argue that the web was smaller then (it was still damn
big), or that Microsoft's monopolizing helped consolidate around ES3
more quickly (it did -- IE started ignoring version suffixes on
<script language=> as I noted recently).

Roc's point about fast feedback from prototype implementations to
draft standards is the principle to uphold here, not "no versioning".

Obviously we could avoid new syntax in order to avoid opt-in
versioning, but this is a bad trade-off. JS is not done evolving,
syntax is user interface, JS needs new syntax to fix usability bugs.
I'm a broken record on this point.

Secondarily, new syntax can help implementations too, both for
correctness and optimizations.

So I should stop being a broken record here, and let others talk about
opt-in versioning. It seems inevitable. We have it already in a
backward-compatible but semantically meaningful (including runtime
semantic changes!) sense in ES5 with "use strict".

Opt-in versioning is not a free ride, but it is going to a destination
we need to reach: new syntax where appropriate and needed for
usability and productivity wins.

# P T Withington (15 years ago)

On 2010-04-17, at 00:06, David Herman wrote:

PS Still, I have my doubts about using any such mechanisms for versioning.

The topic is not versioning in full, rather hiding properties added to built-in prototypes.

I had the impression Tucker was thinking about versioning, but I may have imagined it. I guess I'm not clear on what desiderata the names proposal as-is doesn't address. Tucker mentions enumeration, but I'm not sure how important that is. It doesn't seem like a common need, but might be an interesting reflective operation.

I was just thinking about ways to use private names to create distinct namespaces (sets of names). The benefit of names being "leaf nodes" would seem to outweigh being able to annotate names. Introspection could be supported by just keeping track of the names in my namespace, although it would be more convenient if there were a capability to enable for-in to iterate over "my" names -- that's what led me down the sub-type path.

# David Herman (15 years ago)

There are multiple levels of opt-in versioning:

(1) versioning of the language itself

(2) language support for versioning of libraries

I agree with what you're saying wrt (1), but wrt (2), feature detection is feasible, and I'd think more tractable than version detection.

# Brendan Eich (15 years ago)

On Apr 17, 2010, at 3:03 PM, David Herman wrote:

There are multiple levels of opt-in versioning:

(1) versioning of the language itself

(2) language support for versioning of libraries

I agree with what you're saying wrt (1), but wrt (2), feature
detection is feasible, and I'd think more tractable than version
detection.

Yes, I agree.

TC39 has discussed a "frame" (meaning DOM window, ideally web-app
wide) version selection mechanism for the built-in libraries (plural:
JS, DOM, and more -- and all libraries, too, not particularly
distinguished by being native or primordial in their specs).

No one has made a proposal to TC39 yet.

The closest thing to (2) being fielded today may be what modern IE
versions [A], and now Google Chrome Frame [B], do with the X-UA-Compatible HTTP header. David Baron of Mozilla has written cogently
about this header [C].

I'm not in favor of inventing something in Ecma that adds opt-in
versioning of the object model (2), for the reason I gave in reply to
Peter van der Zee: complete opt-in versioning including new API
visibility is too brittle over time -- it is likely to lead to over-versioned, under-tested, ultimately non-working (except for one of N
browsers) code. Object- and in general feature-detection is more
resilient and less likely to suffer version-scope-creep.

/be

[A] msdn.microsoft.com/library/cc817574.aspx
[B] www.chromium.org/developers/how-tos/chrome-frame-getting-started#TOC-Making-Your-Pages-Work-With-Google-
[C] dbaron.org/log/2008-01

# Peter van der Zee (15 years ago)

On Sat, Apr 17, 2010 at 11:40 PM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 17, 2010, at 3:03 PM, David Herman wrote:

There are multiple levels of opt-in versioning:

(1) versioning of the language itself

(2) language support for versioning of libraries

I agree with what you're saying wrt (1), but wrt (2), feature detection is feasible, and I'd think more tractable than version detection.

Yes, I agree.

TC39 has discussed a "frame" (meaning DOM window, ideally web-app wide) version selection mechanism for the built-ins libraries (plural: JS, DOM, and more -- and all libraries, too, not particularly distinguished by being native or primordial in their specs).

No one has made a proposal to TC39 yet.

The closest thing to (2) being fielded today may be what modern IE versions [A], and now Google Chrome Frame [B], do with the X-UA-Compatible HTTP header. David Baron of Mozilla has written cogently about this header [C].

I'm not in favor of inventing something in Ecma that adds opt-in versioning of the object model (2), for the reason I gave in reply to Peter van der Zee: complete opt-in versioning including new API visibility is too brittle over time -- it is likely to lead to over-versioned, under-tested, ultimately non-working (except for one of N browsers) code. Object- and in general feature-detection is more resilient and less likely to suffer version-scope-creep.

/be

[A] msdn.microsoft.com/library/cc817574.aspx
[B] www.chromium.org/developers/how-tos/chrome-frame-getting-started#TOC-Making-Your-Pages-Work-With-Google-
[C] dbaron.org/log/2008-01


Okay, what exactly are we trying to solve here...

It seems to me there are two things to be solved, a couple of ways of doing it, and a few routes to take in general...

To be solved:

  • Allow non-string-property keys
  • Allow "hidden" properties, non-enumerable, not generically accessible (like stringed keys are now). To be honest, I'm still not 100% clear on this one.

Ways of doing that:

  • By identity -- Property is inaccessible once the identity is gc'ed -- Could cause additional constraints, make an implementation more complex and heavier in general (circular checks)
  • By some kind of private syntax, see the straw for that

Routes in general. By this I mean making changes to the language like discussed in this thread.

Basically, there are a few options available...

  • Simply add new object / instance methods to the language in a sane sounding way, taking the backwards compat problems for granted. This is by far the easiest and yet the most opposed way.
  • Add new methods with obscure names in an effort to minimize backwards compat problems. This is what __proto__ would fall under (in my opinion). I think that's also part of the start of this thread and the comparison to "that other language"...
  • Extending the language by using new syntax, allowing more characters or sacrificing reserved keywords. This will only get you so far and is not always the "best" option for certain extensions.
  • Versioning opt-in. The only Ecma-generic way I see is by directives. Script tag attributes are not an option because ECMAScript is a generic language, not just for browsers. Will come back to this in a sec.
  • Freezing ES5 and starting a major version increment. Coming back to this as well.
  • ?

As far as versioning goes, this seems like an appealing option at first but can lead to a complex system of rules and versions to keep supporting. Basically Brendan's objections. Any versioning scheme will come down to this, whether it be feature-specific (like an include) or version-specific (minor/major version or whatever).

Freezing the spec at this major version and starting a new major version sounds like an interesting choice, to me anyway. Of course I'm not quite clear on the consequences of such a decision, nor whether there's already work in progress on this. But what about a new major version of JavaScript? One where most of the well-known problems are addressed and some interesting scheme is implemented that allows the language to be extended, while still keeping backwards compat. Of course this version could also have the nifty weird language additions waiting to be implemented, like let and perhaps sharp variables. New syntax, extended rules, or whatever.

In fact, I would very much like this because it allows for a "simple" JavaScript -- the one we have now -- and for a new JavaScript with a whole new level of complexity. Because frankly, some of the features that have been proposed scare me a little in terms of complexity. The accessibility and readability of the language become an issue when they are implemented.

So. What problem are we trying to solve here, and what viable options do we have to extend the specification?

# Brendan Eich (15 years ago)

On Apr 17, 2010, at 6:07 PM, Peter van der Zee wrote:

To be solved:

  • Allow non-string-property keys
  • Allow "hidden" properties, non-enumerable, not generically
    accessible (like stringed keys are now). To be honest, I'm still not
    100% clear on this one.

I don't see how these two differ.

Ways of doing that:

  • By identity -- Property is inaccessible once the identity is gc'ed -- Could cause additional constraints, make an implementation more
    complex and heavier in general (circular checks)
  • By some kind of private syntax, see the straw for that

The private syntax is so you can use o.p instead of o[x] where x is a
Name object; private p would bind the name p lexically to a Name
object and use it instead of the string "p" when evaluating o.p.
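A straw desugaring of that sugar (illustrative only, not normative
semantics):

private key;     // roughly:  var key = new Name();

obj.key = 42;    // roughly:  obj[key] = 42;
var v = obj.key; // roughly:  var v = obj[key];

// Inside the scope of the private declaration, .key denotes the Name
// binding; outside that scope, .key still means the string "key".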

  • Simply add new object / instance methods to the language in a sane
    sounding way, taking the backwards compat problems for granted. This
    is by far the easiest and yet the most opposed way.

This is what ES5 did.

  • Add new methods with obscure names in an effort to minimize
    backwards compat problems. This is what __proto__ would fall under
    (in my opinion). I think that's also part of the start of this
    thread and the comparison to "that other language"...

This is not favored by TC39, because the __ convention is no guarantee
against collisions, and it is rejected by some sanitizers.

As far as versioning goes, this seems like an appealing option at
first but can lead to a complex system of rules and versions to be
kept supporting. Basically Brendan's objections. Any versioning
scheme will come down to this, whether it be feature specific (like
an include) or version specific (minor/major version or whatever).

Real web experience suggests feature-testing, or really object
detection, is less combinatorially explosive. Once a browser supports
the new object, the script can use it. No need to enumerate browser or
rendering engine names in a conditional or X-UA-Compatible header.

So. What problem are we trying to solve here and what options do we
have (and are viable) to extend the specification?

For Harmony we have new syntax, so the problem is clear: we need some
kind of opt-in versioning mechanism for authors to use, where old
browsers do not process the new version. This could involve RFC 4329
style

<script src="harmony.js" type="application/ecmascript;version=6"></

script>

but then the question becomes: how does the content author ship an ES3
or lower script to older browsers? See the older versioning page I
wrote, which Collin Jackson worked on too, at

proposals:versioning

An in-language pragma, e.g.

<script> "use vesrion 6"; // Harmony code here... </script>

would be ignored by older browsers. This seems bad because downrev browsers would try to run the script content, unless you use server-side version detection to ship this only to uprev browsers.

Harmony having new syntax does not mean we are opening up the design
space to make some new, incompatible version of the language. You seem
to allow for that as a possible future course, but TC39 members are
uniformly against this path AFAICT. See harmony:harmony.

# Peter van der Zee (15 years ago)

On Sun, Apr 18, 2010 at 7:19 AM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 17, 2010, at 6:07 PM, Peter van der Zee wrote:

To be solved:

  • Allow non-string-property keys
  • Allow "hidden" properties, non-enumerable, not generically accessible (like stringed keys are now). To be honest, I'm still not 100% clear on this one.

I don't see how these two differ.

Non-string property keys don't need to be hidden. The language could be extended in such a way that identity-keyed properties become part of it. They could be enumerable. Some edge cases would have to be resolved, of course.

The hidden properties could just as well be strings. You could have a public enumerable property and one that's private and hidden at the same time.

I can see these as distinct. That's why I'm wondering whether everyone is trying to solve one with the other, or actually pursuing either one (or both) of them.

Ways of doing that:

  • By identity -- Property is inaccessible once the identity is gc'ed -- Could cause additional constraints, make an implementation more complex and heavier in general (circular checks)
  • By some kind of private syntax, see the straw for that

The private syntax is so you can use o.p instead of o[x] where x is a Name object; private p would bind the name p lexically to a Name object and use it instead of the string "p" when evaluating o.p.

  • Simply add new object / instance methods to the language in a sane

sounding way, taking the backwards compat problems for granted. This is by far the easiest and yet the most opposed way.

This is what ES5 did.

  • Add new methods with obscure names in an effort to minimize backwards

compat problems. This is what proto would fall under (in my opinion). I think that's also part of the start of this thread and the comparison to "that other language"...

This is not favored by TC39, because the __ convention is no guarantee against collisions, and it is rejected by some sanitizers.

As far as versioning goes, this seems like an appealing option at first

but can lead to a complex system of rules and versions to be kept supporting. Basically Brendan's objections. Any versioning scheme will come down to this, whether it be feature specific (like an include) or version specific (minor/major version or whatever).

Real web experience suggests feature-testing, or really object detection, is less combinatorial explosive. Once a browser supports the new object, the script can use it. No need to enumerate browser or rendering engine names in a conditional or X-UA-Compatible header.

This could be fixed by some kind of detection scheme for these directives. If the state of a directive could be detected, then using directives as a versioning scheme would become feature-detectable. Enabled directives could be made public as properties of some object. Alternatively, a try/catch could surround some block of code that fails on an implementation without the directive and succeeds on an implementation with it. That would give a generic detection method for enabled directives. It would be a workaround though, and I'd prefer something straightforward for detection.

But why was versioning going to be a problem at first and not any more? I still agree that in the long run, a versioning opt-in scheme will add too much complexity, both for the programmer (having to enable directives one by one) and the vendor (ditto, plus supporting them).

So. What problem are we trying to solve here and what options do we have

(and are viable) to extend the specification?

For Harmony we have new syntax, so the problem is clear: we need some kind of opt-in versioning mechanism for authors to use, where old browsers do not process the new version. This could involve RFC 4329 style

<script src="harmony.js" type="application/ecmascript;version=6"></script>

but then the question becomes: how does the content author ship an ES3 or lower script to older browsers? See the older versioning page I wrote, which Collin Jackson worked on too, at

proposals:versioning

An in-language pragma, e.g.

<script> "use vesrion 6"; // Harmony code here... </script>

would be ignored by older browsers. This seems bad because downrev browsers would try to run the script content, unless you use server-side version detection to ship this only to uprev browsers.

How would this work in non-browsers? Does this list care about that? Should it? (I don't know what the main target is here...) Browsers do seem the prime candidate here, especially when it comes to multiple versions and upgrading. But how are other (non-browser) implementations supposed to do this? Isn't a language-internal solution the way to go?

Harmony having new syntax does not mean we are opening up the design space to make some new, incompatible version of the language. You seem to allow for that as a possible future course, but TC39 members are uniformly against this path AFAICT. See harmony:harmony.

/be

That was my proposal indeed. This also makes it impossible to introduce any kind of feature that would not be parsable by previous versions (like introducing new characters, or the short function notation). It actually puts a great burden on the evolution of the language as a whole, doesn't it? Not sure whether that's a bad thing per se.

# Dmitry A. Soshnikov (15 years ago)

Hello David,

Saturday, April 17, 2010, 8:01:48 PM, you wrote:

But I meant not only a naming convention, but that by this naming convention these properties (symbols) would be hidden -- just like in Python, where "_" and "__" properties become unavailable outside...

You still haven't specified what "outside" means. What does get to see a hidden name and what doesn't?

Taking the above-mentioned Python with its naming convention, then:

class A(object):

    def __init__(self):
        self.public = 10
        self.__private = 20

    def get_private(self):
        return self.__private

outside:

a = A() # instance of A

print(a.public) # OK, 10
print(a.get_private()) # OK, 20
print(a.__private) # fail, available only within the definition of A

but Python just renames such properties to _ClassName__property_name, and by this name these properties are available outside:

print(a._A__private) # OK, 20

This implementation shows that encapsulation is about increasing abstraction, not just hiding in the sense of "hey, ... just look for a private member variable called 'count'!". From this viewpoint Python's implementation doesn't fit.

The same goes for Ruby: we can define "private" and "protected" properties/methods, but on the other hand we have special meta-methods such as "instance_variable_get", "instance_variable_set", "send" and others, which allow access to encapsulated data.

class A
  def initialize
    @a = 10
  end

  def public_method
    private_method(20)
  end

  private

  def private_method(b)
    return @a + b
  end
end

a = A.new # new instance

a.public_method # OK, 30

a.a # fail, @a is a private instance var without an "a" getter

a.private_method # Error: "private_method" is private and available only within the A class definition

But, using special meta methods -- we have access to that encapsulated data:

a.send(:private_method, 20) # OK, 30
a.instance_variable_get(:@a) # OK, 10

This implementation also shows that it is mostly about abstraction -- convenient sugar to help the programmer describe the system more abstractly, and less about "no one should see this data".

I wrote about it briefly: bit.ly/bnDTkW

But of course, the implementations mentioned above aren't the model, and ECMAScript can (and even should) provide its own. And if there is the ability to manage it without any access from the outside -- that is also good.

Then I have to see more examples.

  1. Publishing private names ruins abstraction

Let's say you create a library and share it with some people. Then in version 2 of your library, you introduce a new feature, which uses an internal private property called "count". One of your clients figures out that you used this private name and writes a blog post saying "hey, if you want to figure out whether the library is greater than version 1, just look for a private member variable called 'count'!"

Now you have 100,000 clients depending on the fact that you have a private property called "count." You decide for version 3 that you'd rather call it "elementCount" but you can't get rid of the private name because your customers have already relied on it.

Yes, Dave, I understand all this. But here you are mostly talking about naming dependency. And in this case you would (probably) suggest having a public getter for this private local "count" var, yep? Yes, this is quite logical and true. But this naming dependency can appear again if you rename an already-public method from "getCount" to "getElementCount". The code of all your 100,000 clients will break then too.

But, yes, encapsulation of /helper auxiliary data/ is an increasing of abstraction, and the main goal is that users still have some public getter getCount, /unchangeable from version to version/, while we can change our private helper data every day -- whether we use the name "count" or "elementCount" (or even both of them) should be abstracted away from the user.
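A small sketch of that split, assuming the straw Name constructor -- the public getter is the contract, the private key behind it is not:

var count = new Name(); // v2 internal detail; rename freely in v3

function Collection() {
    this[count] = 0;
}

Collection.prototype.add = function (item) {
    this[count]++;
    // ... store the item ...
};

Collection.prototype.getCount = function () {
    return this[count]; // only getCount is part of the public contract
};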

Moreover, I mostly meant examples of the current "Name"s approach -- I want to see how useful/elegant it will look from the syntax viewpoint.

  2. Publishing private names creates namespace pollution

Library A adds a private "count" property to some shared object.

Library B also adds a private "count" property to the same object.

They are both developed separately.

Now Client C wants to use both Library A and Library B. Let's arbitrarily say it adds A first, then B. Library B fails with an error because it tries to use the private "count" property, which it doesn't have access to because Library A already claimed it.

Yes, of course. These are different things: helper auxiliary data, and public properties on some shared /public/ object. From the first viewpoint (private auxiliary data) -- of course libs A and B should each use their own "namespace/private state object" for this purpose. But if the goal is to provide that "count" property to exactly some shared /public/ object -- then it's OK if B fails. The full responsibility for this is only on Client C, not on the A and B libs. Again -- only if those libs want to provide exactly /public/ augmentation.

That's again what I mentioned -- if I'm free from all the consequences of augmenting built-ins (which are: (1) using several 3rd-party libs, (2) enumerating with for..in over augmented objects without control of {DontEnum}/[[Enumerable]]) -- it's /very/ convenient and useful to augment built-in prototypes and use the plugged-in functionality as if it were native to those objects. The same approach is used in Ruby libs, as Ruby, like ECMAScript (but in contrast with Python), allows augmentation of built-in classes.

And nevertheless, encapsulation's main purpose is increasing abstraction. But you're already talking about hiding for security.

Absolutely not. What I'm talking about is abstraction, not security.

OK.

The purpose of abstraction is to support modularity, i.e., to eliminate dependencies between separate units of modularity so that their implementations are free to change. If you publish your private names, you create a point of dependency between modules and make it harder to change code. None of this is talking about security.

Yes, true, agreed.

Of course, publishing private names is bad for security as well!

Yep, but if a programmer wants to get access to encapsulated data -- that is the programmer's wish, and only the programmer takes full responsibility for it. Although, one can call that bad programming practice, yep.

Otherwise -- if we forbid everything -- we end up with a strict static language, which has a different ideology.

Thanks, Dmitry.

# Sam Tobin-Hochstadt (15 years ago)

On Sun, Apr 18, 2010 at 3:56 AM, Peter van der Zee <ecma at qfox.nl> wrote:

would be ignored by older browsers. This seems bad because downrev browsers would try to run the script content, unless you use server-side version detection to ship this only to uprev browsers.

How would this work in non-browsers? Does this list care about that? Should it? (I don't know what the main target is here...). Browsers do seem the prime candidate here, especially when it comes to multiple versions and upgrading. But how are other (non browser) implementations supposed to do this? Isn't an language internal solution the way to go?

There are lots of well-understood solutions for handling multiple versions of a language in the non-browser context. For example, C compilers, which handle a wide variety of language dialects and standards, often use command line switches. It's also possible to have separate binaries for each version (this is how it works for Python), or a bunch of other possibilities that aren't available on the web.

# Brendan Eich (15 years ago)

On Apr 18, 2010, at 3:56 AM, Peter van der Zee wrote:

On Sun, Apr 18, 2010 at 7:19 AM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 17, 2010, at 6:07 PM, Peter van der Zee wrote:

To be solved:

  • Allow non-string-property keys
  • Allow "hidden" properties, non-enumerable, not generically
    accessible (like stringed keys are now). To be honest, I'm still not
    100% clear on this one.

I don't see how these two differ.

Non-string property keys don't need to be hidden. The language could be extended in such a way that identity-keyed properties become part of it. They could be enumerable. Some edge cases would have to be resolved, of course.

Existing code containing for-in loops over objects that could be
passed in as arguments, or found in the heap, where the objects could
now have object-keyed properties, would break if the keys were not
converted to strings. But of course if converted to string, an object
key would lose its identity and potentially collide, and in any event
not name the same property it did. for-in loop bodies often use the
enumerated key:

for (var prop in obj) alert(obj[prop]);
for (var prop in obj) assert(typeof prop == "string");
for (var prop in obj) if (obj.hasOwnProperty(prop)) alert(prop);

etc.

This gets back to versioning. If you assume we can make incompatible
changes to the language, say under opt-in versioning, then there's
still the problem of mixed versions of code interacting via the heap
(shared function references allowing new objects to be passed to old
functions, or just found in the heap by old functions via the global
object or some other shared mutable object).

These are not "edge cases". They suggest keeping any new object-keyed
properties non-enumerable by definition, with no way to make such new
properties enumerable. This then leads to the observations about
garbage collection, equivalence with inverting obj[prop] to
prop.get(obj) where prop is an EphemeronTable, etc.

The hidden properties could just as well be strings. You could have
a public enumerable property and one that's private and hidden at
the same time.

If both are string-keyed, how would you access both from within the
"private" boundary? There would still be one bit of difference between
the keys. I'm not soliciting a full proposal from you here, just
suggesting this is not a clear enough proposal to evaluate.

I can see these as distinct. That's why I'm wondering whether everyone is trying to solve one with the other, or actually pursuing either one (or both) of them.

The names proposal has advantages that a "private string" key idea could not match if there were any way to forge or equate the private string key. Names can't be forged.

If the private string key idea is indistinguishable from the Name object idea, then we are in agreement. The use of "object" in "Name object" is a straw concept currently. Perhaps there should be a new typeof result. I addressed this issue here:

"We have extant implementations and real code on the Web that assume
o[x] will convert an object denoted by x to its string representation
via [[DefaultValue]] (typically, toString) and use that string to
identify the property of o. Changing the rules by making Name
instances be of object type (“object” typeof result, Object type name
in ECMA-262) breaks this contract. There is no string conversion of a
Name instance that can be used to identify the property named by that
instance in an object. Names are different animals from strings and
objects. Their type (typeof and abstract specification type) must make
this clear."

This could be fixed by some kind of detection scheme for these directives.

You can't detect new syntax except by embedding it in a string and
eval'ing the string inside a try/catch.
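
A sketch of that detection pattern, using the Function constructor to compile without executing (eval in a try/catch works the same way):

function syntaxSupported(src) {
  try {
    Function(src);   // throws SyntaxError if the engine can't parse src
    return true;
  } catch (e) {
    return false;
  }
}

syntaxSupported("var x = 1;");    // true in any engine
syntaxSupported("let [a] = b;");  // true only where the new syntax parses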

But why was versioning going to be a problem at first and not any
more?

Because old browsers fade away. IE6 is taking its time but it will go
away too. More modern browsers all auto-update, crucial for patching
security bugs and promoting new versions to users who might otherwise
get stuck at a buggy, unsafe old rev.

Harmony having new syntax does not mean we are opening up the design
space to make some new, incompatible version of the language. You
seem to allow for that as a possible future course, but TC39 members
are uniformly against this path AFAICT. See harmony:harmony.

/be

That was my proposal indeed. This also makes it impossible to
introduce any kind of feature that would not be parsable by previous
versions (like new characters or the short function notation). It
puts a great burden on the evolution of the language as a whole,
doesn't it? Not sure whether that's a bad thing per se.

The burden comes from the Web as it is, in my view. We don't get to
start over. The decision whether or not to make a new and more freely
incompatible variant of JS is, you're right, a choice. TC39 does not
propose such a large change. It is likely to be bad for implementors
and users. It would probably not converge on a usable new spec, based
on design-by-committee. The odds of a market player evolving some new
and more incompatible version are even lower.

The best course in TC39's view, I think I can safely say, is to extend
the language in ways that address design flaws and gaps, and remove
flawed constructs only after they fall into disuse. It is the
extension via new syntax, more than extension of standard objects (but
both can matter), that motivates this versioning discussion. And of
course this gets back to the thread's topic: why did ES5 add "static
methods" to Object.

I hope this helps clarify things.

# Peter van der Zee (15 years ago)

On Sun, Apr 18, 2010 at 3:28 PM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 18, 2010, at 3:56 AM, Peter van der Zee wrote:

On Sun, Apr 18, 2010 at 7:19 AM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 17, 2010, at 6:07 PM, Peter van der Zee wrote:

To be solved:

  • Allow non-string property keys
  • Allow "hidden" properties: non-enumerable, not generically accessible (unlike string-keyed properties today). To be honest, I'm still not 100% clear on this one.

I don't see how these two differ.

Non-string property keys don't need to be hidden. The language could be extended in such a way that identity properties could be part of it. They could be enumerable. Some edge cases would have to be resolved, of course.

Existing code containing for-in loops over objects that could be passed in as arguments, or found in the heap, where the objects could now have object-keyed properties, would break if the keys were not converted to strings. But of course if converted to string, an object key would lose its identity and potentially collide, and in any event not name the same property it did. for-in loop bodies often use the enumerated key:

for (var prop in obj) alert(obj[prop]);
for (var prop in obj) assert(typeof prop == "string");
for (var prop in obj) if (obj.hasOwnProperty(prop)) alert(prop);

etc.

This gets back to versioning. If you assume we can make incompatible changes to the language, say under opt-in versioning, then there's still the problem of mixed versions of code interacting via the heap (shared function references allowing new objects to be passed to old functions, or just found in the heap by old functions via the global object or some other shared mutable object).

These are not "edge cases". They suggest keeping any new object-keyed properties non-enumerable by definition, with no way to make such new properties enumerable. This then leads to the observations about garbage collection, equivalence with inverting obj[prop] to prop.get(obj) where prop is an EphemeronTablestrawman:ephemeron_tables, etc.

Okay. So enumerating identity references would require changes that are not going to come. Check. It's not going to happen, so any further discussion feels pointless.

The hidden properties could just as well be strings. You could have a public

enumerable property and one that's private and hidden at the same time.

If both are string-keyed, how would you access both from within the "private" boundary? There would still be one bit of difference between the keys. I'm not soliciting a full proposal from you here, just suggesting this is not a clear enough proposal to evaluate.

It would be the same as shadowing variables, and your own responsibility. It's also a reason why I don't find it a feasible way of extending the language: it's a one-time shot. Once you have it, all the backwards-compat clash arguments apply again. This feels like the wrong way, because in the end we'll still need one of the other paths. I'd rather not introduce complexity into the language on this premise.

I can see these as distinct. That's why I'm wondering whether everyone is trying to solve one with the other, or actually pursuing either one (or both) of them.

The names proposal has advantages beyond what a "private string" key idea could have, if there were a way to forge or equate the private string key. Names can't be forged.

If the private string key idea is indistinguishable from the Name object idea, then we are in agreement. The use of "object" in "Name object" is a straw concept currently. Perhaps there should be a new typeof result. I addressed this issue here (strawman:ephemeron_tables):

"We have extant implementations and real code on the Web that assume o[x]will convert an object denoted by x to its string representation via [[DefaultValue]] (typically, toString) and use that string to identify the property of o. Changing the rules by making Name instances be of object type (“object” typeof result, Objecttype name in ECMA-262) breaks this contract. There is no string conversion of a Name instance that can be used to identify the property named by that instance in an object. Names are different animals from strings and objects. Their type (typeof and abstract specification type) must make this clear."

This could be fixed by some kind of detection scheme for these directives.

You can't detect new syntax except by embedding it in a string and eval'ing the string inside a try/catch.

You can add the directive and detect whether it has been activated at a later point, can't you? I mean, you can't in the current specification, but such a mechanism could be created... with backward compatibility in mind. This is of course all tied to the question of whether we want to walk the versioning path to begin with. "use strict" seems to be recon in this regard... (as in, haven't we already started on this path?)

But why was versioning going to be a problem at first and not any more?

Because old browsers fade away. IE6 is taking its time but it will go away too. More modern browsers all auto-update, crucial for patching security bugs and promoting new versions to users who might otherwise get stuck at a buggy, unsafe old rev.

Harmony having new syntax does not mean we are opening up the design space to make some new, incompatible version of the language. You seem to allow for that as a possible future course, but TC39 members are uniformly against this path AFAICT. See harmony:harmony.

/be

That was my proposal indeed. This also makes it impossible to introduce any kind of feature that would not be parsable by previous versions (like new characters or the short function notation). It puts a great burden on the evolution of the language as a whole, doesn't it? Not sure whether that's a bad thing per se.

The burden comes from the Web as it is, in my view. We don't get to start over. The decision whether or not to make a new and more freely incompatible variant of JS is, you're right, a choice. TC39 does not propose such a large change. It is likely to be bad for implementors and users. It would probably not converge on a usable new spec, based on design-by-committee. The odds of a market player evolving some new and more incompatible version are even lower.

The best course in TC39's view, I think I can safely say, is to extend the language in ways that address design flaws and gaps, and remove flawed constructs only after they fall into disuse. It is the extension via new syntax, more than extension of standard objects (but both can matter), that motivates this versioning discussion. And of course this gets back to the thread's topic: why did ES5 add "static methods" to Object.

I hope this helps clarify things.

/be

I'm a little puzzled why it would be okay for Harmony to get new syntax while at the same time refusing new syntax in general. I mean, if you're going to change the language anyway... But I suppose that's just the form that is most acceptable to all parties, including the web.

Sorry for hijacking the thread, did not mean to. Thanks for the clarification.

# Brendan Eich (15 years ago)

On Apr 18, 2010, at 10:00 AM, Peter van der Zee wrote:

This could be fixed by some kind of detection scheme for these
directives.

You can't detect new syntax except by embedding it in a string and
eval'ing the string inside a try/catch.

You can add the directive and detect whether it has been activated
at a later point, can't you?

If you mean "use strict"; then yes, you can add that to a function and
detect the lack of its effect in pre-ES5 (or buggy ES5) implementations.

If you mean new syntax, not a pragma hidden in an otherwise-useless
strict expression-statement, then the syntax error exception will stop
parsing at the point of new syntax. To detect the lack of successful
parsing you'd have to write a later <script> or second compilation
unit of some kind, and try to deduce that the first unit did not
compile successfully because the new syntax was not recognized.

But this is onerous and error-prone. Thus the opt-in versioning idea based

# Peter van der Zee (15 years ago)

On Sun, Apr 18, 2010 at 2:45 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

On Sun, Apr 18, 2010 at 3:56 AM, Peter van der Zee <ecma at qfox.nl> wrote:

would be ignored by older browsers. This seems bad because downrev browsers would try to run the script content, unless you use server-side version detection to ship this only to uprev browsers.

How would this work in non-browsers? Does this list care about that? Should it? (I don't know what the main target is here...) Browsers do seem the prime candidate, especially when it comes to multiple versions and upgrading. But how are other (non-browser) implementations supposed to do this? Isn't a language-internal solution the way to go?

There are lots of well-understood solutions for handling multiple versions of a language in the non-browser context. For example, C compilers, which handle a wide variety of language dialects and standards, often use command line switches. It's also possible to have separate binaries for each version (this is how it works for Python), or a bunch of other possibilities that aren't available on the web.

sam th samth at ccs.neu.edu

After talking it through with Brendan, this is what we ended up with. He hasn't seen this proposal yet, so feel free to take it out on me :)

There's a need for extending the specification. Whilst this by itself is an easy task, there are a few restrictions which make it very difficult. These include not breaking backwards compatibility, using a uniform naming convention, and not using "hackish"-looking solutions.

Basically, this means we cannot introduce new language constructs or syntax because older implementations will trip over the code with no way to recover. Furthermore, for various reasons it seems "feature detection" is favored over version detection. This leads us to a simple addition for detecting the support of a certain feature.

ES5 introduced the concept of directives, using perfectly fine fallback with no side effects. This was, as far as the above goes, perfect. Older implementations couldn't possibly trip over it since a string literal without anything else has no visible side effects. There is just one problem with it: there is currently no foolproof way of detecting the support of any directive. There are hacks, but nothing that would be an acceptable solution for the specification.

So to this end, a simple proposal follows. Whenever any directive is successfully enabled, a globally accessible property of the same name should be incremented, starting at one.

I suggest using the global object. If anything, this is the perfect reason for using the global scope. Better yet, the space in the current (and so far only) directive makes it unlikely to break any existing scripts. Brendan thinks "the global object is a tarpit" and would rather see a different object used for this. Object would be best suited, since ES5 uses that as well, though I believe there's no semantic basis for using Object for the detection of enabled directives. However, there are only a few alternatives available, since introducing a new variable just for this cause has two problems in one: there's no full fallback without hacky-looking solutions such as if (window.EnabledDirectives && window.EnabledDirectives["use strict"]). Leaving out the && would make it crash in implementations that don't support it.

Anyways, the target object, if not global, can be up for discussion.

The reason a globally accessible property would work is that we're just testing whether the directive is supported in general. The reason the property is incremented is that, if for whatever reason the programmer wanted to detect case by case whether the directive was honored, he could compare the values of the property before and after. In any other case, once set, the property always evaluates to true. Note that the ES5 spec simply does not allow a directive to be ignored selectively: if the implementation supports it, it is always in effect.

Hope I caught all the reasoning and problems, but if the language is to be extended with feature-detectable constructs, this method would probably be a good candidate.
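
A sketch of how detection would look under this proposal (hypothetical throughout: no implementation maintains such counters, and window stands in for whichever object is finally chosen):

var before = window["use strict"] || 0;  // counter named after the directive
(function () {
  "use strict";  // a supporting implementation would increment the counter
})();
var strictSupported = (window["use strict"] || 0) > before;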

# Brendan Eich (15 years ago)

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

Basically, this means we cannot introduce new language constructs or
syntax because older implementations will trip over the code with no
way to recover. Furthermore, for various reasons it seems "feature
detection" is favored over version detection.

When you want the new syntax, though, you're going to have to use opt-in versioning (see RFC4329).

ES5 introduced the concept of directives, using perfectly fine
fallback with no side effects. This was, as far as the above goes,
perfect. Older implementations couldn't possibly trip over it since
a string literal without anything else has no visible side effects.

I should point out again that "use strict"; changes runtime semantics
involving eval and arguments in ES5, it does not merely prevent
programs from getting to runtime (i.e., it is not just stricter
syntax, e.g. forbidding 'with').

This means that if you "use strict" you have to test your code in pre- ES5 and ES5-or-above implementations, to be sure you're not counting
on the ES5 changes.

Usually you won't have a problem, but testing is the only way to be
sure, if you are using eval and/or arguments in your strict code.
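
One such runtime difference, for the record: in ES5 strict mode, arguments no longer aliases the named parameters.

function loose(a) { arguments[0] = 2; return a; }                   // 2
function strict(a) { "use strict"; arguments[0] = 2; return a; }    // 1 in ES5
alert(loose(1) + ", " + strict(1));  // "2, 1" in ES5; "2, 2" pre-ES5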

# Erik Corry (15 years ago)

2010/4/19 Brendan Eich <brendan at mozilla.com>:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

Basically, this means we cannot introduce new language constructs or syntax because older implementations will trip over the code with no way to recover. Furthermore, for various reasons it seems "feature detection" is favored over version detection.

When you want the new syntax, though, you're going to have to use opt-in versioning (see RFC4329).

Let's not go there.

The names proposal seems to be basically ephemeron tables without the special GC semantics.

I'm a great fan of coupling proposals. Putting a dozen uncoupled proposals into Harmony looks like a recipe for a hodge-podge language. Finding powerful abstractions that solve several problems at once (in this case weak hashes and private variables) feels much nicer.

# Brendan Eich (15 years ago)

On Apr 19, 2010, at 5:15 PM, Erik Corry wrote:

2010/4/19 Brendan Eich <brendan at mozilla.com>:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

Basically, this means we cannot introduce new language constructs or syntax because older implementations will trip over the code with
no way to recover. Furthermore, for various reasons it seems "feature
detection" is favored over version detection.

When you want the new syntax, though, you're going to have to use
opt-in versioning (see RFC4329).

Let's not go there.

We have new syntax in Harmony. We are going there.

The names proposal seems to be basically ephemeron tables without the special GC semantics.

That is over-specification and implementation-as-specification, and it
will not fly in TC39.

I'm a great fan of coupling proposals.

Have you heard of the multiplication principle?

I like my odds ratios bigger, thank you very much. I've strongly
advised Mark on this point and he has adapted his proposals.

Putting a dozen uncoupled proposals into Harmony looks like a recipe for a hodge-podge language.

Hodge-podge is what you get by implementation-as-specification.

Finding powerful abstractions that solve several problems at once (in this case weak hashes and private variables) feels much nicer.

A name abstraction that is concrete in terms of GC, object type
(typeof), and the possibility of non-leaf Name objects is not abstract
at all -- it is concretely an EphemeronTable.

TC39 wants sugar for names. But "desugaring" taken too far becomes
compilation, which is not simple syntax rewriting. It's also
observable via typeof, Name objects having properties, and other
effects (I'm willing to bet).

Real abstractions serve use-cases, that is, pressing user needs,
without implementing abstractions in overtly leaky ways. That's what
we need for Names, and many other proposals. This does not make a
hodge-podge if we serve the important use-cases. It makes a better
language.

If we can unify abstractions without leaks, sure. That's not the case
here.

# David Herman (15 years ago)

I'm a great fan of coupling proposals. Putting a dozen uncoupled proposals into Harmony looks like a recipe for a hodge-podge language. Finding powerful abstractions that solve several problems at once (in this case weak hashes and private variables) feels much nicer.

That's not been our experience. TC39 has made good progress since agreeing to the Harmony approach, which involves keeping proposals as orthogonal as possible. That doesn't mean we can't exercise taste and caution in deciding on what goes together in the end, and it certainly doesn't mean we don't consider the effects the different proposals have on one another, but it does mean we don't have to approach every design question by solving a monolithic, global constraint problem.

# Erik Corry (15 years ago)

2010/4/20 Brendan Eich <brendan at mozilla.com>:

On Apr 19, 2010, at 5:15 PM, Erik Corry wrote:

2010/4/19 Brendan Eich <brendan at mozilla.com>:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

Basically, this means we cannot introduce new language constructs or syntax because older implementations will trip over the code with no way to recover. Furthermore, for various reasons it seems "feature detection" is favored over version detection.

When you want the new syntax, though, you're going to have to use opt-in versioning (see RFC4329).

Let's not go there.

We have new syntax in Harmony. We are going there.

The names proposal seems to be basically ephemeron tables without the special GC semantics.

That is over-specification and implementation-as-specification, and it will not fly in TC39.

I'm a great fan of coupling proposals.

Have you heard of the multiplication principle?

www.google.com/search?sourceid=chrome&ie=UTF-8&q=multiplication+principle ?

I like my odds ratios bigger, thank you very much.

You are welcome.

# Brendan Eich (15 years ago)

On Apr 19, 2010, at 5:50 PM, Erik Corry wrote:

I like my odds ratios bigger, thank you very much.

You are welcome.

The point is that EphemeronTables may not make Harmony, but Names
could. Coupling the latter to the former multiplies risk of neither
making it.

# Peter van der Zee (15 years ago)

On Mon, Apr 19, 2010 at 11:25 PM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

ES5 introduced the concept of directives, using perfectly fine fallback

with no side effects. This was, as far as the above goes, perfect. Older implementations couldn't possibly trip over it since a string literal without anything else has no visible side effects.

I should point out again that "use strict"; changes runtime semantics involving eval and arguments in ES5, it does not merely prevent programs from getting to runtime (i.e., it is not just stricter syntax, e.g. forbidding 'with').

This means that if you "use strict" you have to test your code in pre-ES5 and ES5-or-above implementations, to be sure you're not counting on the ES5 changes.

Usually you won't have a problem, but testing is the only way to be sure, if you are using eval and/or arguments in your strict code.

Correct me if I'm wrong, but in the case of "use strict", doesn't that only apply restrictions? So as far as the difference between strict and non-strict is concerned, if a script works in strict mode shouldn't it also work in non-strict mode? I can't recall any parts of the spec that include backwards-incompatible extensions for that directive.

Of course for new syntax or using future reserved keywords or whatever, you're absolutely correct.

# Brendan Eich (15 years ago)

On Apr 19, 2010, at 11:13 PM, Peter van der Zee wrote:

On Mon, Apr 19, 2010 at 11:25 PM, Brendan Eich <brendan at mozilla.com> wrote:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

ES5 introduced the concept of directives, using perfectly fine
fallback with no side effects. This was, as far as the above goes,
perfect. Older implementations couldn't possibly trip over it since
a string literal without anything else has no visible side effects.

I should point out again that "use strict"; changes runtime
semantics involving eval and arguments in ES5, it does not merely
prevent programs from getting to runtime (i.e., it is not just
stricter syntax, e.g. forbidding 'with').

This means that if you "use strict" you have to test your code in
pre-ES5 and ES5-or-above implementations, to be sure you're not
counting on the ES5 changes.

Usually you won't have a problem, but testing is the only way to be
sure, if you are using eval and/or arguments in your strict code.

Correct me if I'm wrong but in the case of "use strict", doesn't
that only apply restrictions? So as far as the difference between
"use strict" and no strict are concerned, if a script works in
strict mode shouldn't it also work in no strict?

No.

var x = "global"; alert((function () { "use strict"; eval("var x = 'dynamic';"); return
x;})());
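
(For the record: an ES5 implementation alerts "global", because strict eval gets its own variable environment and the eval'd var does not leak into the function. A pre-ES5 engine, ignoring the directive, alerts "dynamic".)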

There are other such examples.

I can't recall any parts of the spec that include backwards-incompatible
extensions for that directive.

Of course for new syntax or using future reserved keywords or
whatever, you're absolutely correct.

I was explicit in calling attention to eval and arguments.

# Asen Bozhilov (15 years ago)

Brendan Eich wrote:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

ES5 introduced the concept of directives, using perfectly fine fallback with no side effects. This was, as far as the above goes, perfect. Older implementations couldn't possibly trip over it since a string literal without anything else has no visible side effects.

I should point out again that "use strict"; changes runtime semantics involving eval and arguments in ES5, it does not merely prevent programs from getting to runtime (i.e., it is not just stricter syntax, e.g. forbidding 'with').

But ES5 strict mode does not change only runtime semantics. For example:

| 13 Function Definition
| 13.1 Strict Mode Restrictions
| It is a SyntaxError if the Identifier "eval" or the Identifier
| "arguments" occurs within a FormalParameterList of a strict mode
| FunctionDeclaration or FunctionExpression.

And from that quotation:

"use strict"; function eval(){} function arguments(){}

Both function declarations are syntactically invalid under the described rules of ECMA-262-5. The same goes for variable declarations; see 12.2.1 Strict Mode Restrictions. And there are strict mode restrictions on future reserved words. For example:

var implements;

That is a syntactically valid statement in ES5 non-strict mode. But the following is not:

"use strict"; var implements;

Since they made syntactic restrictions in strict mode, why did they not remove automatic semicolon insertion?

# Brendan Eich (15 years ago)

On Apr 23, 2010, at 1:24 PM, Asen Bozhilov wrote:

Brendan Eich wrote:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

ES5 introduced the concept of directives, using perfectly fine
fallback with no side effects. This was, as far as the above goes,
perfect. Older implementations couldn't possibly trip over it
since a string literal without anything else has no visible side
effects.

I should point out again that "use strict"; changes runtime
semantics involving eval and arguments in ES5, it does not merely
prevent programs from getting to runtime (i.e., it is not just
stricter syntax, e.g. forbidding 'with').

But ES5 strict mode does not change only runtime semantics.

Yes, we know -- I didn't say otherwise. Is this important to the point
I was making in reply to Peter?

For example:

[snip]

Yup. ES5 spec is out, anyone can read it.

I don't see the point in your reply yet. Excuse my bitchiness, but the
noise-to-signal ratio on this list is rising and I object.

Since they made syntactic restrictions in strict mode, why did they
not remove automatic semicolon insertion?

Are you digressing to summarize strict mode, and then ask questions
about its design?

My point in reply to Peter stands, but here's a memory from the strict
mode discussions in TC39, to answer this question: we talked about
trade-offs in making it hard to migrate extant code into strict mode.
Removing ASI from strict mode was considered too big a migration tax.
That's it.

# Brendan Eich (15 years ago)

On Apr 23, 2010, at 2:17 PM, Brendan Eich wrote:

... here's a memory from the strict mode discussions in TC39, to
answer this question: we talked about trade-offs in making it hard
to migrate extant code into strict mode. Removing ASI from strict
mode was considered too big a migration tax. That's it.

Here's a bit more memory to up the S/N ratio. The most objectionable
case affected by ASI, everyone agrees, is:

function longwinded() {
    if (some) {
        if (deep) {
            if (condition) {
                return
                    "a lengthy, verbose, pleonastic, tautological, redundant result";
            }
            ...
        }
        ...
    }
    return "foo";
}

ASI makes that return\n be return;\n and leaves the result string as a
useless expression in an unreachable statement.

Consider the above modified to remove the braces around the innermost
if's consequent: then the string is a useless expression statement but
it is quite reachable (it precedes the first ...-elided code).

This objectionable case, IIRC, we did talk about trying to fix by
mandating control flow graph construction to find the dead code, and
either reporting the error or correcting it by not inserting the ;
after the return.

But such analysis is a high tax on implementors (well, it was in years
past; not this year in my view). So we didn't mandate an error or
(better) an exception to ASI's rules here.
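
Reduced to its essence, the hazard weighed here:

function f() {
  return
    "result";
}
alert(f());  // undefined: ASI turned the bare return into `return;`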

# Asen Bozhilov (15 years ago)

Brendan Eich wrote:

On Apr 23, 2010, at 1:24 PM, Asen Bozhilov wrote:

Brendan Eich wrote:

On Apr 19, 2010, at 4:27 PM, Peter van der Zee wrote:

ES5 introduced the concept of directives, using perfectly fine fallback with no side effects. This was, as far as the above goes, perfect. Older implementations couldn't possibly trip over it since a string literal without anything else has no visible side effects.

I should point out again that "use strict"; changes runtime semantics involving eval and arguments in ES5, it does not merely prevent programs from getting to runtime (i.e., it is not just stricter syntax, e.g. forbidding 'with').

But ES5 strict mode does not change only runtime semantics.

Yes, we know -- I didn't say otherwise. Is this important to the point I was making in reply to Peter?

Yes, it was important, because someone could quote your words. And when someone reads that and sees the name Brendan Eich, the creator of JavaScript, they could draw the wrong conclusion.

For example:

[snip]

Yup. ES5 spec is out, anyone can read it.

That was only an example to support my point. Your tone here sounds like "anybody can RTFM", which is not related to my response.

I don't see the point in your reply yet. Excuse my bitchiness, but the noise-to-signal ratio on this list is rising and I object.

You are excused. My previous reply was questioning in character rather than disapproving of your words.

Since they made syntactic restrictions in strict mode, why did they not remove automatic semicolon insertion?

Are you digressing to summarize strict mode, and then ask questions about its design?

Any other response to my question would have been better than that question.

However, thanks for your reply!

# Brendan Eich (15 years ago)

On Apr 23, 2010, at 2:47 PM, Asen Bozhilov wrote:

However, thanks for your reply!

No problem, your question about why ASI is not disabled by ES5 strict
mode was a good one!