Using private name objects for declarative property definition

# Allen Wirfs-Brock (14 years ago)

The current Harmony classes proposal harmony:classes includes the concept of private instance members and syntax for defining them. While it presents a syntax for accessing them (e.g., private(foo).bar accesses the private 'bar' member of the object that is the value of foo) there does not yet appear to be consensus acceptance of this access syntax. There also appear to be a number of still unresolved semantic issues WRT such private instance members. As an alternative, the classes proposal says that "a pattern composing private names with classes can satisfy" the requirements for private instance members. However, so far, we have no concrete proposals on what these patterns might be. This is a first cut at proposing such patterns.

The current version of the private names proposal harmony:private_name_objects simply exposes a constructor for creating unique values that can be used as property keys:

const key = Name.create();

Those values can be used to define and access object properties:

let obj = {};

print(obj.key);                   // "undefined"
obj[key] = "some private state";  //create a property with a private name key
print(obj.key);                   // still "undefined"
print(obj["key"]);                // "undefined"
print(obj[key]);                  // "some private state"

In the above example the property created by using the private name value is essentially an instance-private member of obj. In general, various levels of visibility (instance private, class private, friends, module private, etc.) can be achieved simply by using lexical scoping to control access to the variables that contain the private key values.

For example:

function ObjWithInstancePrivateMember(secret) {
  const s = Name.create();
  this[s] = secret;
  this.method = function () {
    return doSomething(this[s]);
  }
}

const priv = Name.create();
function ObjWithClassPrivateMember(secret) {
  this[priv] = secret;
  this.method = function (another) {
    return doSomethingTogether(this[priv], another[priv]);
  }
}

Note that [ ] notation with an expression that evaluates to a private name value must be used to access such private members. There have been various proposals such as strawman:names and strawman:private_names to allow dotted property access to be used with private name keys. However, these either had technical problems or their semantics were rejected by community discussion on this list. The issues raised in response to those proposals are likely to recur for any private member access syntax that uses dot notation.

A weakness with using private name objects to define private members is that currently there is no way to declaratively define objects with such members. Consider the following definition of a Point abstraction using extended object literals and a simple naming convention to identify "private" members:

const Point = {
  //private members
  __x: 0,
  __y: 0,
  __validate(x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this.__validate(x, y)) throw "invalid";
    return this <| {
      __x: x,
      __y: y
    }
  },
  add(anotherPoint) {
    return this.new(this.__x + anotherPoint.__x, this.__y + anotherPoint.__y)
  }
}

Using private name objects, this would have to be written as:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();
const Point = {
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    let obj = this <| { };
    obj[__x] = x;
    obj[__y] = y;
    return obj;
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}
//private members for Point
Point[__x] = 0;
Point[__y] = 0;
Point[__validate] = function (x, y) { return typeof x == 'number' && typeof y == 'number' };

The biggest issue with this rewrite is that it loses the declarative definition and replaces it with a hybrid that has some declarative parts and some imperative parts. This breaks the logical encapsulation of the definition of the Point abstraction, making it harder to understand and manipulate for both humans and mechanical tools. The reason this is necessary is that object literal notation currently has no way to express that a property definition uses a name that needs to be interpreted as a reference to a lexical variable whose value is the actual private name object. Fixing this requires an additional extension to object literals.

One possibility is to prefix each property definition that uses a private key value with a keyword. For example:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();
Point = {
  //private members
  private __x: 0,
  private __y: 0,
  private __validate(x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| {
      private __x: x,
      private __y: y
    }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

Note that the separate definition of the private names is still required. In the above example, prefixing a property definition with 'private' means that the identifier in the property name position is evaluated as a lexical reference and its value is used as the key of the property that is being defined. (If it is not a private name object, ToString of the value is used as the key.) There are a couple of potential issues with this approach. The 'private' keyword is very noticeable, but its meaning is different from what readers familiar with widely used languages might expect. In particular, it does not control the actual accessibility of the property. It only means that the property key may be a private name value. The actual accessibility of the property is determined by the visibility of the private name value. Another potential issue is that it precludes the use of 'private' for other purposes. For example, a possible shorthand for:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();

might be:

private __x;
private __y;
private __validate;

and if this was available, it might be useful to allow such definitions to occur inside an object literal so that private name object definitions could be logically bundled in the object declaration that uses them. The use of 'private' as a property prefix might preclude this and other usages of 'private'.

Another alternative that avoids using the 'private' prefix is to allow the property name in a property definition to be enclosed with brackets:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();
Point = {
  //private members
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| {
      [__x]: x,
      [__y]: y
    }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

The meaning is the same as with the private prefix: the identifier enclosed in brackets is evaluated as a lexical reference and its value is used as the key of the property that is being defined. Using brackets in this manner would reinforce the fact that private name keyed properties can only be accessed with [ ] property access syntax.

Of course some people may object to this requirement that bracket access must be used to access private state properties. One possible objection is that [ ] suggests array access or, more generally, computed property access where the actual property accessed is likely to vary between different executions of an expression containing them. A fix for this could be to use a different notation for private name based property access. For example, obj@foo could be defined to mean the same thing as obj[foo] if foo's value is a private name object. If that notation was used for property accesses, it could also be used for property declarations. For example:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();
Point = {
  //private members
  @__x: 0,
  @__y: 0,
  @__validate(x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this@__validate(x, y)) throw "invalid";
    return this <| {
      @__x: x,
      @__y: y
    }
  },
  add(anotherPoint) {
    return this.new(this@__x + anotherPoint@__x, this@__y + anotherPoint@__y)
  }
}

Each of these approaches seems plausible. For a side-by-side comparison of the above example using the alternatives, see harmony:private-name-alternatives.pdf. I'm interested in feedback on the alternatives.

I've only used object literals in these examples; however, the same alternatives should be equally applicable to class declarations.

# Andreas Rossberg (14 years ago)

On 8 July 2011 21:16, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

The current version of the private names proposal harmony:private_name_objects simply exposes a constructor for creating unique values that can be used as property keys:

Of the several private names proposals around, I find this one preferable. It is clean and simple, and provides the functionality needed in an orthogonal manner. It seems worth exploring how well this works in practice before we settle on something more complicated.

One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it maybe enjoys special treatment.
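
For concreteness, a sketch of the distinction being suggested (the "name" typeof result is this suggestion, not the current proposal):

const key = Name.create();
typeof key;   // "object" under the current proposal
typeof key;   // "name" under this suggestion: a new primitive type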

Another alternative that avoids using the 'private' prefix is to allow the property name in a property definition to be enclosed with brackets:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();
Point = {
  //private members
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| {
      [__x]: x,
      [__y]: y
    }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions. So the notation would be the proper dual to bracket access notation. From a symmetry and expressiveness perspective, this is very appealing.

Notation-wise, I think people would get used to using brackets. I see no good reason to introduce yet another projection syntax, like @.

Whether additional sugar is worthwhile -- e.g. private declarations -- remains to be explored. (To be honest, I haven't quite understood yet in what sense such sugar would really be more "declarative". Sure, it is convenient and perhaps more readable. But being declarative is a semantic property, and cannot be achieved by simple syntax tweaks.)

# Brendan Eich (14 years ago)

On Jul 8, 2011, at 12:16 PM, Allen Wirfs-Brock wrote:

The current Harmony classes proposal harmony:classes includes the concept of private instance members and syntax for defining them. While it presents a syntax for accessing them (e.g., private(foo).bar accesses the private 'bar' member of the object that is the value of foo) there does not yet appear to be consensus acceptance of this access syntax.

Oh, quite the opposite -- everyone on TC39 with whom I've spoken agrees that private(this) syntax is straw that must be burned up. We need new syntax, or else we need to do at least what Dave proposed in "minimal classes": defer private syntax, let programmers use private name objects and explicit [] indexing.

I do think we can say a few more things about private in class that may not be in the requirements on the wiki:

On the level of rationale for why class- and not instance-private, Juan Ignacio Dopazo in private correspondence made a good observation, shown by his example:

class MyClass {
  private foo() {}
  bar() {
    var self = this;
    setTimeout(function () { self.foo(); }, 0);
  }
}

Because |this| is not lexical, instance- rather than class-private access that requires "this." to the left of the private variable reference does not work unless you use the closure pattern explicitly and abjure |this|.
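
(A sketch of that closure pattern for contrast -- the private state lives in lexical variables rather than on |this|, and the factory name is invented for illustration:)

function makeMyClass() {
  function foo() { /* private, captured lexically */ }
  return {
    bar() {
      setTimeout(function () { foo(); }, 0);  // no |this| needed
    }
  };
}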

The straw "private(foo)" syntax doesn't help if the goal is instance privacy, since the inner function has no idea (runtime, never mind compile-time) how to enforce to which particular instance self must refer.

Requiring .bind(this) after the function expression passed to setTimeout can be used to work around such a hypothetical, mandatory "this."-prefix instance-private restriction, but that's onerous and it can be too costly.
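
(For instance, under the straw syntax, something like:)

setTimeout(function () { private(this).foo(); }.bind(this), 0);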

Another observation about any class-private scheme we might consider in the current context: private instance variables in many ways (notably not for Object.freeze) act like properties, and the syntax mooted so far casts them in that light. Even if the ES5 reflective APIs, such as Object.getOwnPropertyNames, rightly skip privates on a class instance, proxies may raise the question: how does a private variable name reflect as a property name?

class Point {
  constructor(x, y) {
    private x = x, y = y;
  }
  equals(other) {
    return private(this).x is private(other).x &&
           private(this).y is private(other).y;
  }
  ...
}

We cannot know how to ask for private-x from other without special syntax of some kind, either at the access point or as a binding declaration affecting all .x (and x: in object literals). So here I use the proposal's straw private(foo) syntax.

Could other be a proxy that somehow has a private data record? Could other denote a class instance whose [[Prototype]] is a proxy? I claim we do not want private(foo) by itself, no .x after, to reify as an object, but if it did, then it seems to me it could be used with a proxy to reflect on private variable names.

There are certainly several choices here, but the one I currently favor as simplest, which creates no new, ad-hoc concepts in the language, is that class-private instance variables are properties named by private name objects.

Per the requirements for private in the classes proposal, this means Object.freeze does not freeze private-named properties, or at least does not make private-named data properties non-writable.

Perhaps Object.preventExtensions should not restrict private-named properties from being added to the object. This seems strange and wrong at first, but if freeze does not affect private-named properties, I'm not sure there is a "defensive consistency" threat from attackers decorating frozen objects with ad-hoc properties named by private name objects that only the attacker could create or access, which cannot collide with the class's private names.
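
A sketch of that scenario, assuming the relaxation under discussion (private-named properties exempt from preventExtensions):

const mark = Name.create();    // only its creator can access this name
const frozen = Object.freeze({});
frozen[mark] = "decoration";   // would be permitted under the relaxation
frozen.pub = 1;                // still rejected: a string-named extension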

Perhaps this is uncontroversial. I hope so, but I don't assume it is, and I suspect people who worked on the classes proposal may disagree. Cc'ing Mark in particular, since I'm just writing down some preliminary thoughts and intermediate conclusions here, and I could be way off base.

Ok, back to your good point about wanting private names to be usable in object initialisers:

Each of these approaches seems plausible. For a side-by-side comparison of the above example using the alternatives, see harmony:private-name-alternatives.pdf. I'm interested in feedback on the alternatives.

The first thing I'd say is that, whatever the syntax (private prefix, [] around property name, or @ prefix), it seems too verbose to require boilerplate of const __x = Name.create(), etc. before the object literal.

Wouldn't it be better for the prefix, brackets or sigil by itself to define a private name and use it, also putting it in scope for the rest of the initialiser? With temporal dead zone error semantics for use before def, as with const?
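
(A hypothetical rendering of that idea -- the method name here is invented for illustration:)

const Point = {
  @x: 0,    // defines a fresh private name x and uses it, in scope below
  @y: 0,
  magnitude() { return Math.sqrt(this@x * this@x + this@y * this@y); }
};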

On the syntax menu, the [] bracketing seems to allow arbitrary computed property names, but if I'm reading you right, you don't propose that: rather you want "constant" but not static property names, so that the initializer's shape does not depend on evaluating arbitrary expressions to compute property names (whether string-equated public names, or private name objects). Do I have this right?

If so, I agree we want the syntax to shout "here is a private property name!" In this light, private works but is verbose, @ works as well and is concise (maybe too concise for some folks).

I also contend that the static (compile-time, early-error time) shape of object literals in JS, in contrast to the mandatory name-quoting and full evaluation of unquoted name expressions in other languages such as Python, is something to consider keeping.

If so, then const is not enough. We would really need a static evaluation stage where private names could be created and used, but no other expressions evaluated. But such a stage has been rejected in the Harmony era, e.g. for guards.

Modules add a different kind of staging that is not problematic, but two stages for evaluating any given expression or statement is a bridge too far. I'm not in favor of adding another such evaluation regime.

I've only used object literals in these examples; however, the same alternatives should be equally applicable to class declarations.

With classes as proposed, though, the way private name objects would be created would be via "private x = x, y = y;" declarations within the constructor body, as in my Point example above. That seems unfortunate since the scope of these names, whatever the access syntax, is wider than the constructor body. This is one of the open issues with classes I've mentioned here but not quite captured on the wiki page.

But whatever the class syntax, and the disposition of private in class and even classes in ES.next, I agree we should expect private declarative and expression forms to work the same in object initialisers and in classes.

It would be good to get everyone buying into this private-means-property-with-private-name-key-everywhere agreement.

# Brendan Eich (14 years ago)

On Jul 8, 2011, at 2:43 PM, Andreas Rossberg wrote:

One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it maybe enjoys special treatment.

We went back and forth on this. I believe the rationale is in the wiki (but perhaps in one of the strawman:name pages). There are a couple of reasons:

  1. We want private name objects to be usable as keys in WeakMaps. Clearly we could extend WeakMaps to have either "object" (but not "null") or "name" typeof-type keys, but that complexity is not warranted yet.

  2. Private name objects are deeply frozen and behave like value types (since they have no copy semantics and you can only generate fresh ones). Thus they are typeof-type "object" but clearly distinct from string-equated property names that JS has sported so far.

In other words, we don't gain any distinctiveness, or make any particular claims about private name objects that could not be made about other (deeply-frozen, generated-only, say by Object.create or Proxy.create in a distinguished factory function) kinds of objects, via a new typeof-type.

In light of these points, making private names be "objects" in both the as-proposed WeakMap sense, and the typeof-result sense, seems best.

So, while we have added "null" as a new typeof result string, and we have considered adding other novel typeof results, and we believe that we can add new typeof results with good enough reason (thanks to IE adding some non-standard ones long ago, and programmers tending to write non-exhaustive switch and if-else "cond" structures to handle only certain well-known typeof cases), in the case of private name objects, we don't think we have good enough reason to add a typeof name -- and then to complicate WeakMap.
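
(Point 1 in code, for illustration -- a private name object used directly as a WeakMap key:)

const key = Name.create();
const wm = new WeakMap();
wm.set(key, "some associated state");  // allowed: names are typeof "object"
wm.get(key);                           // "some associated state"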

Point = {
  //private members
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| {
      [__x]: x,
      [__y]: y
    }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions.

Then the shape of the object is not static. Perhaps this is worth the costs to implementations and other "analyzers" (static program analysis, human readers). We should discuss a bit more first, as I just wrote in reply to Allen.

So the notation would be the proper dual to bracket access notation. From a symmetry and expressiveness perspective, this is very appealing.

It does help avoid eval abusage, on the upside.

Proceeding bottom-up, with orthogonal gap-filling primitives that compose well, is our preferred way for Harmony. Private name objects without new syntax, requiring bracket-indexing, won in part by filling a gap without jumping to premature syntax with novel binding semantics (see below).

Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write "object literal" any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call "ObjectLiterals")?

Notation-wise, I think people would get used to using brackets. I see no good reason to introduce yet another projection syntax, like @.

Agreed that unless we use @ well for both property names in initialisers and private property access, and in particular if we stick with bracketing for access, then square brackets win for property naming too -- but there's still the loss-of-static-shape issue.

Whether additional sugar is worthwhile -- e.g. private declarations -- remains to be explored. (To be honest, I haven't quite understood yet in what sense such sugar would really be more "declarative". Sure, it is convenient and perhaps more readable. But being declarative is a semantic property, and cannot be achieved by simple syntax tweaks.)

Good point!

The original name declaration via "private x" that Dave championed was definitely semantic: it created a new static lookup hierarchy, lexical but for names after . in expressions and before : in property assignments in object literals. This was, as Allen noted, controversial, and enough respected folks on es-discuss (I recall Andrew Dupont in particular) and in TC39 reacted negatively that we separated and deferred it. I do not know how to revive it productively.

# Brendan Eich (14 years ago)

On Jul 8, 2011, at 3:24 PM, Brendan Eich wrote:

In other words, we don't gain any distinctiveness, or make any particular claims about private name objects that could not be made about other (deeply-frozen, generated-only, say by Object.create or Proxy.create in a distinguished factory function) kinds of objects, via a new typeof-type.

Oh, of course you meant to distinguish private names via typeof precisely to tell that they are not converted to strings when used as property names. For that test, the proposal

harmony:private_name_objects

proposes an isName predicate function exported from the "@name" built-in module.

# Allen Wirfs-Brock (14 years ago)

On Jul 8, 2011, at 3:24 PM, Brendan Eich wrote:

On Jul 8, 2011, at 2:43 PM, Andreas Rossberg wrote:

Point = {
  //private members
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x == 'number' && typeof y == 'number' },
  //public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| {
      [__x]: x,
      [__y]: y
    }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions.

Then the shape of the object is not static. Perhaps this is worth the costs to implementations and other "analyzers" (static program analysis, human readers). We should discuss a bit more first, as I just wrote in reply to Allen.

This is one of the reasons I think I slightly prefer the @ approach. @ can be defined similarly to . in that it must be followed by an identifier and that the identifier must evaluate to a private name object. The latter in the general case would have to be a runtime check, but in many common cases it could be statically verified. Even this doesn't guarantee that we statically know the shape. Consider:

function f(a,b) { return { @a: 1, @b: 2 } };

const x = Name.create();
const y = Name.create();
let obj1 = f(x, y);   // properties keyed by x and y
let obj2 = f(y, x);   // the same two keys, with roles swapped
let obj3 = f(x, x);   // both property definitions use the same key
let obj4 = f(y, y);   // likewise, keyed only by y

So the notation would be the proper dual to bracket access notation. From a symmetry and expressiveness perspective, this is very appealing.

It does help avoid eval abusage, on the upside.

If you mean something like:

let propName = computeSomePropertyName();
let obj = eval("({" + propName + ": null})");

I'd say whoever does that doesn't know the language well enough, as there are already good alternatives such as:

let obj = {};
obj[propName] = null;

or

let obj = new Object;
Object.defineProperty(obj, propName, {value: null, writable: true, enumerable: true, configurable: true});

I know that people do stupid and unnecessary eval tricks today, but I doubt that adding yet another non-eval way to accomplish the same thing is going to stop that. The basic problem they have is not knowing the language, and making the language bigger isn't likely to make things any better for them.

Proceeding bottom-up, with orthogonal gap-filling primitives that compose well, is our preferred way for Harmony. Private name objects without new syntax, requiring bracket-indexing, won in part by filling a gap without jumping to premature syntax with novel binding semantics (see below).

Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write "object literal" any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call "ObjectLiterals")?

Another concern is that it creates another look-ahead issue for the unify blocks and object initializers proposal.

Notation-wise, I think people would get used to using brackets. I see no good reason to introduce yet another projection syntax, like @.

Agreed that unless we use @ well for both property names in initialisers and private property access, and in particular if we stick with bracketing for access, then square brackets win for property naming too -- but there's still the loss-of-static-shape issue.

There is also an argument to be made that [ ] and @ represent two different use cases, computed property access and private property access, and that for that reason there should be a syntactic distinction between them.

Whether additional sugar is worthwhile -- e.g. private declarations -- remains to be explored. (To be honest, I haven't quite understood yet in what sense such sugar would really be more "declarative". Sure, it is convenient and perhaps more readable. But being declarative is a semantic property, and cannot be achieved by simple syntax tweaks.)

Good point!

The original name declaration via "private x" that Dave championed was definitely semantic: it created a new static lookup hierarchy, lexical but for names after . in expressions and before : in property assignments in object literals. This was, as Allen noted, controversial and enough respected folks on es-discuss (I recall Andrew Dupont in particular) and in TC39 reacted negatively that we separated and deferred it. I do not know how to revive it productively.

Actually, I think you can blame me rather than Dave for the dual lookup hierarchy. Dave's (and Sam's) original proposal did have special semantics for private name declarations but only had a single lookup hierarchy.

# Brendan Eich (14 years ago)

On Jul 8, 2011, at 4:21 PM, Allen Wirfs-Brock wrote:

On Jul 8, 2011, at 3:24 PM, Brendan Eich wrote:

Then the shape of the object is not static. Perhaps this is worth the costs to implementations and other "analyzers" (static program analysis, human readers). We should discuss a bit more first, as I just wrote in reply to Allen.

This is one of the reasons I think I slightly prefer the @ approach. @ can be defined similarly to . in that it must be followed by an identifier and that the identifier must evaluate to a private name object. The latter in the general case would have to be a runtime check, but in many common cases it could be statically verified.

This does give a probabilistic edge to implementations optimizing @ for private names only (but with guards that throw on non-private-name result of evaluating what's on the right of @), and predicting or static-analyzing shape.

Even this doesn't guarantee that we statically know the shape.

Certainly not. And that is a change from today, which I think we ought to discuss. Because as you note below, private names could be added to objects declared by initialisers that lack any private name syntax, after the initialiser; or after a new Object or Object.create.

It does help avoid eval abusage, on the upside.

If you mean something like:

let propName = computeSomePropertyName();
let obj = eval("({" + propName + ": null})");

I'd say whoever does that doesn't know the language well enough, as there are already good alternatives such as:

let obj = {};
obj[propName] = null;

or

let obj = new Object;
Object.defineProperty(obj, propName, {value: null, writable: true, enumerable: true, configurable: true});

No, the problem is not that there aren't other ways to do this. Or that people are "too dumb". I've had real JS users, often with some Python knowledge but sometimes just proceeding from a misunderstanding of how object literals are evaluated, ask for the ability to compute property names selectively in object literals.

Sure, they can add them after. That sauce for the goose is good for the private-name gander too, no?

Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write "object literal" any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call "ObjectLiterals")?

Another concern is that it creates another look-ahead issue for the unify blocks and object initializers proposal.

Yes, indeed. I've been thinking about that one since yesterday, without much more to show for it except this: I think we should be careful not to extend object literals in ways that create more ambiguity with blocks.

I grant that the new method syntax is too sweet to give up. But I don't think "{[...", "{!...", etc. for # and ~ after {, are yet worth the ambiguity that is future-hostile to block vs. object literal unity.

Agreed that unless we use @ well for both property names in initialisers and private property access, and in particular if we stick with bracketing for access, then square brackets win for property naming too -- but there's still the loss-of-static-shape issue.

There is also an argument to be made that [ ] and @ represent two different use cases, computed property access and private property access, and that for that reason there should be a syntactic distinction between them.

Is this different from your first point above, about implementation optimization edge?

The original name declaration via "private x" that Dave championed was definitely semantic: it created a new static lookup hierarchy, lexical but for names after . in expressions and before : in property assignments in object literals. This was, as Allen noted, controversial and enough respected folks on es-discuss (I recall Andrew Dupont in particular) and in TC39 reacted negatively that we separated and deferred it. I do not know how to revive it productively.

Actually, I think you can blame me rather than Dave for the dual lookup hierarchy. Dave's (and Sam's) original proposal did have special semantics for private name declarations but only had a single lookup hierarchy.

Of course -- sorry about that, credit and blame where due ;-).

# Allen Wirfs-Brock (14 years ago)

On Jul 8, 2011, at 2:58 PM, Brendan Eich wrote:

On Jul 8, 2011, at 12:16 PM, Allen Wirfs-Brock wrote:

The current Harmony classes proposal harmony:classes includes the concept of private instance members and syntax for defining them. While it presents a syntax for accessing them (e.g., private(foo).bar accesses the private 'bar' member of the object that is the value of foo) there does not yet appear to be consensus acceptance of this access syntax.

Oh, quite the opposite -- everyone on TC39 with whom I've spoken agrees that private(this) syntax is straw that must be burned up. We need new syntax, or else we need to do at least what Dave proposed in "minimal classes": defer private syntax, let programmers use private name objects and explicit [] indexing.

I do think we can say a few more things about private in class that may not be in the requirements on the wiki:

On the level of rationale for why class- and not instance-private, Juan Ignacio Dopazo in private correspondence made a good observation, shown by his example:

class MyClass {
  private foo() {}
  bar() {
    var self = this;
    setTimeout(function () { self.foo(); }, 0);
  }
}

Because |this| is not lexical, instance- rather than class-private access that requires "this." to the left of the private variable reference does not work unless you use the closure pattern explicitly and abjure |this|.

The straw "private(foo)" syntax doesn't help if the goal is instance privacy, since the inner function has no idea (runtime, never mind compile-time) how to enforce to which particular instance self must refer.

Requiring .bind(this) after the function expression passed to setTimeout can be used to work around such a hypothetical, mandatory "this."-prefix instance-private restriction, but that's onerous and it can be too costly.

Block lambdas would presumably also provide a solution if they were added to the language.

Another observation about any class-private scheme we might consider in the current context: private instance variables in many ways (notably not for Object.freeze) act like properties, and the syntax mooted so far casts them in that light. Even if the ES5 reflective APIs, such as Object.getOwnPropertyNames, rightly skip privates on a class instance, proxies may raise the question: how does a private variable name reflect as a property name?

class Point {
  constructor(x, y) {
    private x = x, y = y;
  }
  equals(other) {
    return private(this).x is private(other).x &&
           private(this).y is private(other).y;
  }
  ...
}

We cannot know how to ask for private-x from other without special syntax of some kind, either at the access point or as a binding declaration affecting all .x (and x: in object literals). So here I use the proposal's straw private(foo) syntax.

Could other be a proxy that somehow has a private data record? Could other denote a class instance whose [[Prototype]] is a proxy? I claim we do not want private(foo) by itself, no .x after, to reify as an object, but if it did, then it seems to me it could be used with a proxy to reflect on private variable names.

There are certainly several choices here, but the one I currently favor as simplest, which creates no new, ad-hoc concepts in the language, is that class-private instance variables are properties named by private name objects.

There is a more general issue about reflection on private members (and I intentionally said "members" here rather than "names"). Some of us believe that private members should never be exposed via reflection. Others of us think there are plenty of reflection use cases where it is either useful or necessary to reflect using private names. The current private name objects proposal allows whether or not reflection is permitted using a name to be set when the private name is created. The pure declarative form in the proposed class declarations doesn't have any way to specify that (regardless of whether or not private names are used as the underlying implementation for private members). If it did allow such control (and private names were used in the implementation), there would still need to be a way to access the private name associated by the class declaration with a specific private member in order to reflect upon it. Sure, you could use getOwnPropertyNames to access all of the property names, but if that was all you had, how could you be sure of the correspondence between private names and declared private members?
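
(Illustratively -- the boolean argument to Name.create below is invented for this sketch; the point from the proposal is only that reflectability is fixed when the name is created:)

const reflectable = Name.create(true);   // hypothetical flag: reflection permitted
const hidden = Name.create();            // reflection not permitted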

Per the requirements for private in the classes proposal, this means Object.freeze does not freeze private-named properties, or at least does not make private-named data properties non-writable.

Perhaps Object.preventExtensions should not restrict private-named properties from being added to the object. This seems strange and wrong at first, but if freeze does not affect private-named properties, I'm not sure there is a "defensive consistency" threat from attackers decorating frozen objects with ad-hoc properties named by private name objects that only the attacker could create or access, which cannot collide with the class's private names.

The classes proposal states this requirement as "The ability to have private mutable state on publicly frozen objects." It doesn't say "add new private properties to publicly frozen objects". The stated requirement could be met by doing a preventExtensions on the object and setting each public property's attributes to non-configurable and non-writable (for data properties) while leaving private properties writable (or even configurable).
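
A minimal sketch of that alternative, assuming Object.getOwnPropertyNames reports only public (string-keyed) properties, so private-named properties are left untouched:

function publicFreeze(obj) {
  Object.preventExtensions(obj);
  Object.getOwnPropertyNames(obj).forEach(function (k) {
    var desc = Object.getOwnPropertyDescriptor(obj, k);
    var attrs = { configurable: false };
    if ('value' in desc) attrs.writable = false;  // data properties become non-writable
    Object.defineProperty(obj, k, attrs);
  });
  return obj;
}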

I wouldn't particularly agree with making the other formulation a requirement, or with requiring that "soft fields" associated with a non-extensible object must be accessible in a referentially transparent manner. (In other words, I think explicit look-aside tables are fine for this use case, which in practice I think occurs quite rarely.)

Perhaps this is uncontroversial. I hope so, but I don't assume it is, and I suspect people who worked on the classes proposal may disagree. Cc'ing Mark in particular, since I'm just writing down some preliminary thoughts and intermediate conclusions here, and I could be way off base.

Ok, back to your good point about wanting private names to be usable in object initialisers:

Each of these approaches seem plausible. For a side-by-side comparison of the above example using the alternatives see harmony:private-name-alternatives.pdf . I'm interested in feedback on the alternatives.

The first thing I'd say is that, whatever the syntax (private prefix, [] around property name, or @ prefix), it seems too verbose to require boilerplate of const __x = Name.create(), etc. before the object literal.

Wouldn't it be better for the prefix, brackets or sigil to by itself define a private name and use it, also putting it in scope for the rest of the initialiser? With temporal dead zone error semantics for use before def, as with const?

But if

const Point = { @x: 0, @y: 0, ... }

means that the private names bound to x and y are instance private, then how would you make x and y friendly between Point and ThreePoint?

const x = Name.create(), y = Name.create();
const Point = { @x: 0, @y: 0, ... }
const ThreePoint = { @x: 0, @y: 0, @z: 0, ... }

I don't think you want something like this to be controlled by the existence/non-existence of a declaration of the same name in an outer scope.

I think what I would like would be for private x; to be equivalent to const x=Name.create(); and to allow that form of private declaration to appear inside an object initializer.

Then you could say:

private x, y;
const Point = { @x: 0, @y: 0, ... }

or

const Point = {
  private x,   //another lookahead issue for block lambdas
  private y,   //sure would be nice for it to be private x,y, but...
  @x: 0,
  @y: 0,
  ...
}

depending on what accessibility you want for the restricted members.

I think that the ability to flexibly define the accessibility of members is one of the key features of the private name approach, but it does add a little verbosity to the most common use case ("class" private).

On the syntax menu, the [] bracketing seems to allow arbitrary computed property names, but if I'm reading you right, you don't propose that: rather you want "constant" but not static property names, so that the initializer's shape does not depend on evaluating arbitrary expressions to compute property names (whether string-equated public names, or private name objects). Do I have this right?

Yes, that is my proposal. However, I'm not sure that it really helps at either the implementation or readability level. It seems like a restriction we could start with and later relax if there was a good reason to.

If so, I agree we want the syntax to shout "here is a private property name!" In this light, private works but is verbose, @ works as well and is concise (maybe too concise for some folks).

I also contend that the static (compile-time, early-error time) shape of object literals in JS, in contrast to the mandatory name-quoting and full evaluation of unquoted name expressions in other languages such as Python, is something to consider keeping.

If so, then const is not enough. We would really need a static evaluation stage where private names could be created and used, but no other expressions evaluated. But such a stage has been rejected in the Harmony era, e.g. for guards.

Yes, I don't think we want to go there so the restriction is mostly token.

Modules add a different kind of staging that is not problematic, but two stages for evaluating any given expression or statement is a bridge too far. I'm not in favor of adding another such evaluation regime.

I've only used object literal in these examples, however the same alternatives should be equally applicable to class declarations.

With classes as proposed, though, the way private name objects would be created would be via "private x = x, y = y;" declarations within the constructor body, as in my Point example above. That seems unfortunate since the scope of these names, whatever the access syntax, is wider than the constructor body. This is one of the open issues with classes I've mentioned here but not quite captured on the wiki page.

I assume that both the declaration and reference of private members in the classes proposal will have to be adjusted to accommodate a private-names-based approach.

But whatever the class syntax, and the disposition of private in class and even classes in ES.next, I agree we should expect private declarative and expression forms to work the same in object initialisers and in classes.

absolutely

# Andreas Rossberg (14 years ago)

On 9 July 2011 00:24, Brendan Eich <brendan at mozilla.com> wrote:

On Jul 8, 2011, at 2:43 PM, Andreas Rossberg wrote:

One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it maybe enjoys special treatment.

We went back and forth on this. I believe the rationale is in the wiki (but perhaps in one of the strawman:name pages). There are a couple of reasons:

  1. We want private name objects to be usable as keys in WeakMaps. Clearly we could extend WeakMaps to have either "object" (but not "null") or "name" typeof-type keys, but that complexity is not warranted yet.

I can see that being a relevant use case for weak maps. But the same logic applies to using, say, strings or numbers as keys. So isn't the right fix rather to allow weak maps to be keyed on any JS value?

  2. Private name objects are deeply frozen and behave like value types (since they have no copy semantics and you can only generate fresh ones). Thus they are typeof-type "object" but clearly distinct from string-equated property names that JS has sported so far.

Oh, of course you meant to distinguish private names via typeof precisely to tell that they are not converted to strings when used as property names. For that test, the proposal

harmony:private_name_objects

proposes an isName predicate function exported from the "@name" built-in module.

Yes, I know. But this is introducing a kind of ad-hoc shadow type mechanism. Morally, this is a type distinction, so why not make it one? Moreover, I feel that ES already has too many classification mechanisms (typeof, class, instanceof), so adding yet another one through the back door doesn't seem optimal.

[...] in the case of private name objects, we don't think we have good enough reason to add a typeof name -- and then to complicate WeakMap.

Why do you think that it would make WeakMap more complicated? As far as I can see, implementations will internally make that very type distinction anyways. And the spec also has to make it, one way or the other.

I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions.

Then the shape of the object is not static. Perhaps this is worth the costs to implementations and other "analyzers" (static program analysis, human readers). We should discuss a bit more first, as I just wrote in reply to Allen.

I don't think that the more general form would be a big deal for implementations. And it is still easy to identify object expressions with static shape syntactically: they don't use [_]. Analyses shouldn't be harder than for a series of assignments (which you probably could desugar this into).

Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write "object literal" any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call "ObjectLiterals")?

(If you care about that, then that's a misnomer already, since the property values have always been arbitrary expressions.)

Thanks,

# Sam Tobin-Hochstadt (14 years ago)

On Sat, Jul 9, 2011 at 3:48 AM, Andreas Rossberg <rossberg at google.com> wrote:

We went back and forth on this. I believe the rationale is in the wiki (but perhaps in one of the strawman:name pages). There are a couple of reasons:

  1. We want private name objects to be usable as keys in WeakMaps. Clearly we could extend WeakMaps to have either "object" (but not "null") or "name" typeof-type keys, but that complexity is not warranted yet.

I can see that being a relevant use case for weak maps. But the same logic applies to using, say, strings or numbers as keys. So isn't the right fix rather to allow weak maps to be keyed on any JS value?

Unlike Names, strings and numbers are forgeable, so if they were allowed as keys in WeakMaps, the associated value could never be safely collected. Names, by contrast, have identity.
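
(To illustrate, assuming a hypothetical weak map that accepted string keys; someData stands for any value:)

const wm = new WeakMap();   // hypothetically keyed on any value
wm.set("config", someData);
// Any code, at any time, can forge the key again:
wm.get("config");
// The entry is always reachable, so it could never be collected --
// defeating the purpose of a *weak* map. A Name, by contrast, is
// unreachable (and its entry collectable) once all references to it are dropped.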

# Brendan Eich (14 years ago)

On Jul 9, 2011, at 12:48 AM, Andreas Rossberg wrote:

On 9 July 2011 00:24, Brendan Eich <brendan at mozilla.com> wrote:

On Jul 8, 2011, at 2:43 PM, Andreas Rossberg wrote:

One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it maybe enjoys special treatment.

We went back and forth on this. I believe the rationale is in the wiki (but perhaps in one of the strawman:name pages). There are a couple of reasons:

  1. We want private name objects to be usable as keys in WeakMaps. Clearly we could extend WeakMaps to have either "object" (but not "null") or "name" typeof-type keys, but that complexity is not warranted yet.

I can see that being a relevant use case for weak maps. But the same logic applies to using, say, strings or numbers as keys. So isn't the right fix rather to allow weak maps to be keyed on any JS value?

Sam covered this, but just to emphasize: only object has unique identity, due to its reference-type nature. Adding names as a distinct reference and typeof type, and extending WeakMap to have (object | name) as its key type, adds complexity compared to subsuming name under object.

  2. Private name objects are deeply frozen and behave like value types (since they have no copy semantics and you can only generate fresh ones). Thus they are typeof-type "object" but clearly distinct from string-equated property names that JS has sported so far.

Oh, of course you meant to distinguish private names via typeof precisely to tell that they are not converted to strings when used as property names. For that test, the proposal

harmony:private_name_objects

proposes an isName predicate function exported from the "@name" built-in module.

Yes, I know. But this is introducing a kind of ad-hoc shadow type mechanism.

We have many such, and JS authors develop their own. ES5 added Array.isArray to help with a common "is this object, which may have come from a different Array built-in in a different global object (window or iframe)" question.
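
(The classic cross-frame case, for example -- a browser-only sketch assuming a same-origin frames[0]:)

var FrameArray = frames[0].Array;   // Array from another global object
var a = new FrameArray(1, 2, 3);
a instanceof Array;                 // false: a different Array.prototype
Array.isArray(a);                   // true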

Given object as a wide type, such ad-hoc type mechanisms are inevitable. Original JS had .constructor testing. ES3 added instanceof, which missed the mark due to multiple globals. There's nothing new or wrong in adding something like isName, IMHO -- esp. compared with the alternative of a new typeof type, WeakMap key type, and possibly other wrinkles we haven't thought of yet.

Morally, this is a type distinction, so why not make it one?

Sorry, typeof is not the only or best way to make such distinctions. It's for testing among different primitive value types (including null in ES.next, yay) vs. the one reference type object.

Moreover, I feel that ES already has too many classification mechanisms (typeof, class, instanceof), so adding yet another one through the back door doesn't seem optimal.

See above, there is nothing novel or evil in isName, isArray (another example), or isGenerator.

typeof is for non-object primitive type classification, vs. object. It's extensible but not to be extended lightly.

instanceof is for proto-chain walking to find a constructor.prototype, and a bit of an attractive nuisance in the world of multiple global objects (browser windows/frames).

I'm not sure which "class" you mean. The [[Class]] disclosed by Object.prototype.toString.call(x).slice(8,-1) is one possibility, which is one of the many, user-extensible ways of distinguishing among objects. ES.next class is just sugar for constructor/prototype patterns with crucial help for extends and super.

There is a two-level hierarchy here. Some of the object-specific stuff did not pan out as hoped (instanceof in particular). Sorry I didn't head that off, my ES3 time on committee was limited.

[...] in the case of private name objects, we don't think we have good enough reason to add a typeof name -- and then to complicate WeakMap.

Why do you think that it would make WeakMap more complicated? As far as I can see, implementations will internally make that very type distinction anyways.

No, as proposed private name objects are just objects, and WeakMap implementations do not have to distinguish (apart from usual GC mark method virtualization internal to implementations) between names and other objects used as keys.

And the spec also has to make it, one way or the other.

Not if names are objects.

I don't think that the more general form would be a big deal for implementations. And it is still easy to identify object expressions with static shape syntactically: they don't use [_]. Analyses shouldn't be harder than for a series of assignments (which you probably could desugar this into).

Yes, [propname] is all plausible as an extension in object literals. I have mixed feelings about it, but it has come up, as I noted in a recent message, from users wanting to intermix a little property name computation in a mostly-static-shape literal.

Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write "object literal" any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call "ObjectLiterals")?

(If you care about that, then that's a misnomer already, since the property values have always been arbitrary expressions.)

Yup. The nonterminal name, used colloquially as well, is the least of our worries, so forget I mentioned it!

# Brendan Eich (14 years ago)

On Jul 9, 2011, at 8:48 AM, Brendan Eich wrote:

See above, there is nothing novel or evil in isName, isArray (another example), or isGenerator.

Also the Proxy.isTrapping, which in recent threads has been proposed to be renamed to Proxy.isProxy or Object.isProxy.

This is a fine point, but we may as well hash it out, and es-discuss is the best place: do we want Object.isFoo predicates, in spite of precedent such as ES5's Array.isArray (which I've whined about as redundantly and a bit misleadingly named, since its argument can be any value -- Object seems a better home, lacking a Value "class" built-in constructor in which to bind Value.isArray).

Back at the dawn of time, we added isNaN and isFinite as value predicates. Not relevant, since they're not type predicates, but worth a mention. ES.next adds saner (no argument coercion) Number.is{NaN,Finite} predicates, and Number.isInteger for testing whether a number is an integral IEEE 754 double.

For ES.next, Function.prototype.isGenerator has been proposed (and implemented in Firefox 4 and up).

Why not Function.isGenerator? Because the prototype method is more usable when you have a function in hand and you want to know whether it's a generator. If you have any value x, you'll need (typeof x === 'function' && 'isGenerator' in x) guarding the x.isGenerator() call, but we judged that as the less common use-case.

Could be we were wrong, or that being consistent even in the hobgoblin/foolish sense is better. But Object.isGenerator crowds poor Object more.

Notice how there's no need for Function.isFunction or Object.isFunction, due to typeof (function(){}) === 'function'. That's another dawn-of-time distinction that does not fit the primitive types vs. object classification use-case I mentioned in the last message -- testing whether x is callable via typeof x === 'function', at least for native if not host objects, is the use-case here.

Some of these questions are at this point more about aesthetics and Principles than about usability. And we have historic messiness in how things have grown.

Given Array.isArray, perhaps we are better off with Proxy.isProxy. The case for Function.prototype.isGenerator hangs on usability, so I'd like to let the prototype in Firefox ride for a bit.

If someone wants to propose a better, extensible and uniform system of type predicate naming, please do.

# Allen Wirfs-Brock (14 years ago)

On Jul 9, 2011, at 9:45 AM, Brendan Eich wrote:

On Jul 9, 2011, at 8:48 AM, Brendan Eich wrote:

See above, there is nothing novel or evil in isName, isArray (another example), or isGenerator.

Also the Proxy.isTrapping, which in recent threads has been proposed to be renamed to Proxy.isProxy or Object.isProxy.

This is a fine point, but we may as well hash it out, and es-discuss is the best place: do we want Object.isFoo predicates, in spite of precedent such as ES5's Array.isArray (which I've whined about as redundantly and a bit misleadingly named, since its argument can be any value -- Object seems a better home, lacking a Value "class" built-in constructor in which to bind Value.isArray).

A consideration here is that whatever conventions we follow set a precedent for user-written code. I don't think we want to encourage the addition of such classification functions to Object or Object.prototype. So from that perspective Array.isArray, Function.isGenerator, and Proxy.isProxy are better exemplars than Object.isArray, etc.

Another consideration is whether such classification schemes should use methods or data properties. An instance-side isFoo() method, to be useful generally, will also require the addition of an isFoo() method to Object.prototype, while an instance-side data property can be tested without augmenting Object.prototype (({}).isFoo is a falsey value). Also, most user code classification schemes will require some sort of instance-side tagging or testing to actually determine whether the instance is in the category.

For those reasons, I think the best classification scheme is usually via an instance-side public data property with a truthy value.
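
A sketch of that convention (the Generator name here is just for illustration):

function Generator() {}
Generator.prototype.isGenerator = true;   // instance-side data property tag

var g = new Generator();
g.isGenerator;      // true
({}).isGenerator;   // undefined -- falsey, with no change to Object.prototype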

But there are some exceptions. For legacy built-ins such as Array it is less risky, from a compatibility perspective, to add properties to the constructor than to the prototype. However, for new built-ins this isn't an issue. Another consideration is meta layering: meta-layer functions shouldn't be exposed on application-layer objects. Proxy.isProxy is a good example. Proxy instances operate as application objects in the application layer. The public face of application objects shouldn't be polluted with a meta-layer method such as isProxy.

Back at the dawn of time, isNaN and isFinite arrived as value predicates. Not relevant, since not type predicates, but worth a mention. ES.next adds saner (no argument coercion) Number.is{NaN,Finite} predicates, and Number.isInteger for testing whether a number is an integral IEEE 754 double.

For ES.next, Function.prototype.isGenerator has been proposed (and implemented in Firefox 4 and up).

Why not Function.isGenerator? Because the prototype method is more usable when you have a function in hand and you want to know whether it's a generator. If you have any value x, you'll need (typeof x === 'function' && 'isGenerator' in x) guarding the x.isGenerator() call, but we judged that as the less common use-case.

Why not just make it a boolean-valued data property?

Could be we were wrong, or that being consistent even in the hobgoblin/foolish sense is better. But Object.isGenerator crowds poor Object more.

Notice how there's no need for Function.isFunction or Object.isFunction, due to typeof (function(){}) === 'function'. That's another dawn-of-time distinction that does not fit the primitive types vs. object classification use-case I mentioned in the last message -- testing whether x is callable via typeof x === 'function', at least for native if not host objects, is the use-case here.

But this is a special case that does not extrapolate to other situations.

Some of these questions are at this point more about aesthetics and Principles than about usability. And we have historic messiness in how things have grown.

In a utilitarian language like JS, usability and aesthetic considerations should be closely linked, and consistency is a significant usability factor. Principles are a way to get consistent application of such aesthetics.

Given Array.isArray, perhaps we are better off with Proxy.isProxy. The case for Function.prototype.isGenerator hangs on usability, so I'd like to let the prototype in Firefox ride for a bit.

If someone wants to propose a better, extensible and uniform system of type predicate naming, please do.

A start of one is above. The exceptions and special cases are where good judgement is required.

# Brendan Eich (14 years ago)

On Jul 9, 2011, at 11:19 AM, Allen Wirfs-Brock wrote:

A consideration here is that whatever conventions we follow sets a precedent for user written code. I don't think we want to encourage the addition of such classification functions to Object or Object.prototype. So from that perspective Array.isArray, Function.isGenerator, and Proxy.isProxy are better exemplars than Object.isArray, etc.

Do we want users extending Array or Function? Same problem as applies to Object. In Edition N+1 we then collide. I don't see a difference in kind here.

Another consideration is whether such classification schemes should use methods or data properties. An instance-side isFoo() method, to be generally useful, will also require the addition of an isFoo() method to Object.prototype, while an instance-side data property can be tested without augmenting Object.prototype (({}).isFoo is a falsey value). Also, most user code classification schemes will require some sort of instance-side tagging or testing to actually determine whether the instance is in the category.

For those reasons, I think the best classification scheme is usually via an instance-side public data property with a truthy value.

I don't agree, because detecting truthy valued properties from host objects (proxies) can have effects. Calling a well-known predicate function that has no side effects is better.

But there are some exceptions. For legacy built-ins such as Array it is less risky, from a compatibility perspective, to add properties to the constructor than to the prototype. However, for new built-ins this isn't an issue. Another consideration is meta layering: meta-layer functions shouldn't be exposed on application-layer objects. Proxy.isProxy is a good example. Proxy instances operate as application objects in the application layer. The public face of application objects shouldn't be polluted with a meta-layer method such as isProxy.

Do we want intercession by proxies for any of these isFoo predicates?

Why not Function.isGenerator? Because the prototype method is more usable when you have a function in hand and you want to know whether it's a generator. If you have any value x, you'll need (typeof x === 'function' && 'isGenerator' in x) guarding the x.isGenerator() call, but we judged that as the less common use-case.

Why not just make it a boolean-valued data property?

See above, and also isArray and other isFoo predicates (isNaN) realized via functions, not data properties.

Could be we were wrong, or that being consistent even in the hobgoblin/foolish sense is better. But Object.isGenerator crowds poor Object more.

Notice how there's no need for Function.isFunction or Object.isFunction, due to typeof (function(){}) === 'function'. That's another dawn-of-time distinction that does not fit the primitive types vs. object classification use-case I mentioned in the last message -- testing whether x is callable via typeof x === 'function', at least for native if not host objects, is the use-case here.

But this is a special case that does not extrapolate to other situations.

Agreed, although we've said we'll consider typeof-result extensions in the future, e.g., for user-defined value types.

Some of these questions are at this point more about aesthetics and Principles than about usability. And we have historic messiness in how things have grown. In a utilitarian language like JS, usability and aesthetic considerations should be closely linked, and consistency is a significant usability factor. Principles are a way to get consistent application of such aesthetics.

It's hard to disagree, but that doesn't settle disagreements, e.g., on data properties that are true-valued or missing, so undefined-falsy, vs. pure predicate functions.

In the worst case, our argument now must go meta. I don't want to do that. My argument for pure predicates is concrete, not abstract.

# Allen Wirfs-Brock (14 years ago)

On Jul 9, 2011, at 1:43 PM, Brendan Eich wrote:

On Jul 9, 2011, at 11:19 AM, Allen Wirfs-Brock wrote:

A consideration here is that whatever conventions we follow sets a precedent for user written code. I don't think we want to encourage the addition of such classification functions to Object or Object.prototype. So from that perspective Array.isArray, Function.isGenerator, and Proxy.isProxy are better exemplars than Object.isArray, etc.

Do we want users extending Array or Function? Same problem as applies to Object. In Edition N+1 we then collide. I don't see a difference in kind here.

No, but the example we are setting is that if you are hanging such predicates off of constructors they should be constructors related to what you are testing rather than Object.

Another consideration is whether such classification schemes should use methods or data properties. An instance-side isFoo() method, to be generally useful, will also require the addition of an isFoo() method to Object.prototype, while an instance-side data property can be tested without augmenting Object.prototype (({}).isFoo is a falsey value). Also, most user code classification schemes will require some sort of instance-side tagging or testing to actually determine whether the instance is in the category.

For those reasons, I think the best classification scheme is usually via an instance-side public data property with a truthy value.

I don't agree, because detecting truthy valued properties from host objects (proxies) can have effects. Calling a well-known predicate function that has no side effects is better.

That may be true for host objects/proxies. But in general, for plain user-defined objects, such a predicate function is going to have to inspect the instance for some identifying characteristic (assuming it is a situation where instanceof isn't adequate or desirable). That characteristic is presumably going to manifest as some sort of trademark property on the instance. If you are concerned about spoofing I can see why you might use a well-known predicate that depends upon a private-name-based trademark. However, if that isn't your concern (and my perspective is that in most cases it shouldn't be) you might as well use a public trademark property on instances, in which case a direct test for that property is probably all you need.

However, I agree that best practice would be for such properties to have no access side-effects, regardless of how they are implemented.
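Both styles, sketched with the Name.create() API from the private names proposal (all the other names are hypothetical):

// Non-forgeable brand: only code that can see the key can stamp or test it.
const pointBrand = Name.create();
function makeBrandedPoint(x, y) {
  const p = {x: x, y: y};
  p[pointBrand] = true;
  return p;
}
function isBrandedPoint(v) {
  return v !== null && typeof v === 'object' && !!v[pointBrand];
}

// Public trademark: simpler, and adequate when spoofing is not a concern.
function makePoint(x, y) { return {x: x, y: y, isPoint: true}; }
// test site: if (v.isPoint) ...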

But there are some exceptions. For the legacy built-ins such as Array is less risky from compatibility perspective to add properties to the constructor than to the prototype. However, for new built-in this isn't an issue. Another consider is meta layering: meta-layer functions shouldn't be exposed on application layer objects. Proxy.isProxy is a good example. Proxy instances operate as application objects in the application layer. The public face of application objects shouldn't be polluted with a meta-layer method such as isProxy.

Do we want intercession by proxies for any of these isFoo predicates?

In general, yes. Let's say a new kind of Crazy DOM node interface defines that it has an isCrazyNode property and also has semantics that require the use of a Proxy to natively implement it in JS. In that case isCrazyNode needs to be implemented via the Proxy.

On the other hand, we don't want random objects to start exposing isProxy properties just because a Proxy was used to implement them. That would be unnecessarily exposing implementation details. I fall back on my standard test: can Proxies be used to implement ES Array instances? If such instances implemented via Proxies had a visible isProxy property they would not be an accurate implementation of the ES spec for Array instances.
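For illustration, the CrazyNode case might look roughly like this under the current Proxy proposal (handler mostly elided; everything here is hypothetical):

const crazyNodeHandler = {
  get: function (receiver, name) {
    if (name === 'isCrazyNode') return true;  // classification answered by the trap
    /* ...the crazy semantics that forced a Proxy in the first place... */
    return undefined;
  }
  // ...fundamental traps elided...
};
const node = Proxy.create(crazyNodeHandler);
// node.isCrazyNode is true, yet nothing on node betrays that it is a Proxy.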

Why not Function.isGenerator? Because the prototype method is more usable when you have a function in hand and you want to know whether it's a generator. If you have any value x, you'll need (typeof x === 'function' && 'isGenerator' in x) guarding the x.isGenerator() call, but we judged that as the less common use-case.

Why not just make it a boolean-valued data property?

See above, and also isArray and other isFoo predicates (isNaN) realized via functions, not data properties.

OK, if you have a distinct function that isn't an instance method. However, in general you have to hang that function off of some object (such as the constructor). That means that a test site must have access (probably lexical) to the container of the predicate and must know the name of the predicate. The container is probably accessed via a module import, and if it is a constructor you get access to capabilities (such as creating instances) that go far beyond simple category testing. That violates the principle of least privilege.

I think self-identifying objects have much better modularity. All you need to know is the name of the categorization property, and that should be part of the instance interface that you are already depending upon. If you do it that way there is no need to import a constructor or other container of the predicate and expose additional capabilities that might be available from those. BTW, my concern here isn't security based. It is that minimizing module dependencies gives you more maintainable code.
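The difference in coupling, sketched (module syntax and all names are illustrative only):

// Oracle style: the test site must import Foo, acquiring construction
// capability (new Foo(...)) it never needed just to classify obj.
import Foo from "@foo";              // hypothetical module
if (Foo.isFoo(obj)) { /* ... */ }

// Self-identifying style: the test depends only on obj's own interface.
if (obj.isFoo) { /* ... */ }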

Could be we were wrong, or that being consistent even in the hobgoblin/foolish sense is better. But Object.isGenerator crowds poor Object more.

Notice how there's no need for Function.isFunction or Object.isFunction, due to typeof (function(){}) === 'function'. That's another dawn-of-time distinction that does not fit the primitive types vs. object classification use-case I mentioned in the last message -- testing whether x is callable via typeof x === 'function', at least for native if not host objects, is the use-case here.

But this is a special case that does not extrapolate to other situations.

Agreed, although we've said we'll consider typeof-result extensions in the future, e.g., for user-defined value types.

Some of these questions are at this point more about aesthetics and Principles than about usability. And we have historic messiness in how things have grown. In a utilitarian language like JS, usability and aesthetic considerations should be closely linked, and consistency is a significant usability factor. Principles are a way to get consistent application of such aesthetics.

It's hard to disagree, but that doesn't settle disagreements, e.g., on data properties that are true-valued or missing, so undefined-falsy, vs. pure predicate functions.

My argument for self-identify objects rather than independent predicates is above.

If you buy that self-identification is better, then the question becomes data property or instance method. To me that comes down to: if someone is going to have to code if (typeof obj.isFoo == 'function' && obj.isFoo()) ... why not just say if (obj.isFoo)...

A major source of monkey patching in large Smalltalk applications was people adding isFoo ^false methods to Object so they could test arbitrary objects for Fooness. I assume that as people start building complex JS applications using rich inheritance-based, duck-typed "class" libraries they are going to find the need to do the same sort of classification testing. Data property testing is a way to accomplish this without the monkey patching.
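The JS transliteration of that Smalltalk pattern, next to the data-property alternative (Foo is hypothetical):

function Foo() {}

// Monkey patching: every object needs a default answer, so a stub
// ends up polluting Object.prototype.
Object.prototype.isFoo = function () { return false; };
Foo.prototype.isFoo = function () { return true; };
var obj = new Foo();
obj.isFoo();                // true; ({}).isFoo() is false

// Data-property alternative: no stub needed at all, since
//   Foo.prototype.isFoo = true;  and then  if (obj.isFoo) ...
// works because ({}).isFoo is undefined, hence already falsey.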

# Brendan Eich (14 years ago)

On Jul 9, 2011, at 4:32 PM, Allen Wirfs-Brock wrote:

On Jul 9, 2011, at 1:43 PM, Brendan Eich wrote:

Do we want users extending Array or Function? Same problem as applies to Object. In Edition N+1 we then collide. I don't see a difference in kind here.

No, but the example we are setting is that if you are hanging such predicates off of constructors they should be constructors related to what you are testing rather than Object.

Ok, so I will stop whining about Array.isArray.

... However, if that isn't your concern (and my perspective is that in most cases it shouldn't be) you might as well use a public trademark property on instances, in which case a direct test for that property is probably all you need.

However, I agree that best practice would be for such properties to have no access side-effects, regardless of how they are implemented.

I'm not sure which however wins. ;-)

We have Array.isArray, Proxy.isTrapping (soon to be Proxy.isProxy?), Function.prototype.isGenerator (could be Function.isGenerator -- note different signature, so don't want both -- Object.getPrototypeOf(Function) === Function.prototype). I think methods on built-ins are winning.

In general, yes. Let's say a new kind of Crazy DOM node interface defines that it has an isCrazyNode property and also has semantics that require the use of a Proxy to natively implement it in JS. In that case isCrazyNode needs to be implemented via the Proxy.

On the other hand, we don't want random objects to start exposing isProxy properties just because a Proxy was used to implement them. That would be unnecessarily exposing implementation details. I fall back on my standard test: can Proxies be used to implement ES Array instances? If such instances implemented via Proxies had a visible isProxy property they would not be an accurate implementation of the ES spec for Array instances.

Right, so Proxy.isProxy -- on the built-in namespace object (like a standard module binding), not on each proxy instance.

But you would want isCrazyNode intercession via a Proxy, as you state above. For an Array emulation via Proxy, you'd have to monkey-patch Array.isArray.

My argument for self-identify objects rather than independent predicates is above.

A brand or trademark is quite different from some old truthy data property. One could use private name objects for branding, no forgeries possible.

If you buy that self-identification is better, then the question becomes data property or instance method. To me that comes down to: if someone is going to have to code if (typeof obj.isFoo == 'function' && obj.isFoo()) ... why not just say if (obj.isFoo)...

I agree with that style when detecting a method.

BTW, it leads to the obj.isFoo?() conditional call idea (along with ?.). We never reached consensus there.

A major source of monkey patching in large Smalltalk applications was people adding isFoo ^false methods to Object so they could test arbitrary objects for Fooness. I assume that as people start building complex JS applications using rich inheritance-based, duck-typed "class" libraries they are going to find the need to do the same sort of classification testing.

This was instance-side, right? Seems like it would work with class-side inheritance too.

Data property testing is a way to accomplish this without the monkey patching.

You mean without all the stub ^false / return false methods polluting Object? That is true, but again I return to the precedent (however recent) we've set: Array.isArray. Class-side, not instance-side.

# Allen Wirfs-Brock (14 years ago)

On Jul 9, 2011, at 5:02 PM, Brendan Eich wrote:

On Jul 9, 2011, at 4:32 PM, Allen Wirfs-Brock wrote:

On Jul 9, 2011, at 1:43 PM, Brendan Eich wrote:

... However, if that isn't your concern (and my perspective is that in most cases it shouldn't be) you might as well use a public trademark property on instances, in which case a direct test for that property is probably all you need.

However, I agree that best practice would be for such properties to have no access side-effects, regardless of how they are implemented.

I'm not sure which however wins. ;-)

We have Array.isArray, Proxy.isTrapping (soon to be Proxy.isProxy?), Function.prototype.isGenerator (could be Function.isGenerator -- note different signature, so don't want both -- Object.getPrototypeOf(Function) === Function.prototype). I think methods on built-ins are winning.

I think instances that are self-identifying or self-classifying are best, and that should be the default approach. Whether via a method or a data property is a secondary issue. However, sometimes you need to have a separate object (let's call it an oracle) that is responsible for doing the identification/classification. There are various reasons for using an oracle:

  1. stratification - Proxy.isProxy is an example
  2. It is impossible or inconvenient to add the classification interface to the appropriate instance interface. I think that was the reason for Array.isArray. We were more afraid of adding it to Array.prototype than to Array.
  3. Externally imposed classification that is not cohesive with the rest of an object's behavior. For example, isSerializable is probably dependent upon a specific kind of serializer. There might even be multiple different kinds of serializers installed in a system (see the sketch below).
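A sketch of case 3: two hypothetical serializer oracles may classify the same value differently, so the classification can't sensibly live on the instance:

const JSONSerializer = {
  isSerializable: function (v) {
    // the criterion belongs to this serializer, not to v's own interface
    return v === null || typeof v !== 'function';
  },
  serialize: function (v) { /* ... */ }
};

const BinarySerializer = {
  isSerializable: function (v) {
    return typeof v === 'object' && v !== null;  // a different, stricter criterion
  },
  serialize: function (v) { /* ... */ }
};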

In general, yes. Let's say a new kind of Crazy DOM node interface defines that it has an isCrazyNode property and also has semantics that require the use of a Proxy to natively implement it in JS. In that case isCrazyNode needs to be implemented via the Proxy.

On the other hand, we don't want random objects to start exposing isProxy properties just because a Proxy was used to implement them. That would be unnecessarily exposing implementation details. I fall back on my standard test: can Proxies be used to implement ES Array instances? If such instances implemented via Proxies had a visible isProxy property they would not be an accurate implementation of the ES spec for Array instances.

Right, so Proxy.isProxy -- on the built-in namespace object (like a standard module binding), not on each proxy instance.

But you would want isCrazyNode intercession via a Proxy, as you state above. For an Array emulation via Proxy, you'd have to monkey-patch Array.isArray.

Well, if you were actually using Proxy to implement Array, you would probably also be providing the Array implementation.

However, arguably Array.isArray really should have been Array.prototype.isArray. We treated it as a case 2 from above. Maybe we really didn't need to, but that's water over the dam. I don't think we should use it as precedent for more greenfield situations.

My argument for self-identify objects rather than independent predicates is above.

A brand or trademark is quite different from some old truthy data property. One could use private name objects for branding, no forgeries possible.

Yes, but I think the situations where you really need a non-forgeable brand are relatively rare compared to the need to do more basic classification.

If you buy that self-identification is better, then the question becomes data property or instance method. To me that comes down to: if someone is going to have to code if (typeof obj.isFoo == 'function' && obj.isFoo()) ... why not just say if (obj.isFoo)...

I agree with that style when detecting a method.

BTW, it leads to the obj.isFoo?() conditional call idea (along with ?.). We never reached consensus there.

But if the method implementation is simply going to be return true or return false, why do you need to call it at all?

A major source of monkey patching in large Smalltalk applications was people adding isFoo ^false methods to Object so they could test arbitrary objects for Fooness. I assume that as people start building complex JS applications using rich inheritance-based, duck-typed "class" libraries they are going to find the need to do the same sort of classification testing.

This was instance-side, right? Seems like it would work with class-side inheritance too.

In Smalltalk, yes, on the instance side. But you can always reach the class side from the instance. Essentially the equivalent of JS obj.constructor.

Smalltalk code guidelines would generally say (but I'll use JS syntax here instead of Smalltalk):

obj.isFoo                     //ok
obj.constructor.isFoo(obj)    //less ok, unless there is a good reason
Foo.isFoo(obj)                //even less good

In the first two you are only statically coupled to obj; in the third you are statically coupled to both obj and Foo.

Data property testing is a way to accomplish this without the monkey patching.

You mean without all the stub ^false / return false methods polluting Object? That is true, but again I return to the precedent (however recent) we've set: Array.isArray. Class-side, not instance-side.

Right. And I think that Array.isArray was a special-case exception rather than an exemplar for the future.

# Brendan Eich (14 years ago)

On Jul 9, 2011, at 7:22 PM, Allen Wirfs-Brock wrote:

On Jul 9, 2011, at 5:02 PM, Brendan Eich wrote:

On Jul 9, 2011, at 4:32 PM, Allen Wirfs-Brock wrote:

On Jul 9, 2011, at 1:43 PM, Brendan Eich wrote:

... However, if that isn't your concern (and my perspective is that in most cases it shouldn't be) you might as well use a public trademark property on instances, in which case a direct test for that property is probably all you need.

However, I agree that best practice would be for such properties to have no access side-effects, regardless of how they are implemented.

I'm not sure which however wins. ;-)

We have Array.isArray, Proxy.isTrapping (soon to be Proxy.isProxy?), Function.prototype.isGenerator (could be Function.isGenerator -- note different signature, so don't want both -- Object.getPrototypeOf(Function) === Function.prototype). I think methods on built-ins are winning.

I think instances that are self-identifying or self-classifying are best, and that should be the default approach. Whether via a method or a data property is a secondary issue. However, sometimes you need to have a separate object (let's call it an oracle) that is responsible for doing the identification/classification. There are various reasons for using an oracle:

  1. stratification - Proxy.isProxy is an example
  2. It is impossible or inconvenient to add the classification interface to the appropriate instance interface. I think that was the reason for Array.isArray. We were more afraid of adding it to Array.prototype than to Array.

That's too bad, since we added Function.prototype.isGenerator (which, as I noted last time, shows up on Function, of course!) without problem. It's Object.prototype that people can't extend, and the warning about other built-ins is pre-ES5, born of the lack of enumerability control and the fear of breaking for-in loops.

Adding a non-enumerable Array.prototype method seems doable to me, if the name is clear and not commonly used.

  3. Externally imposed classification that is not cohesive with the rest of an object's behavior. For example, isSerializable is probably dependent upon a specific kind of serializer. There might even be multiple different kinds of serializers installed in a system.

Good point.

In general, yes. Let's say a new kind of Crazy DOM node interface defines that it has an isCrazyNode property and also has semantics that require the use of a Proxy to natively implement it in JS. In that case isCrazyNode needs to be implemented via the Proxy.

On the other hand, we don't want random objects to start exposing isProxy properties just because a Proxy was used to implement them. That would be unnecessarily exposing implementation details. I fall back on my standard test: can Proxies be used to implement ES Array instances? If such instances implemented via Proxies had a visible isProxy property they would not be an accurate implementation of the ES spec for Array instances.

Right, so Proxy.isProxy -- on the built-in namespace object (like a standard module binding), not on each proxy instance.

But you would want isCrazyNode intercession via a Proxy, as you state above. For an Array emulation via Proxy, you'd have to monkey-patch Array.isArray.

Well, if you were actually using Proxy to implement Array, you would probably also be providing the Array implementation.

Not necessarily -- you might just want isArray to return true for your proxied fakes and the real deal. But yes, you would have to provide at least Array.isArray. With all the standard methods on Array.prototype generic, I believe you wouldn't have to replace all of Array.

However, arguably Array.isArray really should have been Array.prototype.isArray. We treated it as a case 2 from above. Maybe we really didn't need to, but that's water over the dam. I don't think we should use it as precedent for more greenfield situations.

First thought: Crock sold us on reformed Number.is{NaN,Finite} along with new Number.isInteger. We can "do both" and correct course for the long run, perhaps: Array.isArray and Array.prototype.isArray. But first we should settle the data property vs. method issue.

If you buy that self-identification is better, then the question becomes data property or instance method. To me that comes down to: if someone is going to have to code if (typeof obj.isFoo == 'function' && obj.isFoo()) ... why not just say if (obj.isFoo)...

I agree with that style when detecting a method.

BTW, it leads to the obj.isFoo?() conditional call idea (along with ?.). We never reached consensus there.

But if the method implementation is simply going to be return true or return false, why do you need to call it at all?

The general case is an optional method, a notification hook call-out or some such.

You're right that for object detection the method call seems a waste. We could have made a function's isGenerator a non-writable, non-configurable, and non-enumerable boolean-valued data property.

But, then it would be an instance property, which burns storage (the shape or hidden class sharing is not the issue, the true or false value slot is); or else an accessor on Function.prototype, which is about as costly as a method in implementations. Users testing it would not have to write (), your point. That wins. But otherwise it's a wash.
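The accessor variant, for comparison (the internal generator check is faked here with a hypothetical flag):

Object.defineProperty(Function.prototype, 'isGenerator', {
  get: function () {
    // a real engine would consult an internal flag on the function;
    // this stand-in property is purely hypothetical
    return !!this.__isGeneratorFlag__;
  },
  enumerable: false,
  configurable: false
});
// the test site then reads naturally, with no call parentheses:
// if (f.isGenerator) ...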

Smalltalk code guidelines would generally say (but I'll use JS syntax here instead of Smalltalk):

obj.isFoo                     //ok
obj.constructor.isFoo(obj)    //less ok, unless there is a good reason
Foo.isFoo(obj)                //even less good

In the first two you are only statically coupled to obj; in the third you are statically coupled to both obj and Foo.

Right, and as you say there may be a reason for that (reasons 1 and 3; not sure about 2).

Data property testing is a way to accomplish this without the monkey patching.

You mean without all the stub ^false / return false methods polluting Object? That is true, but again I return to the precedent (however recent) we've set: Array.isArray. Class-side, not instance-side.

Right. And I think that Array.isArray was a special-case exception rather than an exemplar for the future.

We should correct course in ES.next.

# Allen Wirfs-Brock (14 years ago)

On Jul 9, 2011, at 7:22 PM, Brendan Eich wrote:

On Jul 9, 2011, at 5:02 PM, Allen Wirfs-Brock wrote: ...

  1. stratification - Proxy.isProxy is an example
  2. It is impossible or inconvenient to add the classification interface to the appropriate instance interface. I think that was the reason for Array.isArray. We were more afraid of adding it to Array.prototype than to Array.

That's too bad, since we added Function.prototype.isGenerator (which, as I noted last time, shows up on Function, of course!) without problem. It's Object.prototype that people can't extend, and the warning about other built-ins is pre-ES5, born of the lack of enumerability control and the fear of breaking for-in loops.

Adding a non-enumerable Array.prototype method seems doable to me, if the name is clear and not commonly used.

We can probably still add Array.prototype.isArray if that would help to establish the pattern. Document it as being preferred over Array.isArray.

Well, if you were actually using Proxy to implement Array, you would probably also be providing the Array implementation.

Not necessarily -- you might just want isArray to return true for your proxied fakes and the real deal. But yes, you would have to provide at least Array.isArray. With all the standard methods on Array.prototype generic, I believe you wouldn't have to replace all of Array.

But if you want (new Array()) to create the fakes...

However, arguably Array.isArray really should have been Array.prototype.isArray. We treated it as a case 2 from above. Maybe we really didn't need to, but that's water over the dam. I don't think we should use it as precedent for more greenfield situations.

First thought: Crock sold us on reformed Number.is{NaN,Finite} along with new Number.isInteger. We can "do both" and correct course for the long run, perhaps: Array.isArray and Array.prototype.isArray. But first we should settle the data property vs. method issue.

Yes. I wonder if isArray and isInteger are different kinds of categorizations ("class"-based vs value-based) that perhaps should have distinct naming conventions.

But if the method implementation is simply going to be return true or return false, why do you need to call it at all?

The general case is an optional method, a notification hook call-out or some such.

You're right that for object detection the method call seems a waste. We could have made a function's isGenerator a non-writable, non-configurable, and non-enumerable boolean-valued data property.

But, then it would be an instance property, which burns storage (the shape or hidden class sharing is not the issue, the true or false value slot is); or else an accessor on Function.prototype, which is about as costly as a method in implementations. Users testing it would not have to write (), your point. That wins. But otherwise it's a wash.

Why not a non-writable, non-enumerable, non-configurable data property on Function.prototype?

Smalltalk code guidelines would generally say (but I'll use JS syntax here instead of Smalltalk):

obj.isFoo                     //ok
obj.constructor.isFoo(obj)    //less ok, unless there is a good reason
Foo.isFoo(obj)                //even less good

In the first two you are only statically coupled to obj; in the third you are statically coupled to both obj and Foo.

Right, and as you say there may be a reason for that (reasons 1 and 3; not sure about 2).

And I should have included:

obj.constructor === Foo       //very bad

# Brendan Eich (14 years ago)

On Jul 10, 2011, at 10:54 AM, Allen Wirfs-Brock wrote:

However, arguably Array.isArray really should have been Array.prototype.isArray. We treated it as a case 2 from above. Maybe we really didn't need to, but that's water over the dam. I don't think we should use it as precedent for more greenfield situations.

First thought: Crock sold us on reformed Number.is{NaN,Finite} along with new Number.isInteger. We can "do both" and correct course for the long run, perhaps: Array.isArray and Array.prototype.isArray. But first we should settle the data property vs. method issue.

Yes. I wonder if isArray and isInteger are different kinds of categorizations ("class"-based vs value-based) that perhaps should have distinct naming conventions.

The isFoo predicate convention is used variously in other languages. Kind of like foop in Lisps. I would not tax name length just to distinguish these cases, or others (ontology fanatics will soon be splitting sub-cases of "value-based").

But if the method implementation is simply going to be return true or return false, why do you need to call it at all?

The general case is an optional method, a notification hook call-out or some such.

You're right that for object detection the method call seems a waste. We could have made a function's isGenerator a non-writable, non-configurable, and non-enumerable boolean-valued data property.

But, then it would be an instance property, which burns storage (the shape or hidden class sharing is not the issue, the true or false value slot is); or else an accessor on Function.prototype, which is about as costly as a method in implementations. Users testing it would not have to write (), your point. That wins. But otherwise it's a wash.

Why not a non-writable, non-enumerable, non-configurable data property on Function.prototype?

We're talking about isGenerator, right? There is no Generator constructor, no shadowing opportunity. Function can create a generator if yield occurs in the body string.

Here's a reason to prefer a method, at least one method per "feature": you can then object-detect the method without having to autoconf-style try/catch a generator in eval or Function. Just "if (Function.prototype.isGenerator)" or (shorter, works as noted) "if (Function.isGenerator)" to guard code (perhaps a script injection) that depends on generator support.

# Andreas Rossberg (14 years ago)

On 9 July 2011 14:42, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

Unlike Names, strings and numbers are forgeable, so if they were allowed as keys in WeakMaps, the associated value could never be safely collected.  Names, by contrast, have identity.

Of course you are right, and I shouldn't post in the middle of the night.

# Allen Wirfs-Brock (14 years ago)

On Jul 10, 2011, at 6:58 PM, Brendan Eich wrote:

On Jul 10, 2011, at 10:54 AM, Allen Wirfs-Brock wrote:

Why not a non-writable, non-enumerable, non-configurable data property on Function.prototype?

We're talking about isGenerator, right? There is no Generator constructor, no shadowing opportunity. Function can create a generator if yield occurs in the body string.

Oops, I forgot we were talking about isGenerator. So why not an internally defined

const generatorProto = Object.create(Function.prototype, {isGenerator: {value: true}});

and specify that functions containing yield are created as if by: generatorProto <| function (...) {...yield...}

In other words, generator functions have a different [[Prototype]] than regular functions.
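Under that factoring, detection would need no call and no per-instance storage (a sketch of the proposed, unimplemented semantics):

function g() { yield 1; }        // would get generatorProto as its [[Prototype]]
// g.isGenerator                 === true       (inherited from generatorProto)
// (function () {}).isGenerator  === undefined  (absent, hence falsey)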

Alternatively, isGenerator could be a method, but that's the point here.

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 9:25 AM, Allen Wirfs-Brock wrote:

On Jul 10, 2011, at 6:58 PM, Brendan Eich wrote:

On Jul 10, 2011, at 10:54 AM, Allen Wirfs-Brock wrote:

Why not a non-writable, non-enumerable, non-configurable data property on Function.prototype?

We're talking about isGenerator, right? There is no Generator constructor, no shadowing opportunity. Function can create a generator if yield occurs in the body string.

Oops, I forgot we were talking about isGenerator. So why not an internally defined

const generatorProto = Object.create(Function.prototype, {isGenerator: {value: true}});

and specify that functions containing yield are created as if by: generatorProto <| function (...) {...yield...}

In other words, generator functions have a different [[Prototype]] than regular functions.

There's no point. We haven't ever had a request for this. It requires polluting the global object (or some object) with a property (even adding a private name requires adding a public property naming the private name object).

Alternatively, isGenerator could be a method, but that's the point here.

That's the point we're arguing about, yes.

The method allows object-detection, which helps. It's ad-hoc but web JS has versioned itself based on object detection far better than on other, a priori grand plans (Accept: headers, e.g.).

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 9:32 AM, Brendan Eich wrote:

On Jul 11, 2011, at 9:25 AM, Allen Wirfs-Brock wrote:

On Jul 10, 2011, at 6:58 PM, Brendan Eich wrote:

On Jul 10, 2011, at 10:54 AM, Allen Wirfs-Brock wrote:

Why not a non-writable, non-enumerable, non-configurable data property on Function.prototype?

We're talking about isGenerator, right? There is no Generator constructor, no shadowing opportunity. Function can create a generator if yield occurs in the body string.

Oops, I forgot we were talking about isGenerator. So why not an internally defined

const generatorProto = Object.create(Function.prototype, {isGenerator: {value: true}});

and specify that functions containing yield are created as if by: generatorProto <| function (...) {...yield...}

In other words, generator functions have a different [[Prototype]] than regular functions.

There's no point. We haven't ever had a request for this. It requires polluting the global object (or some object) with a property (even adding a private name requires adding a public property naming the private name object).

I don't think we or anybody else has really explored the extensibility implications of generator functions so it isn't surprising that there have been no requests.

Certainly there is no need to add any new globals to support a distinct prototype for generator functions. Strictly speaking there wouldn't even have to be a property on Function to access it, as it would be a built-in object that is accessed internally when a generator is created. However, I do think it would be useful to make it accessible via something like Function.generatorPrototype. I don't see adding such a property as undesirable pollution.

Alternatively, isGenerator could be a method, but that's the point here.

That's the point we're arguing about, yes.

I think that the factoring of the prototype hierarchy is a largely distinct issue, but, yes, there is some overlap in that the use of a data property for this particular tagging requires a separate prototype object for generators.

The method allows object-detection, which helps. It's ad-hoc but web JS has versioned itself based on object detection far better than on other, a priori grand plans (Accept: headers, e.g.).

Function.generatorPrototype as described above would also support object-detection.

In general, how far do you think we can push an "all new features have to be object-detectable" requirement? Do we need to be able to object-detect <|? How about rest and spread, or destructuring? If we are going to use non-eval detectability as an ECMAScript extension design criterion then maybe we do need a less ad-hoc scheme for feature detection. It wouldn't have to be all that grand...

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 10:25 AM, Allen Wirfs-Brock wrote:

I don't think we or anybody else has really explored the extensibility implications of generator functions so it isn't surprising that there have been no requests.

We certainly have explored generators (extensibility is not a good in itself) since 2006. Multiple libraries on github use them.

Certainly there is no need to add any new globals to support a distinct prototype for generator functions. Strictly speaking there wouldn't even have to be a property on Function to access it, as it would be a built-in object that is accessed internally when a generator is created. However, I do think it would be useful to make it accessible via something like Function.generatorPrototype. I don't see adding such a property as undesirable pollution.

We're still trying to get isFoo conventions in order. Why add another (first-of or one-off) built-in prototype convention?

I'm going to stick with YAGNI and when in doubt, leave it out. Generators win to the extent that they are functions with the least shallow continuation semantics on top that work.

The method allows object-detection, which helps. It's ad-hoc but web JS has versioned itself based on object detection far better than on other, a priori grand plans (Accept: headers, e.g.).

Function.generatorPrototype as described above would also support object-detection.

In general, how far do you think we can push an "all new features have to be object-detectable" requirement? Do we need to be able to object-detect <|?

It's easy to go overboard. Developers do not detect at fine grain to handle insane or never-shipped combinations. The combinatorics explode, and reality is that browsers support feature cohorts that are fairly chunky.

How about rest and spread, or destructuring? If we are going to use non-eval detectability as an ECMAScript extension design criterion then maybe we do need a less ad-hoc scheme for feature detection. It wouldn't have to be all that grand...

Even less grand ones such as the DOM's (www.w3.org/TR/DOM-Level-3-Core/core.html#DOMFeatures) are failures.

# Claus Reinke (14 years ago)

How about rest and spread, or destructuring? If we are going to use non-eval detectability as an ECMAScript extension design criterion then maybe we do need a less ad-hoc scheme for feature detection. It wouldn't have to be all that grand...

Even less grand ones such as the DOM's (www.w3.org/TR/DOM-Level-3-Core/core.html#DOMFeatures) are failures.

Just noticed this hidden sub-thread, which seems to be related to my attempt to start a thread on modular versioning. How about the following attempt at least-grandness?-)

  • have a built-in module hierarchy 'language', with submodules for each new ES/next feature that is implemented in the current engine (language.destructuring, language.generators, ..)

  • code that relies on one of those features can try to import the language.feature module, and will be rejected early if that "feature module" isn't available

  • there could be collective modules corresponding to language versions (ES5, JS1.7, ES/next, ..), which behave as if they imported the language feature modules corresponding to that language version

  • this would suggest that ES/next modules ought to be the first ES/next feature to be implemented, to support versioned addition of the remaining ES/next features; but since modules already need to manage dependencies, there is no need to replicate that functionality for language features
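A hypothetical sketch of the first two points (the module syntax is illustrative; none of these modules exist):

// a script that relies on generators imports the feature module and is
// rejected early by the loader if the feature is absent
import "language.generators";
import "language.destructuring";

// a collective version module would behave as if it imported every
// feature module of that language version
import "language.ES-next";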

Makes sense? Claus

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 10:41 AM, Brendan Eich wrote:

On Jul 11, 2011, at 10:25 AM, Allen Wirfs-Brock wrote:

Certainly there is no need to add any new globals to support a distinct prototype for generator functions. Strictly speaking there wouldn't even have to be a property on Function to access it, as it would be a built-in object that is accessed internally when a generator is created. However, I do think it would be useful to make it accessible via something like Function.generatorPrototype. I don't see adding such a property as undesirable pollution.

We're still trying to get isFoo conventions in order. Why add another (first-of or one-off) built-in prototype convention?

I'm going to stick with YAGNI and when in doubt, leave it out. Generators win to the extent that they are functions with the least shallow continuation semantics on top that work.

Let's try to back out of the brambles a bit:

On Jul 10, 2011, at 6:58 PM, Brendan Eich wrote:

On Jul 10, 2011, at 10:54 AM, Allen Wirfs-Brock wrote:

Yes. I wonder if isArray and isInteger are different kinds of categorizations ("class"-based vs value-based) that perhaps should have distinct naming conventions.

The isFoo predicate convention is used variously in other languages. Kind of like foop in Lisps. I would not tax name length just to distinguish these cases, or others (ontology fanatics will soon be splitting sub-cases of "value-based").

isGenerator is essentially a value test rather than class-like categorization. Methods work well for this because a method can dynamically inspect the value being tested in order to make the determination.

However, methods are less desirable for class-like categorization because they require an existence-guarded call (f.isFoo && f.isFoo()), which potentially leads to monkey patching Object.prototype (Object.prototype.isFoo = function(){return false}). A truthy data property is a plausible alternative that avoids the need for monkey patching, but it doesn't work for value tests.

If a value test can be recast as a class-like categorization, then the data property approach works for it. Using an alternative prototype for all values in a "subclass" (e.g., all generators) seems like a technique that might be plausible in situations like this. It is essentially just a way to factor out of each generator the storage of the true value for the isGenerator property. It doesn't require the exposure of a separate Generator constructor.

We are trying to generalize to a pattern that applies to all (or at least most) isFoo situations. Here is what we seem to have observed so far:

An isFoo method works well for value classification in situations where you will generally already know the "class" of the value.

An independent classification function (perhaps hung from a constructor) may be a good solution when value classification will generally be done in situations where the "class" of the value is not predetermined.

A truthy isFoo data property works well for class-like categorization as long as all values that share the same prototype are considered members of the same category.
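The three patterns in one place (a sketch; isGenerator assumes the Firefox prototype method, and isPoint is a hypothetical trademark property):

function classify(x) {
  if (Array.isArray(x))                            // 2: predicate function, any value
    return 'array';
  if (typeof x === 'function' && x.isGenerator())  // 1: method, "class" known after the guard
    return 'generator';
  if (x && x.isPoint)                              // 3: truthy data property
    return 'point';
  return 'other';
}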

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 11:50 AM, Claus Reinke wrote:

How about rest and spread, or destructuring? If we are going to use non-eval detectability as an ECMAScript extension design criterion then maybe we do need a less ad-hoc scheme for feature detection. It wouldn't have to be all that grand... Even less grand ones such as the DOM's (www.w3.org/TR/DOM-Level-3-Core/core.html#DOMFeatures) are failures.

Just noticed this hidden sub-thread, which seems to be related to my attempt to start a thread on modular versioning. How about the following attempt at least-grandness?-)

  • have a built-in module hierarchy 'language', with submodules for each new ES/next feature that is implemented in the current engine (language.destructuring, language.generators, ..)

  • code that relies on one of those features can try to import the language.feature module, and will be rejected early if that "feature module" isn't available

  • there could be collective modules corresponding to language versions (ES5, JS1.7, ES/next, ..), which behave as if they imported the language feature modules corresponding to that language version

  • this would suggest that ES/next modules ought to be the first ES/next feature to be implemented, to support versioned addition of the remaining ES/next features; but since modules already need to manage dependencies, there is no need to replicate that functionality for language features

Makes sense? Claus

I think there is a (usually unstated) desire to also test for ES.next features that may start to show up as extensions to "ES5"-level implementations. For example, generators in Firefox. You can't depend upon modules in such situations.

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 12:46 PM, Allen Wirfs-Brock wrote:

I think there is a (usually unstated) desire to also test for ES.next features that may start to show up as extensions to "ES5"-level implementations. For example, generators in Firefox. You can't depend upon modules in such situations.

The thread with subject "Design principles for extending ES object abstractions" got into this, and I think (I can't be sure because he hasn't replied yet) Luke Hoban, Dave Herman, and I were kinda/sorta agreeing that the "old scripts" (ES5 and below) need just one API entry point: SystemLoader as a global property (I misspelled it Object.ModuleLoader in my reply to Luke).

Once old script has object-detected its way to joy via SystemLoader, it can load built-in ES6+ libraries such as "@name" and "@iter".

The API for module detection is under development, but it would allow both synchronous testing for a built-in or already-loaded MRL, and an async form for loading from an MRL over the 'net.

Given this (forgive the lack of complete specs, but it's enough, I claim), why do we need anything more for future-friendly "feature detection"?

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 12:40 PM, Allen Wirfs-Brock wrote:

isGenerator is essentially a value test rather than class-like categorization. Methods work well for this because a method can dynamically inspect the value being tested in order to make the determination.

I'm not so sure about this now. I was just reviewing with Dave how the design evolved. We had Function.isGenerator, analogous to Array.isArray. For taskjs, Dave had thought he had a use-case where the code has a function and wants to know whether it's a generator. It turned out (IIUC) that what he really wanted (this will warm your heart) was simply the duck-type "does it implement the iterator protocol" test.

On the other hand, code that wants to ask "is this value a generator?" may not have a-priori knowledge that the value is a function, so a class method such as Function.isGenerator wins.

Indeed Array.isArray, apart from being named as if it were a class method just to avoid polluting the global (or some other) object, is a predicate function -- not a method. You can't know that you have an Array instance, never mind an object-type (typeof sense) value, all the time. If you don't know whether x is an array or even an object as opposed to a primitive, just call Array.isArray(x).

This strongly suggests to me that we want predicate functions. For the "@name" built-in module, isName is one such. There's no need to stick that into a "class object" just to avoid name pollution, since the module lets the importer name the import.

To rehash a subtle point again: our Function.prototype.isGenerator method is particularly nasty if a user knows Array.isArray and expects Function.isGenerator(x) to test whether x is a generator (or any non-generator, whether function or not) value. That expression actually evaluates equivalently to Function.prototype.isGenerator.call(Function, x), and x is ignored -- and Function is of course not a generator.
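Spelled out:

function gen() { yield 1; }   // a generator (Firefox syntax)
Function.isGenerator(gen);
// Function has no own isGenerator, so this finds the inherited
// Function.prototype.isGenerator and evaluates as
//   Function.prototype.isGenerator.call(Function, gen)
// gen is ignored, Function itself gets tested, and the result is
// false for every possible argument.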

So I think we took the wrong door here. Function.isGenerator by analogy to Array.isGenerator, or an isGenerator export from "@iter" (equivalent semantically), is the best way.

However, methods are less desirable for class-like categorization because they require an existence-guarded call (f.isFoo && f.isFoo()), which potentially leads to monkey patching Object.prototype (Object.prototype.isFoo = function(){return false}). A truthy data property is a plausible alternative that avoids the need for monkey patching, but it doesn't work for value tests.

Only if you know you have an object. If the predicate you need has signature "any -> boolean" then you want a function.

If a value test can be recast as a class-like categorization, then the data property approach works for it.

Right, but you can't "recast" for any type of value without writing a prior typeof conjunct. Or did you mean something else by "recast"?

Using an alternative prototype for all values in a "subclass" (e.g., all generators) seems like a technique that might be plausible in situations like this. It is essentially just a way to factor out of each generator the storage of the true value for the isGenerator property. It doesn't require the exposure of a separate Generator constructor.

I think this misses the mark. Both Array.isArray in ES5 and Function.isGenerator in ES.next are testing nominal type tag. They are not testing some ad-hoc boolean "tag" that is not reliable and that can be forged.

We are trying to generalize to a pattern that applies to all (or at least most) isFoo situations. Here is what we seem to have observed so far:

An isFoo method works well for value classification in situations where you will generally already know the "class" of the value.

Agreed.

An independent classification function (perhaps hung from a constructor) may be a good solution when value classification will generally be done in situations where the "class" of the value is not predetermined.

Agreed.

A truthy isFoo data property works well for class-like categorization as long as all values that share the same prototype are considered members of the same category.

Disagree for the use-cases you applied this to. ES5 mandates a [[Class]] check for Array.isArray, and the impetus there was cross-frame Array classification based on nominal really-truly-built-in-Array type tag.

Ditto for any isGenerator worth its name, that is, to the extent that the use-case for isGenerator does not simply want to test "is it an iterator factory", which could be a structural or duck-type test.

# David Herman (14 years ago)

Adding a non-enumerable Array.prototype method seems doable to me, if the name is clear and not commonly used.

We can probably still add Array.prototype.isArray if that would help to establish the pattern. Document it as being preferred over Array.isArray.

This doesn't make sense to me. Let's say I have a variable x and it's bound to some value but I don't know what. That is, it has the type "any." I could check to see that it's an object:

typeof x === "object"

and then that it has an isArray method:

typeof x.isArray === "function"

but how do I know that this isArray method has the semantics I intend? I could check that it's equal to Array.prototype.isArray, but that won't work across iframes.

IOW, I can't call x.isArray() to find out what I want it to tell me until I already know the information I want it to tell me.

Alternatively, if I expect isArray to be a boolean rather than a predicate, I still don't have any way of knowing whether x is of a type that has an entirely different semantics for the name "isArray." It's anti-modular to claim a universal semantics for that name across all possible datatypes in any program ever.

These "static" predicates (which are just glorified module-globals) intuitively have the type:

any -> boolean:T

(I'm using notation from SamTH's research here.) This means a function that takes any value whatsoever and returns a boolean indicating whether the argument is of the type T. Because they accept the type "any", it doesn't make sense to put them in an inheritance hierarchy. It makes sense to have them as functions. Since globals are out of the question, in the past they've been made into "statics." But with modules, we can actually make them functions in their appropriate modules.
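Sketched against a hypothetical "@iter" built-in module (the import form is illustrative):

// inside "@iter":
//   export function isGenerator(x) { /* any -> boolean:Generator */ }

import {isGenerator} from "@iter";  // the importer names the binding lexically
if (isGenerator(x)) { /* ... */ }   // accepts any x; no guard, no inheritance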

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 4:18 PM, Brendan Eich wrote:

So I think we took the wrong door here. Function.isGenerator by analogy to Array.isGenerator,

"... by analogy to Array.isArray", of course.

# David Herman (14 years ago)

I'm not so sure about this now. I was just reviewing with Dave how the design evolved. We had Function.isGenerator, analogous to Array.isArray. For taskjs, Dave had thought he had a use-case where the code has a function and wants to know whether it's a generator. It turned out (IIUC) that what he really wanted (this will warm your heart) was simply the duck-type "does it implement the iterator protocol" test.

Right-- it was a poor initial implementation decision on my part; I shouldn't have done any test at all.

JJB had a different use case: for Firebug, he wanted the ability to check before calling a function whether it was going to have a different control-flow semantics. IOW, he wanted to be able to reliably reflect on values.

On the other hand, code that wants to ask "is this value a generator?" may not have a-priori knowledge that the value is a function, so a class method such as Function.isGenerator wins.

Yeah, I think it's generally more convenient for type-distinguishing predicates to have input type 'any', so that clients don't have to do any other tests beforehand. That way, you can always simply do

isGenerator(x)

instead of

typeof x === "function" && x.isGenerator()

without knowing anything about x beforehand.

So I think we took the wrong door here. Function.isGenerator by analogy to Array.isArray, or an isGenerator export from "@iter" (equivalent semantically), is the best way.

I agree.

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 4:19 PM, David Herman wrote:

Adding a non-enumerable Array.prototype method seems doable to me, if the name is clear and not commonly used.

We can probably still add Array.prototype.isArray if that would help to establish the pattern. Document it as being preferred over Array.isArray.

This doesn't make sense to me. Let's say I have a variable x and it's bound to some value but I don't know what. That is, it has the type "any." I could check to see that it's an object:

typeof x === "object"

and then that it has an isArray method:

typeof x.isArray === "function"

but how do I know that this isArray method has the semantics I intend? I could check that it's equal to Array.prototype.isArray, but that won't work across iframes.

IOW, I can't call x.isArray() to find out what I want it to tell me until I already know the information I want it to tell me.

Alternatively, if I expect isArray to be a boolean rather than a predicate, I still don't have any way of knowing whether x is of a type that has an entirely different semantics for the name "isArray." It's anti-modular to claim a universal semantics for that name across all possible datatypes in any program ever.

In fact, this is exactly how polymorphic reuse and extensibility occur in dynamically typed OO languages. And, for the most part, it actually works even in quite complex programs. You aren't assigning a universal semantics to a name in all programs, but you are asserting a contextual semantics for this particular program. Even within a single program, different objects can assign completely different and unrelated meanings to isFoo as long as the different objects never need to be processed by common code that does an isFoo test.

In practice, if such objects invalidly cross paths, any wrong answers from classification methods are usually easy to spot because they quickly lead to other dynamically detected failures. Once that occurs there typically are easily observable differences between the objects that make the problem easy to find.

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 4:18 PM, Brendan Eich wrote:

On Jul 11, 2011, at 12:40 PM, Allen Wirfs-Brock wrote:

isGenerator is essentially a value test rather than class-like categorization. Methods work well for this because a method can dynamically inspect the value being tested in order to make the determination.

I'm not so sure about this now. I was just reviewing with Dave how the design evolved. We had Function.isGenerator, analogous to Array.isArray. For taskjs, Dave had thought he had a use-case where the code has a function and wants to know whether it's a generator. It turned out (IIUC) that what he really wanted (this will warm your heart) was simply the duck-type "does it implement the iterator protocol" test.

On the other hand, code that wants to ask "is this value a generator?" may not have a-priori knowledge that the value is a function, so a class method such as Function.isGenerator wins.

Indeed Array.isArray, apart from being named as if it were a class method just to avoid polluting the global (or some other) object, is a predicate function -- not a method. You can't know that you have an Array instance, never mind an object-type (typeof sense) value, all the time. If you don't know whether x is an array or even an object as opposed to a primitive, just call Array.isArray(x).

This strongly suggests to me that we want predicate functions. For the "@name" built-in module, isName is one such. There's no need to stick that into a "class object" just to avoid name pollution, since the module lets the importer name the import.

To rehash a subtle point again: our Function.prototype.isGenerator method is particularly nasty if a user knows Array.isArray and expects Function.isGenerator(x) to test whether x (any value, function or not) is a generator. That expression actually evaluates equivalently to Function.prototype.isGenerator.call(Function, x), and x is ignored -- and Function is of course not a generator.
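
Spelled out as code (using the SpiderMonkey-style generator syntax and the Function.prototype.isGenerator method under discussion):

    function g() { yield 1; }      // an old-style generator function

    g.isGenerator();               // true: the method tests |this|, i.e. g
    Function.isGenerator(g);       // false! isGenerator is merely inherited
                                   // from Function.prototype, |this| is the
                                   // Function constructor, and g is ignored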

So I think we took the wrong door here. Function.isGenerator by analogy to Array.isGenerator, or an isGenerator export from "@iter" (equivalent semantically), is the best way.

I was almost sold on this argument, but I see a different issue. "Global" predicate functions like this aren't extensible. Array.isArray and Function.isGenerator work fine because they are testing deep implementation-level characteristics of special objects that can not be emulated by JS code (at least without proxies, in the case of Array). However, for pure JS classification you want them to be duck-type extensible. It is easy to add a new implementation for some category if the category test uses an instance classification property (whether method or data property), perhaps with some monkey patching. But a single global predicate can't be post facto extended to recognize new implementations of the category unless it was built with some internal extension mechanism (and any such mechanism is likely to depend upon some per-instance property, so that just loops us back to the same solution). I think you already brought this up earlier in this thread when you asked how I would extend Array.isArray to recognize a proxy-based implementation of Array.
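
A minimal sketch of that extensibility, using the thread's isDuck example (all names here are made up):

    let Mallard = { isDuck: true, quack() { /* ... */ } };

    // A later, unrelated implementation opts in to the category simply by
    // carrying the same truthy data property:
    let RobotDuck = { isDuck: true, quack() { /* synthesized quack */ } };

    function feed(x) {
        if (typeof x === "object" && x !== null && x.isDuck) x.quack();
    }

    // By contrast, a fixed global isDuck(x) predicate cannot recognize
    // RobotDuck unless it was written with an extension hook.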

However, methods are less desirable for class-like categorization because they require an existence-predicated call (f.isFoo && f.isFoo()) which potentially leads to monkey patching Object.prototype (Object.prototype.isFoo = function(){return false}). A truthy data property is a plausible alternative that avoids the need for monkey patching, but it doesn't work for value tests.

Only if you know you have an object. If the predicate you need has signature "any -> boolean" then you want a function.

All values except for null and undefined are automatically coerced to objects for property access. If a value is expected to possibly be null or undefined then that probably represents a separate condition and should have a separate test. If null/undefined is not an anticipated value then an exception on the property access is probably a fine dynamic error check.
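
That is (isFoo hypothetical):

    ({}).isFoo;      // undefined (falsy) -- absent property, no error
    (42).isFoo;      // undefined (falsy) -- via a temporary Number wrapper
    "abc".isFoo;     // undefined (falsy) -- via a temporary String wrapper
    null.isFoo;      // TypeError
    undefined.isFoo; // TypeError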

If a value test can be recast as a class-like categorization then the data property approach works for it.

Right, but you can't "recast" for any type of value without writing a prior typeof conjunct. Or did you mean something else by "recast"?

No, I don't mean a dynamic type cast. I meant something like giving generator functions a distinct prototype so the isGenerator: true could be factored out of the individual instances.

Using an alternative prototype for all values in a "subclass" (eg all generators) seems like a technique that might be plausible in situations like this. It is essentially just a way to factor out of each generator the storage of the true value for the isGenerator property. It doesn't require the exposure of a separate Generator constructor.
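
A sketch of that factoring; GeneratorFunctionPrototype is hypothetical and would be wired up by the implementation, not by user code:

    let GeneratorFunctionPrototype = Object.create(Function.prototype, {
        isGenerator: { value: true } // non-writable, non-enumerable defaults
    });

    // Every generator function g would then be created such that
    //     Object.getPrototypeOf(g) === GeneratorFunctionPrototype
    // so g.isGenerator is true via inheritance, with no per-instance storage.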

I think this misses the mark. Both Array.isArray in ES5 and Function.isGenerator in ES.next are testing nominal type tag. They are not testing some ad-hoc boolean "tag" that is not reliable and that can be forged.

I agree. But we are trying to extrapolate to a pattern that is applicable to any application-defined classification scheme. Those typically won't be deep nominal types with implementation dependencies. It may simply be that isGenerator (and perhaps isArray) just isn't a good exemplar for the more general situation.

We are trying to generalize to a pattern that applies to all (or at least most) isFoo situations. Here is what we seem to have observed so far:

An isFoo method works well for value classification in situations where you will generally already know the "class" of the value.

Agreed.

An independent classification function (perhaps hung from a constructor) may be a good solution when value classification will generally be done in situations where the "class" of the value is not predetermined.

Agreed.

and the classification doesn't need to be extensible to new implementations.

A truthy isFoo data property works well for class-like categorization as long as all values that share the same prototype are considered members of the same category.

Disagree for the use-cases you applied this to. ES5 mandates a [[Class]] check for Array.isArray, and the impetus there was cross-frame Array classification based on nominal really-truly-built-in-Array type tag.

Ditto for any isGenerator worth its name, that is, to the extent that the use-case for isGenerator does not simply want to test "is it an iterator factory", which could be a structural or duck-type test.

Agreed, these are both cases where the category isn't user extensible. However, I think my statement holds for class-like categorizations that are extensible.

# Brendan Eich (14 years ago)

On Jul 11, 2011, at 5:29 PM, Allen Wirfs-Brock wrote:

I was almost sold on this argument, but I see a different issue. "Global" predicate functions like this aren't extensible. Array.isArray and Function.isGenerator work fine because they are testing deep implementation-level characteristics of special objects that can not be emulated by JS code (at least without proxies, in the case of Array).

True of isName and Proxy.isProxy (née isTrapping) too.

I agree the duck-typing and class-tag test use-cases are different. I'm pretty sure we don't have any standard or proposed isDuck examples, though.

However, for pure JS classification you want them to be duck-type extensible. It is easy to add a new implementation for some category if the category test uses an instance classification property (whether method or data property), perhaps with some monkey patching. But a single global predicate can't be post facto extended to recognize new implementations of the category unless it was built with some internal extension mechanism (and any such mechanism is likely to depend upon some per-instance property, so that just loops us back to the same solution).

Why switch topics to pure JS classification, though? There are two different use-cases here.

I think you already brought this up earlier in this thread when you asked how I would extend Array.isArray to recognize a proxy-based implementation of Array.

I did, but that's not the same case. Array.isArray is not asking a duck- or structural-typing question. It really wants all the magic, which can't be written down via recursive structural types. If you can make a Proxy instance that fools users and passes the Array-Turing-test, great! But then you have to reimplement Array.isArray at least -- perhaps even replace all of Array.
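
For illustration, a sketch of what that reimplementation might look like, using the era's Proxy.create API and a WeakMap as a registry (all names hypothetical):

    const proxyArrays = new WeakMap();

    function makeArrayProxy(handler) {
        let p = Proxy.create(handler, Array.prototype);
        proxyArrays.set(p, true);   // remember our proxy-based "arrays"
        return p;
    }

    const originalIsArray = Array.isArray;
    Array.isArray = function (x) {
        return originalIsArray(x) || proxyArrays.has(x);
    };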

However, methods are less desirable for class-like categorization because they require an existence-predicated call (f.isFoo && f.isFoo()) which potentially leads to monkey patching Object.prototype (Object.prototype.isFoo = function(){return false}). A truthy data property is a plausible alternative that avoids the need for monkey patching, but it doesn't work for value tests.

Only if you know you have an object. If the predicate you need has signature "any -> boolean" then you want a function.

All values except for null and undefined are automatically coerced to objects for property access.

Yes, null and undefined "count" -- especially since with an unknown value you are more likely to have undefined than 42 or "foo". Also, nasty host objects.

I don't think "close enough, ignore null and undefined" cuts it.

If a value is expected to possibly be null or undefined then that probably represents a separate condition and should have a separate test.

Why?

One can write any -> boolean functions. Many built-in operators and functions take any type input argument, not only non-null object arguments.

A restriction against null and undefined is arbitrary and it taxes all programmers who have to (remember to) write the extra test, instead of taxing the implementation to provide the predicate once for all.

If null/undefined is not an anticipated value then an exception on the property access is probably a fine dynamic error check.

Not for Array.isArray or the other built-ins we're discussing.

We shouldn't make arbitrary judgments against certain use-cases. Sometimes you know you have an object (or even a function), and a "narrower" input type for a predicate will do. Other times, you need the widest (any -> boolean) predicate. And the latter can work in the former cases without trouble. So why insist on object.method() with throw on null or undefined?

If a value test can be recast as a class-like categorization then the data property approach works for it.

Right, but you can't "recast" for any type of value without writing a prior typeof conjunct. Or did you mean something else by "recast"?

No, I don't mean a dynamic type cast. I meant something like giving generator functions a distinct prototype so the isGenerator: true could be factored out of the individual instances.

That does not handle the nominal type test you agree is the purpose of isGenerator, so I don't know why you're citing this use-case for the other, is-a-loose-duck-type case.

Using an alternative prototype for all values in a "subclass" (eg all generators) seems like a technique that might be plausible in situations like this. It is essentially just a way to factor out of each generator the storage of the true value for the isGenerator property. It doesn't require the exposure of a separate Generator constructor.

I think this misses the mark. Both Array.isArray in ES5 and Function.isGenerator in ES.next are testing nominal type tag. They are not testing some ad-hoc boolean "tag" that is not reliable and that can be forged.

I agree. But we are trying to extrapolate to a pattern that is applicable to any application-defined classification scheme.

No, or at least: I'm not. Rather, I'm trying to find out whether there is one pattern, or two. I see at least one, the nominal tag test. That needs attention.

We can get to the other one, the loose foo.isDuck boolean data property or undefined falsy-value test, only if we really need to. I think you've made a good case for its extensible nature. But I'd rather leave it for JS users and library authors. And isGenerator has nothing to do with it.

Those typically won't be deep nominal types with implementation dependencies. It may simply be that isGenerator (and perhaps isArray) just isn't a good exemplar for the more general situation.

Whew! That's what I've been getting at.

But per my above words, I'd like to go further in seeking clarity about what not to do, as well as what to do, and what patterns to keep in mind for later (so I'm not complaining about taking the time to diagram the use-cases and solutions a bit). I simply don't see why we need the isDuck method pattern for any built-ins in ES5 or proposed for Harmony. And here I include elaboration of generators' prototype chains.

An independent classification function (perhaps hung from a constructor) may be a good solution when value classification will generally be done in situations where the "class" of the value is not predetermined.

Agreed.

and the classification doesn't need to be extensible to new implementations.

Right, that's a good point. The isDuck scheme is extensible -- to a fault! :-P

A truthy isFoo data property works well for class-like categorization as long as all values that share the same prototype are considered members of the same category.

Disagree for the use-cases you applied this to. ES5 mandates a [[Class]] check for Array.isArray, and the impetus there was cross-frame Array classification based on nominal really-truly-built-in-Array type tag.

Ditto for any isGenerator worth its name, that is, to the extent that the use-case for isGenerator does not simply want to test "is it an iterator factory", which could be a structural or duck-type test.

Agreed, these are both cases where the category isn't user extensible. However, I think my statement holds for class-like categorizations that are extensible.

Do we have any examples of those in the spec., or contemplated for ES.next?

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 6:14 PM, Brendan Eich wrote:

On Jul 11, 2011, at 5:29 PM, Allen Wirfs-Brock wrote:

However, for pure JS classification you want them to be duck-type extensible. It is easy to add a new implementation for some category if the category test uses an instance classification property (whether method or data property), perhaps with some monkey patching. But a single global predicate can't be post facto extended to recognize new implementations of the category unless it was built with some internal extension mechanism (and any such mechanism is likely to depend upon some per-instance property, so that just loops us back to the same solution).

Why switch topics to pure JS classification, though? There are two different use-cases here.

Well, I thought we were trying to find a pattern that was applicable to both:

On Jul 9, 2011, at 11:19 AM, Allen Wirfs-Brock wrote:

A consideration here is that whatever conventions we follow set a precedent for user-written code. I don't think we want to encourage the addition of such classification functions to Object or Object.prototype. So from that perspective Array.isArray, Function.isGenerator, and Proxy.isProxy are better exemplars than Object.isArray, etc.

Maybe we weren't, as user-written code normally doesn't define these sorts of implementation classifications. However, the fact that it took this long for us (well, at least me) to see that is probably a good indication that this distinction isn't always an obvious one.

However, methods are less desirable for class-like categorization because they require an existence-predicated call (f.isFoo && f.isFoo()) which potentially leads to monkey patching Object.prototype (Object.prototype.isFoo = function(){return false}). A truthy data property is a plausible alternative that avoids the need for monkey patching, but it doesn't work for value tests.

Only if you know you have an object. If the predicate you need has signature "any -> boolean" then you want a function.

All values except for null and undefined are automatically coerced to objects for property access.

Yes, null and undefined "count" -- especially since with an unknown value you are more likely to have undefined than 42 or "foo". Also, nasty host objects.

I don't think "close enough, ignore null and undefined" cuts it.

If a value is expected to possibly be null or undefined then that probably represents a separate condition and should have a separate test.

Why?

When null or undefined is used as an explicit marker, there is probably explicit logic to deal with whatever the marker represents. The test for that condition can precede any other classification test.

If they are intentionally being used as such a marker, then a property access that dereferences them is probably an unintended error situation.
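
In code, the ordering looks like this (handlers and isFoo hypothetical):

    if (x === null || x === undefined) {
        handleMissingValue();      // the marker condition has its own logic,
    } else if (x.isFoo) {          // so the classification test below can
        handleFoo(x);              // safely dereference x
    }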

One can write any -> boolean functions. Many built-in operators and functions take any type input argument, not only non-null object arguments.

Regardless of whether or not that was a wise design choice, it is precedent that we do need to consider.

A restriction against null and undefined is arbitrary and it taxes all programmers who have to (remember to) write the extra test, instead of taxing the implementation to provide the predicate once for all.

If null/undefined is not an anticipated value then an exception on the property access is probably a fine dynamic error check.

Not for Array.isArray or the other built-ins we're discussing.

Yes, but since we seem to be circling in on predicate functions for those tests, it's not an issue for them.

We shouldn't make arbitrary judgments against certain use-cases. Sometimes you know you have an object (or even a function), and a "narrower" input type for a predicate will do. Other times, you need the widest (any -> boolean) predicate. And the latter can work in the former cases without trouble. So why insist on object.method() with throw on null or undefined?

Well, this may be a style point that we can't agree on. I do think that undefined and null are special cases that probably need to be dealt with explicitly when their use is intentional.

If a value test can be recast as a class-like categorization then the data property approach works for it.

Right, but you can't "recast" for any type of value without writing a prior typeof conjunct. Or did you mean something else by "recast"?

No, I don't mean a dynamic type cast. I meant something like giving generator functions a distinct prototype so the isGenerator: true could be factored out of the individual instances.

That does not handle the nominal type test you agree is the purpose of isGenerator, so I don't know why you're citing this use-case for the other, is-a-loose-duck-type case.

Because I got confused about what problem we were trying to solve. Also, like I said above, the distinction between the two use cases may not be so obvious.

Using an alternative prototype for all values in a "subclass" (eg all generators) seems like a technique that might be plausible in situations like this. It is essentially just a way to factor out of each generator the storage of the true value for the isGenerator property. It doesn't require the exposure of a separate Generator constructor.

I think this misses the mark. Both Array.isArray in ES5 and Function.isGenerator in ES.next are testing nominal type tag. They are not testing some ad-hoc boolean "tag" that is not reliable and that can be forged.

I agree. But we are trying to extrapolate to a pattern that is applicable to any application-defined classification scheme.

No, or at least: I'm not. Rather, I'm trying to find out whether there is one pattern, or two. I see at least one, the nominal tag test. That needs attention.

We can get to the other one, the loose foo.isDuck boolean data property or undefined falsy-value test, only if we really need to. I think you've made a good case for its extensible nature. But I'd rather leave it for JS users and library authors. And isGenerator has nothing to do with it.

Yes, I agree. Except for the concern that users in duckish situations will follow the nominal tag test pattern we establish because that is the only exemplar they have.

Those typically won't be deep nominal types with implementation dependencies. It may simply be that isGenerator (and perhaps isArray) just isn't a good exemplar for the more general situation.

Whew! That's what I've been getting at.

But per my above words, I'd like to go further in seeking clarity about what not to do, as well as what to do, and what patterns to keep in mind for later (so I'm not complaining about taking the time to diagram the use-cases and solutions a bit). I simply don't see why we need the isDuck method pattern for any built-ins in ES5 or proposed for Harmony. And here I include elaboration of generators' prototype chains.

Perhaps not, as we have said we want to steer away from adding complex libraries to the core language. However, some of the discussion today about extending Array.prototype functions feels like it might lead into a space that might need isDuck. Same for the I18N library work.

An independent classification function (perhaps hung from a constructor) may be a good solution when value classification will generally be done in situations where the "class" of the value is not predetermined.

Agreed.

and the classification doesn't need to be extensible to new implementations.

Right, that's a good point. The isDuck scheme is extensible -- to a fault! :-P

A truthy isFoo data property works well for class-like categorization as long as all values that share the same prototype are considered members of the same category.

Disagree for the use-cases you applied this to. ES5 mandates a [[Class]] check for Array.isArray, and the impetus there was cross-frame Array classification based on nominal really-truly-built-in-Array type tag.

Ditto for any isGenerator worth its name, that is, to the extent that the use-case for isGenerator does not simply want to test "is it an iterator factory", which could be a structural or duck-type test.

Agreed, these are both cases where the category isn't user extensible. However, I think my statement holds for class-like categorizations that are extensible.

Do we have any examples of those in the spec., or contemplated for ES.next?

See strawman:es5_internal_nominal_typing #8: In JSON.stringify, use an isJSONArray property to control output in array literal syntax and eliminate the corresponding [[Class]] check.

# Allen Wirfs-Brock (14 years ago)

On Jul 11, 2011, at 7:31 PM, Allen Wirfs-Brock wrote:

Agreed, these are both cases where the category isn't user extensible. However, I think my statement holds for class-like categorizations that are extensible.

Do we have any examples of those in the spec., or contemplated for ES.next?

See strawman:es5_internal_nominal_typing #8: In JSON.stringify, use an isJSONArray property to control output in array literal syntax and eliminate the corresponding [[Class]] check.

also see docs.google.com/document/d/1sSUtri6joyOOh23nVDfMbs1wDS7iDMDUFVVeHeRdSIw/edit?hl=en&authkey=CI-FopgC&pli=1 :

The built-in method Array.prototype.concat (15.4.4.4) tests each of its arguments for [[Class]] equal to “Array” to determine whether the argument should be concatenated as a single object or whether it should be considered a transparent container of objects that are individually concatenated. Because it is a class test it is context independent.

This test is problematic in that it prevents user-defined array-like objects from having this same transparent container behavior. This applies even to objects that have the Array prototype on their prototype chain. This transparent container treatment has no fundamental dependency upon the actual unique semantics of built-in arrays. It should be replaced with a check for some user-definable flag property that enables/disables transparent container treatment.
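
A sketch of the flag-based check such a concat might perform; the isConcatSpreadable property name is invented for this sketch:

    function shouldSpread(arg) {
        if (arg === null || typeof arg !== "object") return false;
        if ("isConcatSpreadable" in arg) return !!arg.isConcatSpreadable;
        return Array.isArray(arg);  // built-in arrays still spread by default
    }

    // A user-defined array-like could then opt in:
    let vec = { 0: "a", 1: "b", length: 2, isConcatSpreadable: true };
    // under this semantics, [1, 2].concat(vec) would yield [1, 2, "a", "b"]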

# Andreas Rossberg (14 years ago)

On 9 July 2011 17:48, Brendan Eich <brendan at mozilla.com> wrote:

Adding names as a distinct reference and typeof type, extending WeakMap to have as its key type (object | name), adds complexity compared to subsuming name under object.

It seems to me that you are merely shifting the additional complexity from one place to another: either weak maps must be able to handle (object + name) for keys (instead of just object), or objects must handle (string + object * isName) as property names (instead of just string + name). Moreover, the distinction between names and proper objects will have to be deeply engrained in the spec, because it changes a fundamental mechanism of the language. Whereas WeakMaps are more of an orthogonal feature with rather local impact on the spec. (The same is probably true for implementations.)

Why do you think that it would make WeakMap more complicated? As far as I can see, implementations will internally make that very type distinction anyway.

No, as proposed private name objects are just objects, and WeakMap implementations do not have to distinguish (apart from usual GC mark method virtualization internal to implementations) between names and other objects used as keys.
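
That is, under the names-are-objects design nothing special is needed (a sketch using the proposal's Name.create):

    const key = Name.create();

    let wm = new WeakMap();
    wm.set(key, "metadata about this name");  // a name is just an object key
    wm.get(key);                              // "metadata about this name"

    let obj = {};
    obj[key] = "private state";               // ...and still a property key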

And the spec also has to make it, one way or the other.

Not if names are objects.

I think an efficient implementation of names in something like V8 will probably want to assign different internal type tags to them either way. Otherwise, we'll need extra tests for each property access, and cannot specialise as effectively.

I'm not sure which "class" you mean. The [[ClassName]] disclosed by Object.prototype.toString.call(x).slice(8,-1) is one possibility, which is one of the many and user-extensible ways of distinguishing among objects. ES.next class is just sugar for constructor/prototype patterns with crucial help for extends and super.

I meant the [[Class]] property (I guess that's what you are referring to as well). Not sure what you mean when you say it is user-extensible, though. Is it in some implementations? (I'm aware of the somewhat scary note on p.32 of the spec.) Or are you just referring to the toString method?

I appreciate the ongoing discussion, but I'm somewhat confused. Can I ask a few questions to get a clearer picture?

  1. We seem to have (at least) a two-level nominal "type" system: the first level is what is returned by typeof, the second refines the object type and is hidden in the [[Class]] property (and then there is the oddball "function" type, but let's ignore that). Is it the intention that all "type testing" predicates like isArray, isName, isGenerator will essentially expose the [[Class]] property?

  2. If there are exceptions to this, why? Would it make sense to clean this up? (I saw Allen's cleanup strawman, but it seems to be going the opposite direction, and I'm not quite sure what it's trying to achieve exactly.)

  3. If we can get to a uniform [[Class]] mechanism, maybe an alternative to various ad-hoc isX attributes would be a generic classof operator?

  4. What about proxies? Is the idea that proxies can never emulate any behaviour that relies on a specific [[Class]]? For example, I cannot proxy a name. Also, new classes can only be introduced by the spec.

  5. What are the conventions by which the library distinguishes between "regular" object properties and operations, and meta (reflective) ones? It seems to me that part of the confusion(?) in the discussion is that the current design makes no real distinction. I think it is important, though, since e.g. proxies should be able to trap regular operations, but not reflective ones (otherwise, e.g. isProxy wouldn't make sense). Also, modern reflective patterns like mirrors make the point that no reflective method should be on the reflected object itself.

Thanks,

# Brendan Eich (14 years ago)

On Jul 12, 2011, at 2:54 AM, Andreas Rossberg wrote:

On 9 July 2011 17:48, Brendan Eich <brendan at mozilla.com> wrote:

Adding names as a distinct reference and typeof type, extending WeakMap to have as its key type (object | name), adds complexity compared to subsuming name under object.

It seems to me that you are merely shifting the additional complexity from one place to another: either weak maps must be able to handle (object + name) for keys (instead of just object), or objects must handle (string + object * isName) as property names (instead of just string + name).

Quite right. This was a part of ECMA-357, E4X, which we have supported in SpiderMonkey starting in late 2004. It was a part of ES4 as well.

On the other hand, there's no free lunch for private names having a new typeof type: "name". They cannot be strings, so the VM's property lookup and identifier-handling code paths all need a wider type. True, they could be UUIDs in strings that have a tag bit set to prevent forgery, but that's not a likely implementation.

So given that id types and lookup code paths have to change anyway, this seems more of a wash than you suggest. Not a total wash, but not a clear with for new typeof-type "name".

# Claus Reinke (14 years ago)

I think there is a (usually unstated) desire to also test for ES.next features that may start to show up as extensions to "ES5" level implementations. For example, generators in Firefox. You can't depend upon modules in such situations.

For me, the issue is that ES engines tend to implement new language versions gradually (currently, still not everyone implements ES5, yet some provide experimental features beyond ES5). That practice is good (it gets useful features out there faster) and it matches the ES proposal wiki organization, but JS language versioning on the web is not yet as modular (see the other thread for the generators example, URL below).

I tend to think about language feature pragmas as a solution, but that is because I've seen that approach work in practice elsewhere. The idea of using ES/next modules to manage dependencies on language features is new, so I'm not sure it would work as well. It depends on two assumptions:

- ES/next modules need to be the first ES/next feature to be implemented, so that the other features can be versioned using modules; that is, Firefox generators would only be available through modular versioning after adding modules.

- dependencies on language.feature modules have to be treated specially, to get useful error messages: usually, module dependencies are looked into after parsing, which is too late to complain about features that are not yet supported (I don't know how this interacts with transitive dependencies - might render the idea unworkable..)

In other words, it is an ad-hoc idea about using module dependencies as language version pragmas, without having to add pragma syntax. It does have the advantage that modules have dependency management and early errors as part of their feature set already.

The thread with subject "Design principles for extending ES object abstractions" got into this, and I think (I can't be sure because he hasn't replied yet) Luke Hoban, Dave Herman, and I were kinda/sorta agreeing that the "old scripts" (ES5 and below) need just one API entry point: SystemLoader as a global property (I misspelled it Object.ModuleLoader in my reply to Luke).

Once old script has object-detected its way to joy via SystemLoader, it can load built-in ES6+ libraries such as "@name" and "@iter".
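
A hypothetical shape for that object detection; the loader API was still under development at this point, so the get method below is invented purely for illustration:

    if (typeof SystemLoader !== "undefined") {
        // ES.next-capable engine: ask for a built-in module synchronously
        var iter = SystemLoader.get("@iter");   // hypothetical method
        if (iter) { /* use iter.isGenerator, etc. */ }
    } else {
        // legacy engine: fall back to ES5-level behavior
    }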

Thanks. My own single-message thread is here:

feature-based and compartment-based language versioning
https://mail.mozilla.org/pipermail/es-discuss/2011-July/015755.html

From some of the messages in your thread it seems you have found yet another variation on mixing versions: using ES/next functionality while sticking to ES5 syntax (imperative access to some syntax-based declarative ES/next features).

The API for module detection is under development, but it would allow both synchronous testing for a built-in or already-loaded MRL, and an async form for loading from an MRL over the 'net.

Given this (forgive the lack of complete specs, but it's enough, I claim), why do we need anything more for future-friendly "feature detection"?

  1. so that we can write code that wants to be ES/next, and get early failures with good error messages if that code tries to load on an engine that does not implement all ES/next features the code depends on.

    (not "parse error at 'yield'" but "this engine does not implement generators yet, sorry")

  2. so that we can write code that only depends on some ES/next features, and enable engines to figure out early if they are able to load and run this code, based on their partially implemented ES/next feature set.

    (if code depends only on generators, current Firefox may be able to run it, but we can't specify a dependency on only that feature)

  3. so that we can selectively enable experimental language features, which will continue to be implemented prior to standardization, even after ES/next.

    (if code depends on generators, but should not be used in combination with some of the other experimental features in Firefox, we cannot specify such a selection at the moment)

  4. So that coders can document their language feature dependencies in their source code (as page scripting is replaced by JS applications, and script files are replaced by modules).

Claus

# Brendan Eich (14 years ago)

On Jul 12, 2011, at 7:43 AM, Brendan Eich wrote:

On the other hand, there's no free lunch for private names having a new typeof type: "name". They cannot be strings, so the VM's property lookup and identifier-handling code paths all need a wider type. True, they could be UUIDs in strings that have a tag bit set to prevent forgery, but that's not a likely implementation.

So given that id types and lookup code paths have to change anyway, this seems more of a wash than you suggest. Not a total wash, but not a clear with for new typeof-type "name".

Er, "not a clear win...."

Another point about implementations: any that perform at all well already use a sum type for their internal property id representation, to accommodate int or uint or string. Adding another arm to that discriminated union is not a huge amount of work, in our experience.

For JavaScriptCore, Oliver has argued that internal to the engine, keeping id uncoerced as an arbitrary value, until late or "deep" in the layering, is not troublesome and could allow better true array or vector indexing semantics, e.g. via proxies.

# Allen Wirfs-Brock (14 years ago)

On Jul 12, 2011, at 2:54 AM, Andreas Rossberg wrote:

On 9 July 2011 17:48, Brendan Eich <brendan at mozilla.com> wrote:

Moreover, the distinction between names and proper objects will have to be deeply engrained in the spec, because it changes a fundamental mechanism of the language. Whereas WeakMaps are more of an orthogonal feature with rather local impact on the spec.

Not a big deal for the spec. One short ToPropertyKey algorithm in clause 9 and calls to it in a handful of places.
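
A plausible shape for that abstract operation, expressed as JS rather than spec language (isName is the predicate from the "@name" built-in module mentioned earlier; ToString stands for the usual abstract operation):

    function ToPropertyKey(v) {
        if (typeof v === "object" && v !== null && isName(v)) {
            return v;          // private name objects pass through unchanged
        }
        return ToString(v);    // everything else coerces to a string, as today
    }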

I think an efficient implementation of names in something like V8 will probably want to assign different internal type tags to them either way. Otherwise, we'll need extra tests for each property access, and cannot specialise as effectively.

Isn't that pretty much a wash either way?

I'm not sure which "class" you mean. The [[ClassName]] disclosed by Object.prototype.toString.call(x).slice(8,-1) is one possibility, which is one of the many and user-extensible ways of distinguishing among objects. ES.next class is just sugar for constructor/prototype patterns with crucial help for extends and super.

I meant the [[Class]] property (I guess that's what you are referring to as well). Not sure what you mean when you say it is user-extensible, though. Is it in some implementations? (I'm aware of the somewhat scary note on p.32 of the spec.) Or are you just referring to the toString method?

Because Table 8 says all objects (including host objects) have a string valued [[Class]] property, some people and third party specifications (and perhaps some implementations) have assumed that [[Class]] is a specified extension mechanism for Object.prototype.toString.

To eliminate any such confusion, in the initial draft for the 6th edition spec. I have eliminated [[Class]] and replaced it with non-parameterized object brands using distinct internal properties. I will make that draft available within the next 24 hours.

I appreciate the ongoing discussion, but I'm somewhat confused. Can I ask a few questions to get a clearer picture?

  1. We seem to have (at least) a two-level nominal "type" system: the first level is what is returned by typeof, the second refines the object type and is hidden in the [[Class]] property (and then there is the oddball "function" type, but let's ignore that).

You can certainly read it that way.

The real nominal types of ECMAScript are defined in clause 8. Only the "ECMAScript language types" subset is exposed to ECMAScript programmers. The "specification types" subset are just that, specification devices.

One of these nominal types is object. Specific kinds of objects are classified and discriminated in various ways within the specification (including via the value of [[Class]]). If you wish to think of these categorization schemes as "nominal types" you can, but the specification does not define them as such. In particular, it does not define any rules for forming such nominal types and it does not impose any orthogonality requirements on the categorization schemes. Essentially, this is an ad hoc layer that has been used for specification convenience.

Is it the intention that all "type testing" predicates like isArray, isName, isGenerator will essentially expose the [[Class]] property?

Absolutely not. But note that these are not "type testing" in the sense that ES uses the word "type". They discriminate different kinds (categorizations) of objects, but ES does not define these as types. This is not an unusual dichotomy. For example, if you define Integer to be a type, what do you consider odd integers to be? Another layer of typing, or just a further subcategorization of the members of the Integer type?

  2. If there are exceptions to this, why? Would it make sense to clean this up? (I saw Allen's cleanup strawman, but it seems to be going the opposite direction, and I'm not quite sure what it's trying to achieve exactly.)

see the associated google doc: docs.google.com/document/d/1sSUtri6joyOOh23nVDfMbs1wDS7iDMDUFVVeHeRdSIw/edit?hl=en_US&authkey=CI-FopgC

[[Class]] has caused all sorts of confusion in the realm of host objects. Its use in the spec. has also resulted in some unnecessary overspecification that gets in the way of extensibility.

  3. If we can get to a uniform [[Class]] mechanism, maybe an alternative to various ad-hoc isX attributes would be a generic classof operator?

What is it you want to classify? What is a "class"? There is an infinite variety of ways that ECMAScript objects might be classified. No single one of them is universal. The built-in objects can present some exemplar classification patterns. But the built-ins actually represent a very small set of objects and use cases. We need to be careful about over generalizing from that small sample.

  4. What about proxies? Is the idea that proxies can never emulate any behaviour that relies on a specific [[Class]]? For example, I cannot proxy a name. Also, new classes can only be introduced by the spec.

Another reason why I'm eliminating [[Class]].

Also see spreadsheets.google.com/spreadsheet/ccc?key=0Ak51JfLL8QLYdFRCOXBRczJfRzNJSEk2eXptQ3BzalE&hl=en_US

I may be getting redundant, but the internal classification schemes used by the specification are not a real class hierarchy with rules governing their formation or definition. There are a number of different uses of object classification within the spec. (from the google doc referenced in item 2 above):

The object classifications are used for various purposes within the specification, including:

- Explicit data parametrization of built-in functions based upon object “type”. For example, Object.prototype.toString.
- Explicit behavioral parametrization of built-in functions based upon object “type”. For example, Array.prototype.concat.
- Explicit behavioral parametrization of language features based upon the object “type” of operands. For example, the delete operator’s use of the [[Delete]] internal method.
- Association of private runtime state with specific object instances where the state is used by various specification algorithms. For example, the [[Prototype]] internal property.
- Association of abstracted private state with specific object instances where the actual runtime state space is an implementation detail. For example, the [[Scope]] and [[Code]] internal properties of function objects.
- Association of abstracted private state with specific object categories where the state is only used for specification purposes and need not actually exist at runtime. For example, the [[FormalParameters]] internal property of function objects.
- Specifying a dependency between private runtime state of specific object instances and specific built-in methods that use that state. For example, the Date methods that depend upon the [[PrimitiveValue]] property.
- Specifying a dependency between object-type-specific internal behaviors and specific built-in methods that use those behaviors. For example, String.prototype.split’s use of the RegExp [[Match]] internal method.

  5. What are the conventions by which the library distinguishes between "regular" object properties and operations, and meta (reflective) ones? It seems to me that part of the confusion(?) in the discussion is that the current design makes no real distinction. I think it is important, though, since e.g. proxies should be able to trap regular operations, but not reflective ones (otherwise, e.g. isProxy wouldn't make sense). Also, modern reflective patterns like mirrors make the point that no reflective method should be on the reflected object itself.

This is what we are in the process of sorting out. The ES spec. (and language) wasn't originally designed from that perspective but I think we can move it there.

# Brendan Eich (14 years ago)

Allen, thanks for replying -- I ran out of time this morning. A few comments on top of yours.

On Jul 12, 2011, at 10:18 AM, Allen Wirfs-Brock wrote:

I think an efficient implementation of names in something like V8 will probably want to assign different internal type tags to them either way. Otherwise, we'll need extra tests for each property access, and cannot specialise as effectively.

Isn't that pretty much a wash either way

Not only that (I argued in my reply), but engines do already tag identifier types. Indexing does not use strings "0", "1", etc., even interned strings, in engines that aspire to be fast. Such (interned) strings must of course be observably equivalent to 0, 1, etc. when used as indexes and preserved as type-tagged integer domain property identifiers.

I meant the [[Class]] property (I guess that's what you are referring to as well). Not sure what you mean when you say it is user-extensible, though. Is it in some implementations? (I'm aware of the somewhat scary note on p.32 of the spec.) Or are you just referring to the toString method?

Because Table 8 says all objects (including host objects) have a string valued [[Class]] property, some people and third party specifications (and perhaps some implementations) have assumed that [[Class]] is a specified extension mechanism for Object.prototype.toString.

To eliminate any such confusion, in the initial draft for the 6th edition spec. I have eliminated [[Class]] and replaced it with non-parameterized object brands using distinct internal properties. I will make that draft available within the next 24 hours.

Just to amplify Allen's reply, we are working on dom.js (andreasgal/dom.js), the self-hosted DOM. Per its specs, it must be able in some future JS edition to configure the "class name" reported by Object.prototype.toString.call(x).slice(8, -1).

  1. We seem to have (at least) a two-level nominal "type" system: the first level is what is returned by typeof, the second refines the object type and is hidden in the [[Class]] property (and then there is the oddball "function" type, but let's ignore that).

You can certainly read it that way.

The real nominal types of ECMAScript are defined in clause 8. Only the "ECMAScript language types" subset is exposed to ECMAScript programmers. The "specification types" subset are just that, specification devices.

The typeof distinction is among primitive types with by-value semantics (null, undefined, boolean, number, string) and object, which as discussed recently has by-reference semantics, observable because objects are in general mutable.

Among the objects, there are many and non-hierarchically-related classification schemes, as Allen reminds us (worth doing to avoid making brittle, crappy, tyrannical nominal hierarchies :-/).

There isn't much "nominal" here in an untyped language. Perhaps we should use "branding" or "trademarking", but when I use "nominal" in connection with JS I mean the same thing: certain built-ins have internal [[Class]] properties which not only contribute to Object.prototype.toString's result, the [[Class]] values are checked aggressively in built-in methods of these objects, such that the methods cannot in general be invoked on objects with other [[Class]] values.

In engine implementations, C++ nominal types are often used, but the JS dispatching machinery does not reflect via C++ RTTI or any such thing. Instead, something corresponding to [[Class]] is compared to a known singleton, to enforce that these non-generic methods are called on the "correct type of |this|".

To sum up, typeof is for value or primitive vs. object type identification. When typeof x === "object", x may then be classified in many ways. As a rule, and I believe the committee agrees on this, we wish built-ins to be generic where practical, for implementations to unify representations so as to avoid the [[Class]]-equivalent nominal type tag (AKA brand or trademark) test.

A proposal for Harmony gets into surfacing branding in the language:

strawman:trademarks

This didn't make ES.next.

(To repeat a minor comment I gave at the May TC39 meeting, prefigured above, I hope we can try to use one word, ideally as short as "brand" or "branding", for this "nominal type tag" idea.)

Is it the intention that all "type testing" predicates like isArray, isName, isGenerator will essentially expose the [[Class]] property?

Absolutely not. But note that these are not "type testing" in the sense that ES uses the word "type". They discriminate different kinds (categorizations) of objects, but ES does not define these as types. This is not an unusual dichotomy. For example, if you define Integer to be a type, what do you consider odd integers to be? Another layer of typing, or just a further subcategorization of the members of the Integer type?

However, as my discussion with Allen brought out (I hope!), the three predicates Andreas mentions, isName, isGenerator, and isArray, are all testing "brands".

Sometimes, and I recall agreeing with MarkM on this in 2008 at the July Oslo "Harmony" TC39 meeting, nominal tag testing is critical for, e.g., a security architecture. It is in the DOM, although we've argued about whether or how much is truly needed. It's a minority use-case, and actually more important IMHO (this was Mark's point too, I think) to provide to users of the language than to reserve as magic that only TC39 can burn into specifications.

  2. If there are exceptions to this, why? Would it make sense to clean this up? (I saw Allen's cleanup strawman, but it seems to be going the opposite direction, and I'm not quite sure what it's trying to achieve exactly.)

see the associated google doc: docs.google.com/document/d/1sSUtri6joyOOh23nVDfMbs1wDS7iDMDUFVVeHeRdSIw/edit?hl=en_US&authkey=CI-FopgC

[[Class]] has caused all sorts of confusion in the realm of host objects. Its use in the spec. has also resulted in some unnecessary overspecification that gets in the way of extensibility.

Agreed.

  5. What are the conventions by which the library distinguishes between "regular" object properties and operations, and meta (reflective) ones? It seems to me that part of the confusion(?) in the discussion is that the current design makes no real distinction. I think it is important, though, since e.g. proxies should be able to trap regular operations, but not reflective ones (otherwise, e.g. isProxy wouldn't make sense). Also, modern reflective patterns like mirrors make the point that no reflective method should be on the reflected object itself.

This is what we are in the process of sorting out. The ES spec. (and language) wasn't originally designed from that perspective but I think we can move it there.

Definitely! I was in a big rush. I recall putting in primitive types (something I've repented at leisure) for two reasons: (1) be like Java (this wasn't just superficial, we were planning the JS/Java bridge, LiveConnect, even from the start in 1995); (2) ease of implementation (arguably I was biasing toward a certain kind of implementation, but it was what I knew how to do fast: fat discriminated union value types, hardcoded arithmetic for the NUMBER discriminant).

For functions, I made a distinct typeof result so that users could determine callability (remember, there was no try/catch/finally in JS1 and anyway, try around a call attempt is too heavyweight).

Among objects apart from functions, I created the prototype.constructor back-link but otherwise left it up to object implementors to come up with classification schemes.

# Brendan Eich (14 years ago)

On Jul 12, 2011, at 11:08 AM, Brendan Eich wrote:

  5. What are the conventions by which the library distinguishes between "regular" object properties and operations, and meta (reflective) ones? It seems to me that part of the confusion(?) in the discussion is that the current design makes no real distinction. I think it is important, though, since e.g. proxies should be able to trap regular operations, but not reflective ones (otherwise, e.g. isProxy wouldn't make sense). Also, modern reflective patterns like mirrors make the point that no reflective method should be on the reflected object itself.

This is what we are in the process of sorting out. The ES spec. (and language) wasn't originally designed from that perspective but I think we can move it there.

Definitely! I was in a big rush. I recall putting in primitive types (something I've repented at leisure) for two reasons: (1) be like Java (this wasn't just superficial, we were planning the JS/Java bridge, LiveConnect, even from the start in 1995); (2) ease of implementation (arguably I was biasing toward a certain kind of implementation, but it was what I knew how to do fast: fat discriminated union value types, hardcoded arithmetic for the NUMBER discriminant).

For functions, I made a distinct typeof result so that users could determine callability (remember, there was no try/catch/finally in JS1 and anyway, try around a call attempt is too heavyweight).

Among objects apart from functions, I created the prototype.constructor back-link but otherwise left it up to object implementors to come up with classification schemes.

A bit more on the history of reflective facilities added at the base layer:

For JS1 there was zero time to do proper mirrors-style reflective APIs. Economy of implementation and expression, plus the utility of objects as hashes, led me to make o[e] evaluate e into a result and equate it to a string property name. Even in the first implementation, which had some goofy bugs, I int-specialized for speed. From AWK (capitalized like a fanboy, after its creators' surname initials) I copied for-in as the matching way of enumerating the keys in the object-as-hash.

So at base level, one can reflect on what other languages do not disclose without mirrors: the names of properties and, given any expression, the value of a property named by the string conversion of the result of evaluating that expression.

ES3 added "in", "instanceof" (we've talked about how this went wrong in the face of multiple globals), and a paltry (six, IIRC) Object.prototype reflective methods, some of which seem misdesigned (e.g. propertyIsEnumerable does not have "own" in its name but does not look up the proto chain). This was just shortest-path evolution in TC39 TG1, mostly, although I seem to recall prototyping "in" in SpiderMonkey before ES3 was far along.

Allen's work with mirrors shows how rich and multifarious the reflection API world can be. This is good work and we need more of it. I don't think we should prematurely standardize mirror-style reflective APIs. We do need to correct course on reflective and other API design path missteps where we can.

Waldemar made this point about standards processes: they can cycle faster but create path dependencies and enshrine mistakes, painting the language into a corner. Cycling too slow is bad too, obviously -- we have been there and done that (it's why we at Mozilla added so many things like getters and setters, proto, Array extras back during the dark ages -- we had no choice with a monopoly shutdown of the standards process, and taking that lying down is not "good" in itself or obviously better than making de-facto standards).

Cycling at a pace that fits the Ecma->ISO protocols can work but leaves us with mistakes small (we hope) enough to live with, but sometimes worth fixing (we should consider each such potential mistake). I know Allen feels this regarding Object.create, specifically the property descriptor attribute defaults and consequent verbosity when describing typical properties. I feel it of course over all the mistakes in most of the language that I've created or had a hand in (or where I took time off to found mozilla.org, and did not help ES3). I'm not guilty at this advanced age, just resolved to fix what can be fixed.

Mistakes happen, we need to be careful -- but we aren't going to do anyone any favors by sitting and thinking on how to achieve perfection -- so we ought to make corrections as we go. Backward compatibility then means some redundancy over time. There are much worse problems to have.