Design principles for extending ES object abstractions

# Allen Wirfs-Brock (13 years ago)

On Jul 8, 2011, at 2:58 PM, Brendan Eich wrote on the thread using Private name objects for declarative property definition. :

But whatever the class syntax, and the disposition of private in class and even classes in ES.next, I agree we should expect private declarative and expression forms to work the same in object initialisers and in classes.

It would be good to get everyone buying into this private-means-property-with-private-name-key-everywhere agreement.

I wanted to generalize this a bit. In designing "classes" and other new ES abstractions there are a couple design principles that I think it is important that we follow:

  1. Everything works with plain objects.

Objects (and functions) are the primitive abstraction mechanisms of ES. Any new functionality we add must be applicable and available to plain vanilla singleton objects. Anti-example: a super keyword that is only available in a class declaration. Acceptable solution: the super keyword is available in both class declarations and object literals.

  2. Anything that can be done declaratively can also be done imperatively.

Imperative/reflective object construction is a power feature of ES that has been widely exploited by everyday developers as well as metaprogrammers. Any new object capabilities that we make available via declarative constructs must also be available via an imperative API. Anti-example: function definitions using the super keyword may only occur within an object literal or class declaration. Acceptable solution: Object.defineMethod can be used to bind an externally defined function that uses the super keyword to a specific object.
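To make the two principles concrete, here is a hedged sketch. The super-in-object-literal form and Object.defineMethod are the proposals named above, not settled syntax or a shipped API, so treat every name here as illustrative only.

    // Principle 1: super usable in a plain object literal, not only in a class.
    let base = { describe: function () { return "base"; } };
    let obj = {
        __proto__: base,
        describe() { return "obj -> " + super.describe(); }   // declarative form
    };

    // Principle 2: the same capability via an imperative API. Object.defineMethod
    // (the proposed API mentioned above) would bind an externally defined,
    // super-using function to a specific home object.
    function describeLoudly() { return ("obj -> " + super.describe()).toUpperCase(); }
    Object.defineMethod(obj, "describeLoudly", describeLoudly);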

I don't expect that anybody will significantly disagree with either of these principles. But I think it is good to explicitly articulate them and make sure we have agreement on them. Sometimes we spend a lot of time discussing an idea that doesn't or can't conform to these principles.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 3:49 PM, Allen Wirfs-Brock wrote:

On Jul 8, 2011, at 2:58 PM, Brendan Eich wrote on the thread using Private name objects for declarative property definition. :

But whatever the class syntax, and the disposition of private in class and even classes in ES.next, I agree we should expect private declarative and expression forms to work the same in object initialisers and in classes.

It would be good to get everyone buying into this private-means-property-with-private-name-key-everywhere agreement.

I wanted to generalize this a bit. In designing "classes" and other new ES abstractions there are a couple design principles that I think it is important that we follow:

  1. Everything works with plain objects.

Objects (and functions) are the primitive abstraction mechanisms of ES. Any new functionality we add must be applicable and available to plain vanilla singleton objects. Anti-example: a super keyword that is only available in a class declaration. Acceptable solution: the super keyword is available in both class declarations and object literals.

  2. Anything that can be done declaratively can also be done imperatively.

Imperative/reflective object construction is a power feature of ES that has been widely exploited by everyday developers as well as metaprogrammers. Any new object capabilities that we make available via declarative constructs must also be available via an imperative API. Anti-example: function definitions using the super keyword may only occur within an object literal or class declaration. Acceptable solution: Object.defineMethod can be used to bind an externally defined function that uses the super keyword to a specific object.

I don't expect that anybody will significantly disagree with either of these principles. But I think it is good to explicitly articulate them and make sure we have agreement on them. Sometimes we spend a lot of time discussing an idea that doesn't or can't conform to these principles.

+1.

Note that generators as we've prototyped them have the same imperative/reflective constructor as functions: Function. You just use "yield" in the body string. In working on generators for standardization, we proposed requiring "*" after "function" at the head to distinguish (for readers) generator functions from non-generator functions, since "yield" in the body is sometimes far removed from the start of the function.

But the stronger reason for "function*" as mandator generator syntax introducer came when we considered "yield*" ("yield from" in Python's PEP380): the utility of a zero-iterations basis case for a sub-generator. As Dave noted, otherwise you'd have to write "function* () {if (false) yield;}" or some such.

Does this mean we need a distinguished generator function constructor, e.g. Function.createGenerator? Probably so, for the stronger reason (the zero-iterations, empty-generator basis case) and for symmetry. Not a big deal but something to consider.
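For concreteness, a small sketch of the two points above, written in the function* form under discussion (prototype syntax at the time, so illustrative rather than final):

    // Zero-iterations basis case: with "*" marking the generator, no dead yield is needed.
    function* empty() {}

    // yield* delegates to a sub-generator, and works even when it yields nothing.
    function* concatenated(a, b) {
        yield* a;
        yield* b;
    }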

In any case, I like the generalizations you make here.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 3:59 PM, Brendan Eich wrote:

But the stronger reason for "function*" as mandator

"mandatory" of course (need new keyboard).

generator syntax introducer came when we considered "yield*" ("yield from" in Python's PEP380): the utility of a zero-iterations basis case for a sub-generator. As Dave noted, otherwise you'd have to write "function* () {if (false) yield;}" or some such.

"function () {if (false) yield;}" of course -- the hypothesis there assumes no "*" after "function", in which case you must write a dead "yield" to make the function be a generator in the first place.

# Luke Hoban (13 years ago)

I agree wholeheartedly with these. In fact, I'd go further on (2), and say "Anything that can be done declaratively can also be done imperatively, using ES5 syntax". ES.next will have two syntaxes running on a single runtime, sharing objects across a shared heap. I think we should ensure that all relevant semantics of ES.next are pushed down into the shared runtime, and exposed through libraries available to both syntaxes, to ensure full interoperability between the two.

The one additional place I know of in existing ES.next proposals where I believe this principle is not yet met is the ability to define a module from ES5 syntax, and consume it from ES.next syntax. In line with these principles, I expect the module loader API should offer a way to imperatively define a module that can be later loaded using module loader APIs.

Luke

On Friday, July 08, 2011 at 3:49 PM, Allen Wirfs-Brock wrote to es-discuss (Subject: Design principles for extending ES object abstractions):

[snip...]

I wanted to generalize this a bit. In designing "classes" and other new ES abstractions there are a couple design principles that I think it is important that we follow:

  1. Everything works with plain objects.

Objects (and functions) are the primitive abstraction mechanisms of ES. Any new functionality we add must be applicable and available to plain vanilla singleton objects. Anti-example: a super keyword that is only available in a class declaration. Acceptable solution: the super keyword is available in both class declarations and object literals.

  2. Anything that can be done declaratively can also be done imperatively.

Imperative/reflective object construction is a power feature of ES that has been widely exploited by everyday developers as well as metaprogrammers. Any new object capabilities that we make available via declarative constructs must also be available via an imperative API. Anti-example: function definitions using the super keyword may only occur within an object literal or class declaration. Acceptable solution: Object.defineMethod can be used to bind an externally defined function that uses the super keyword to a specific object.

I don't expect that anybody will significantly disagree with either of these principles. But I think it is good to explicitly articulate them and make sure we have agreement on them. Sometimes we spend a lot of time discussing an idea that doesn't or can't conform to these principles.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 4:05 PM, Luke Hoban wrote:

I agree wholeheartedly with these. In fact, I’d go further on (2), and say “Anything that can be done declaratively can also be done imperatively, using ES5 syntax”.

The problem here is that some new syntax cannot be faked with old syntax, namely function calls, without quoting code in strings. This is not usable.

A second problem is that adding API functions means injecting more names into some extant object, probably not the global object. Must all new APIs be Object.createPrivateName and only that? We have already accepted proposals that use built-in modules instead, so that there is no name pollution.

ES.next will have two syntaxes running on a single runtime, sharing objects across a shared heap.

The shared heap imposes some requirements on us, including that old code operating using old syntax with known semantics on a new object must not behave "badly" (details vary).

But this does not require that all new features, especially those requiring new syntax to be usable, must have old, string-based, name-polluting API functions.

# Luke Hoban (13 years ago)

I agree wholeheartedly with these. In fact, I'd go further on (2), and say "Anything that can be done declaratively can also be done imperatively, using ES5 syntax".

The problem here is that some new syntax cannot be faked with old syntax, namely function calls, without quoting code in strings. This is not usable.

I think it's fine for the imperative solution to be less usable. That's the value-add of opting-in to the ES.next syntax. And of course some (most) new syntax is just syntax, and the ultimate objects it creates are ones that could have been created using some more complex path. Those don't need any library support.

When Allen mentioned "imperatively", I assumed he meant "with a library". I'm not actually sure what other interpretation there would be. So I sort of expected that clarification to "using ES5 syntax" to be a no-op, though I expect it is practically quite important.

A second problem is that adding API functions means injecting more names into some extant object, probably not the global object. Must all new APIs be Object.createPrivateName and only that? We have already accepted proposals that use built-in modules instead, so that there is no name pollution.

I hope, and believe, there are actually not very many new runtime capabilities being added in ES.next that don't already have proposed libraries. I do think there will need to be some rationalization of the goal to use built-in modules with the reality of ES5-syntax consumers of these libraries. I'm not sure whether module loaders currently provide a way to do this that would feel accessible.

ES.next will have two syntaxes running on a single runtime, sharing objects across a shared heap.

The shared heap imposes some requirements on us, including that old code operating using old syntax with known semantics on a new object must not behave "badly" (details vary). But this does not require that all new features, especially those requiring new syntax to be usable, must have old, string-based, name-polluting API functions.

I agree, the shared heap requirement by itself does not impose this. But I believe the design principle Allen outlined should lead us to this anyway, and the value we'll offer to the many millions of existing 'text/javascript' developers through object-detectable runtime capability additions is a nice bonus :).

# Allen Wirfs-Brock (13 years ago)

On Jul 8, 2011, at 4:27 PM, Luke Hoban wrote:

I agree wholeheartedly with these. In fact, I’d go further on (2), and say “Anything that can be done declaratively can also be done imperatively, using ES5 syntax”.

The problem here is that some new syntax cannot be faked with old syntax, namely function calls, without quoting code in strings. This is not usable.

I think it’s fine for the imperative solution to be less usable. That’s the value-add of opting-in to the ES.next syntax. And of course some (most) new syntax is just syntax, and the ultimate objects it creates are ones that could have been created using some more complex path. Those don’t need any library support.

When Allen mentioned “imperatively”, I assumed he meant “with a library”. I’m not actually sure what other interpretation there would be. So I sort of expected that clarification to “using ES5 syntax” to be a no-op, though I expect it is practically quite important.

Mostly, although obj[foo] = blah; is also an imperative way to define a property.

Also note that my intent was to restrict both principles to matters directly relating to objects, even though I didn't explicitly mention objects in naming the 2nd principle. There are many things in ES.next (and ES5, for that matter) that can be done "declaratively" WRT constructing closures that have no API-based alternative (other than eval, which I choose not to count). There is probably an argument to be made for accomplishing some of these things via reflective APIs. However, given that there is no history of that in ES, I don't think we need to make it an ES.next requirement.

# David Herman (13 years ago)

I agree wholeheartedly with these. In fact, I’d go further on (2), and say “Anything that can be done declaratively can also be done imperatively, using ES5 syntax”.

The problem here is that some new syntax cannot be faked with old syntax, namely function calls, without quoting code in strings. This is not usable.

Like most principles, I think these are reasonable to keep in mind but not absolute. In particular, I see no sensible way to claim that generators can be "done imperatively" in the old language. Arguing that you can "do generators" with some new Function constructor is falling into the Turing tarpit: Function is just an eval in sheep's clothing. You can do anything Harmony can do in ES5 if you have access to the Harmony eval, or if you write a Harmony interpreter, or if you do an RPC to another computer that has a Harmony evaluator installed on it, or...

But hey -- generators are awesome! They're Harmonious, and web developers deserve them. I don't care if they fit into some platonic ideal of ES5-Harmony parity.

I hope, and believe, there are actually not very many new runtime capabilities being added in ES.next that don’t already have proposed libraries. I do think there will need to be some rationalization of the goal to use built-in modules with the reality of ES5-syntax consumers of these libraries. I’m not sure whether module loaders currently provide a way to do this that would feel accessible.

I agree, but I think this could be done with a minimum of global namespace pollution. For example, let's say we only make the Harmony SystemLoader available to legacy code. That would be enough for ES5 code to:

  • get access to new standard Harmony modules, such as "@name"
  • get access to a Harmony evaluator via SystemLoader.eval
  • get access to user-created Harmony modules

And it wouldn't require overloading the Object constructor -- from here until eternity -- with a bunch of short-term backwards-compatibility cruft.
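A hedged sketch of what that single point of exposure might look like from ES5 code (SystemLoader and its methods are the hypothetical names from this discussion, not a shipped API):

    // Object-detectable access to Harmony functionality from legacy code.
    if (typeof SystemLoader !== "undefined") {
        var answer = SystemLoader.eval("let xs = [1, 2, 3]; xs.length");   // Harmony evaluator
    }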

But still, I agree with Allen that we should strive where possible to shoot for the goal of making dynamic/reflective analogs of static constructs. For example, Luke has mentioned that he'd like an ability to dynamically construct module instance objects. I think this is a good goal. But not so much for legacy code to have access to it, as for the ability for meta-level code to dynamically interact with base-level code. For example (using totally made-up API's, please don't bikeshed the names):

childLoader = parentLoader.create(....);
childLoader.registerModule("m", childLoader.buildModule({
    x: 42,
    f: function() { ... }
}));

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 4:27 PM, Luke Hoban wrote:

I agree wholeheartedly with these. In fact, I’d go further on (2), and say “Anything that can be done declaratively can also be done imperatively, using ES5 syntax”.

The problem here is that some new syntax cannot be faked with old syntax, namely function calls, without quoting code in strings. This is not usable.

I think it’s fine for the imperative solution to be less usable.

I think we need to agree that eval does not count, as Allen just wrote.

That is, if you'd be happy if old script could call, e.g.,

Object.evalInHarmony("...")

then we're done.

That’s the value-add of opting-in to the ES.next syntax. And of course some (most) new syntax is just syntax, and the ultimate objects it creates are ones that could have been created using some more complex path. Those don’t need any library support.

If eval does count, we're done.

If eval doesn't count, then how pray tell does old code create a generator? Not by building an interpreter in JS.

When Allen mentioned “imperatively”, I assumed he meant “with a library”. I’m not actually sure what other interpretation there would be. So I sort of expected that clarification to “using ES5 syntax” to be a no-op, though I expect it is practically quite important.

The issue is not "with a library", it is whether the only new APIs in ES.next must be Object.uglyNameGoesHere, with string arguments for anything that can't be expressed in the old syntax (like yield in a generator).

If we have built-in modules in ES.next, we shouldn't have duplicate Object.mumble APIs for them as well (Object is one of those shared-heap objects common to old and new scripts loaded against the same global with the default module loader).

A second problem is that adding API functions means injecting more names into some extant object, probably not the global object. Must all new APIs be Object.createPrivateName and only that? We have already accepted proposals that use built-in modules instead, so that there is no name pollution.

I hope, and believe, there are actually not very many new runtime capabilities being added in ES.next that don’t already have proposed libraries. I do think there will need to be some rationalization of the goal to use built-in modules with the reality of ES5-syntax consumers of these libraries. I’m not sure whether module loaders currently provide a way to do this that would feel accessible.

That's a good point. If we expose just one property, say

Object.ModuleLoader

then by my reading of the module loaders proposal, ES5 code can do whatever it wants with built-in and other new modules.

Would this be enough for what you're after?

The shared heap imposes some requirements on us, including that old code operating using old syntax with known semantics on a new object must not behave "badly" (details vary). But this does not require that all new features, especially those requiring new syntax to be usable, must have old, string-based, name-polluting API functions.

I agree, the shared heap requirement by itself does not impose this. But I believe the design principle Allen outlined should lead us to this anyway, and the value we’ll offer to the many millions of existing ‘text/javascript’ developers through object-detectable runtime capability additions is a nice bonus :).

Arguing about "Principles" as if they were ironclad law is a good timekiller, over which TC39 will fall out of Harmony quickly if we do make the mistake of killing too much time.

We need use-cases as well as abstractions like capital-P Principles. We need Aristotle as well as Plato (answers.yahoo.com/question/index?qid=20090513192850AAGmPAH). And we need some good taste and judgment in knowing when to ease off on the Principle-mongering.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 3:49 PM, Allen Wirfs-Brock wrote:

  2. Anything that can be done declaratively can also be done imperatively.

What's the imperative API for <| (which has the syntactic property that it operates on newborns on the right, and cannot mutate the [[Prototype]] of an object that was already created and perhaps used with its original [[Prototype]] chain)?

# Allen Wirfs-Brock (13 years ago)

On Jul 8, 2011, at 5:16 PM, Brendan Eich wrote:

On Jul 8, 2011, at 3:49 PM, Allen Wirfs-Brock wrote:

  2. Anything that can be done declaratively can also be done imperatively.

What's the imperative API for <| (which has the syntactic property that it operates on newborns on the right, and cannot mutate the [[Prototype]] of an object that was already created and perhaps used with its original [[Prototype]] chain)?

Fair point and one I was already thinking about :-)

For regular objects, it is Object.create.

For special built-in objects with literal forms, I've previously argued that <| can be used to implement an imperative API:

    Array.create = function (proto, members) {
        let obj = proto <| [];
        Object.defineProperties(obj, members);
        return obj;
    }

Basically, <| is sorta half imperative operator, half declaration component.
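A hypothetical usage of the Array.create sketch above (the <| operator is strawman syntax, so this is illustrative only, not running code):

    // Build a prototype that inherits from Array.prototype, then mint a true
    // Array instance whose [[Prototype]] is that object.
    let countedProto = Array.prototype <| { last: function () { return this[this.length - 1]; } };
    let a = Array.create(countedProto, {
        0: { value: "x", writable: true, enumerable: true, configurable: true }
    });
    a.length;   // 1 -- maintained, because a really is an Array
    a.last();   // "x" -- found on countedProto before Array.prototype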

This may be good enough. It would be nice if it was and we didn't have to have additional procedural APIs for constructing instances of the built-ins. Somebody has already pointed out <| won't work for built-in Date objects because they lack a literal form. I think the best solution for that is to actually reify access to a Date object's timevalue by making it a private named property. BTW, why did ES1(?) worry about allowing for alternative internal timevalue representations? Are there really any perf issues that depend on whether or not the timevalue is represented as a double or something else?

# David Herman (13 years ago)

I think I still haven't fully grokked what <| means on array literals, but could it also be used to "subclass" Array? For example:

function SubArray() {
    return SubArray.prototype <| [];
}

SubArray.prototype = new Array;

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 5:48 PM, Allen Wirfs-Brock wrote:

On Jul 8, 2011, at 5:16 PM, Brendan Eich wrote:

What's the imperative API for <| (which has the syntactic property that it operates on newborns on the right, and cannot mutate the [[Prototype]] of an object that was already created and perhaps used with its original [[Prototype]] chain)?

Fair point and one I was already thinking about :-)

For regular objects, it is Object.create.

For special built-in objects with literal forms, I've previously argued that <| can be used to implement an imperative API:

    Array.create = function (proto, members) {
        let obj = proto <| [];
        Object.defineProperties(obj, members);
        return obj;
    }

And likewise for Function.create and RegExp.create. Boolean, Number, String, and Date get nothing :-P.

We have a somewhat-troubled proposal in Harmony to make Function.create an alternative Function constructor that takes a leading name parameter, and then parameters and body string parameters. But perhaps that could be renamed Function.createNamed.

Basically, <| is sorta half imperative operator, half declaration component.

This may be good enough. It would be nice if it was and we didn't have to have additional procedural APIs for constructing instances of the built-ins. Somebody has already pointed out <| won't work for built-in Date objects because they lack a literal form. I think the best solution for that is to actually reify access to a Date object's timevalue by making it a private named property.

That is an old idea I've brought up from time to time. If that private name were exported, you could even make useful Date subclasses (rather than Date instances that have extended proto chains).

BTW, why did ES1(?) worry about allowing for alternative internal timevalue representations? Are there really any perf issues that depend on whether or not the timevalue is represented as a double or something else?

The extrapolated Gregorian calendar's range in milliseconds was chosen carefully to fit in an IEEE 754 double without loss of precision.

Real implementations decode the double into commonly-accessed fields that would have to track any updates to the milliseconds since (negative for before) the epoch.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 5:53 PM, David Herman wrote:

I think I still haven't fully grokked what <| means on array literals, but could it also be used to "subclass" Array? For example:

function SubArray() {
    return SubArray.prototype <| [];
}

SubArray.prototype = new Array;

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

Instances of SubArray must mean (new SubArray) results, but those are indeed true Array instances. They simply have a prototype chain that has been extended:

a = SubArray(); // or new SubArray

Object.getPrototypeOf(a) -> SubArray.prototype (which is an Array)

Object.getPrototypeOf(SubArray.prototype) -> Array.prototype

and Array.prototype of course delegates to Object.prototype

This allows new methods to be added to SubArray.prototype.
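For example (again leaning on the strawman SubArray above, so a sketch rather than running code):

    // Methods added to SubArray.prototype sit in front of Array.prototype.
    SubArray.prototype.last = function () { return this[this.length - 1]; };

    var a = new SubArray();
    a.push(1, 2, 3);   // true Array behavior: length is maintained automatically
    a.last();          // 3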

Sometimes people try to make a new Array instance be the prototype of some other object:

    js> var o = Object.create([])
    js> o[2] = 0
    0
    js> o.length
    0
    js> o[1] = 1
    1
    js> o[0] = 2
    2
    js> o.toString()
    ""
    js> o.slice(0,1)
    []
    js> o.length = 3
    3
    js> o.slice(0,1)
    [2]
    js> o.sort(function(a,b){return a-b}).toString()
    "0,1,2"
    js> o.reverse().toString()
    "2,1,0"

Note the lack of automagic length maintenance in this case. I think this is all per ES5.

# Allen Wirfs-Brock (13 years ago)

On Jul 8, 2011, at 5:53 PM, David Herman wrote:

I think I still haven't fully grokked what <| means on array literals, but could it also be used to "subclass" Array? For example:

Yes, it creates a new object that is an array instance ([[Class]]=='Array', supports the length constraints, etc.) that has the LHS of <| as its [[Prototype]] value. (BTW, in my initial working draft for the "ES6" spec. I have already purged [[Class]] from the specification)

function SubArray() {
    return SubArray.prototype <| [];
}

SubArray.prototype = new Array;

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

All of them. They are all generic. However, subarray instances have all the internal state and methods that make them true arrays so even if some of the inherited Array methods weren't generic they would still work.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 6:21 PM, Brendan Eich wrote:

Sometimes people try to make a new Array instance be the prototype of some other object:

    js> var o = Object.create([])
    js> o[2] = 0
    0
    js> o.length
    0
    js> o[1] = 1
    1
    js> o[0] = 2
    2
    js> o.toString()
    ""
    js> o.slice(0,1)
    []
    js> o.length = 3
    3
    js> o.slice(0,1)
    [2]
    js> o.sort(function(a,b){return a-b}).toString()
    "0,1,2"
    js> o.reverse().toString()
    "2,1,0"

Note the lack of automagic length maintenance in this case.

And the lack of automagic truncation when length is retracted:

    js> o.length=0
    0
    js> o[2]
    0
    js> o[1]
    1
    js> o[0]
    2

I think this is all per ES5.

Indeed, this all follows from how the [[DefineOwnProperty]] internal method is specified to be used. It's considered a drag by people trying to make an array-like, rather than an array with an extended proto chain. I know folks who want the former and currently have to reach for Proxy as a power-tool, but then the economics are noticeably different.

# Allen Wirfs-Brock (13 years ago)

On Jul 8, 2011, at 6:03 PM, Brendan Eich wrote:

On Jul 8, 2011, at 5:48 PM, Allen Wirfs-Brock wrote:

On Jul 8, 2011, at 5:16 PM, Brendan Eich wrote:

What's the imperative API for <| (which has the syntactic property that it operates on newborns on the right, and cannot mutate the [[Prototype]] of an object that was already created and perhaps used with its original [[Prototype]] chain)?

Fair point and one I was already thinking about :-)

For regular objects, it is Object.create.

For special built-in objects with literal forms, I've previously argued that <| can be used to implement an imperative API:

    Array.create = function (proto, members) {
        let obj = proto <| [];
        Object.defineProperties(obj, members);
        return obj;
    }

And likewise for Function.create and RegExp.create. Boolean, Number, String, and Date get nothing :-P.

Actually in the <| proposal I define it to work with boolean, number, and string literals on the RHS. Sorta useless but I included them so the complete set of literals was covered. So it really is only Date that didn't get invited to the party.

We have a somewhat-troubled proposal in Harmony to make Function.create an alternative Function constructor that takes a leading name parameter, and then parameters and body string parameters. But perhaps that could be renamed Function.createNamed.

I think that create methods on Constructors should generally follow the argument pattern of Object.create. Things that don't should get a different name.

Basically, <| is sorta half imperative operator, half declaration component.

This may be good enough. It would be nice if it was and we didn't have to have additional procedural APIs for constructing instances of the built-ins. Somebody has already pointed out <| won't work for built-in Date objects because they lack a literal form. I think the best solution for that is to actually reify access to a Date object's timevalue by making it a private named property.

That is an old idea I've brought up from time to time. If that private name were exported, you could even make useful Date subclasses (rather than Date instances that have extended proto chains).

BTW, why did ES1(?) worry about allowing for alternative internal timevalue representations? Are there really any perf issues that depend on whether or not the timevalue is represented as a double or something else?

The extrapolated Gregorian calendar's range in milliseconds was chosen carefully to fit in an IEEE 754 double without loss of precision.

Real implementations decode the double into commonly-accessed fields that would have to track any updates to the milliseconds since (negative for before) the epoch.

Seems like this could be an invisible implementation detail. And is it really worth the effort? How often does anybody set Date components in a situation that is so time-critical that this would matter? (And shouldn't dates be immutable... oh well.)

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 6:38 PM, Allen Wirfs-Brock wrote:

And likewise for Function.create and RegExp.create. Boolean, Number, String, and Date get nothing :-P.

Actually in the <| proposal I define it to work with boolean, number, and string literals on the RHS. Sorta useless but I included them so the complete set of literals was covered. So it really is only Date that didn't get invited to the party.

For ES4 we entertained date literals based on ISO 8601 "T" literals. Couldn't justify 'em; the use-cases were all unlikely hardcodings.

We have a somewhat-troubled proposal in Harmony to make Function.create an alternative Function constructor that takes a leading name parameter, and then parameters and body string parameters. But perhaps that could be renamed Function.createNamed.

I think that create methods on Constructors should generally follow the argument pattern of Object.create. Things that don't should get a different name.

Agreed.

The extrapolated Gregorian calendar's range in milliseconds was chosen carefully to fit in an IEEE 754 double without loss of precision.

Real implementations decode the double into commonly-accessed fields that would have to track any updates to the milliseconds since (negative for before) the epoch.

Seems like this could be an invisible implementation detail.

Certainly, it is invisible.

And is it really worth the effort? How often does anybody set Date components in a situation that is so time-critical that this would matter? (And shouldn't dates be immutable... oh well.)

SunSpider, cough.

# Brendan Eich (13 years ago)

On Jul 8, 2011, at 7:47 PM, Brendan Eich wrote:

And is it really worth the effort? How often does anybody set Date components in a situation that is so time-critical that this would matter? (And shouldn't dates be immutable... oh well.)

SunSpider, cough.

Forgot to say that mutable Date objects were fresh from java.util.Date in 1995.

# David Herman (13 years ago)

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

All of them. They are all generic.

We're speaking too broadly here. It depends on what we want to work how. For example, .map can't magically know how to produce a SubArray as its result if that's how SubArray wants it to work. But what I'm actually more concerned about is the behavior of .length. Does the <| semantics make .length work automagically the way it does for ordinary Array instances?

However, subarray instances have all the internal state and methods that make them true arrays so even if some of the inherited Array methods weren't generic they would still work.

Including .length (which isn't a method, but YKWIM)?

# Brendan Eich (13 years ago)

On Jul 10, 2011, at 3:02 PM, David Herman wrote:

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

All of them. They are all generic.

We're speaking too broadly here. It depends on what we want to work how. For example, .map can't magically know how to produce a SubArray as its result if that's how SubArray wants it to work. But what I'm actually more concerned about is the behavior of .length. Does the <| semantics make .length work automagically the way it does for ordinary Array instances?

Of course, since the thing on the right of <| is a newly minted Array which has its own length.

There's no issue here -- are you thinking of the other case (which I showed), Object.create([])? That's where length doesn't "inherit".

However, subarray instances have all the internal state and methods that make them true arrays so even if some of the inherited Array methods weren't generic they would still work.

Including .length (which isn't a method, but YKWIM)?

The <| operator sets [[Prototype]] in the newborn-because-literally-expressed right operand, to the value of the left operand. So (proto <| [1,2,3]) is an Array instance, no mystery or inheritance of Arrayness required. The trick, as your SubArray example showed, is not to lose the Array.prototype heritage -- proto ought to have Array.prototype on its proto-chain.

'length' is own in each Array instance, maintained by [[Put]] in ES1-3, [[DefineOwnProperty]] in ES5.
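A small runnable illustration of that last point, independent of the <| strawman:

    // 'length' is an own, specially maintained property of true Array instances.
    var real = [];
    real[2] = "x";
    real.length;        // 3 -- maintained by Array's [[DefineOwnProperty]]

    var arrayLike = Object.create(Array.prototype);
    arrayLike[2] = "x";
    arrayLike.length;   // 0 -- merely inherited from Array.prototype; nothing is maintained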

# Brendan Eich (13 years ago)

On Jul 10, 2011, at 3:54 PM, Brendan Eich wrote:

On Jul 10, 2011, at 3:02 PM, David Herman wrote:

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

All of them. They are all generic.

We're speaking too broadly here. It depends on what we want to work how. For example, .map can't magically know how to produce a SubArray as its result if that's how SubArray wants it to work.

This is the real concern, I believe. The Array methods (all generic in that they take array-like |this| -- at least one is Array-specific in argument handling: concat) that return a new array always make an instance of Array.

Alex Russell brought up the idea of extending these array-producing methods to call a user-defined constructor. We discussed doing it by reflecting on this.constructor, but that is not a compatible change: people today call Array.prototype.slice.call(arguments) intending to get an Array instance whose elements come from the arguments object. It would be wrong to change this code to return a new Object, but that's what reflecting on this.constructor would do, because:

js> function f(){return arguments.constructor}

js> f()

function Object() {[native code]}

So we have two choices:

  1. Duplicate the array methods with new twins mutated to reflect on this.constructor -- this is ugly and bloaty, even if we can segregate the methods so as to avoid mangling the twins' names.

  2. Add a new protocol, perhaps enabled by setting a well-known private name object (i.e., a unique public name), call it kResultConstructor, so that the methods can detect something like this[kResultConstructor].

(2) adds a bit of runtime cost to the methods, which are not super-optimized in many ways in today's engines. Not sure that hurts it much, compared to (1). Do modules help us make a better, less bloaty form of (1)?

# Allen Wirfs-Brock (13 years ago)

On Jul 10, 2011, at 4:01 PM, Brendan Eich wrote:

On Jul 10, 2011, at 3:54 PM, Brendan Eich wrote:

On Jul 10, 2011, at 3:02 PM, David Herman wrote:

I'm not sure what Array.prototype methods would or wouldn't work on instances of SubArray.

All of them. They are all generic.

We're speaking too broadly here. It depends on what we want to work how. For example, .map can't magically know how to produce a SubArray as its result if that's how SubArray wants it to work.

This is the real concern, I believe. The Array methods (all generic in that they take array-like |this| -- at least one is Array-specific in argument handling: concat) that return a new array always make an instance of Array.

This is something that is on my list to clean up as part of my [[Class]] reform effort: strawman:es5_internal_nominal_typing

Alex Russell brought up the idea of extending these array-producing methods to call a user-defined constructor. We discussed doing it by reflecting on this.constructor, but that is not a compatible change: people today call Array.prototype.slice.call(arguments) intending to get an Array instance whose elements come from the arguments object. It would be wrong to change this code to return a new Object, but that's what reflecting on this.constructor would do, because:

My thinking was to allow such methods to look for a specific instance property (Smalltalk called it "species", but we might want to use a private name) whose value is the constructor to use to create the result.

The default value of the property would be Array. Same if it was missing. For example:

    const augmentedArrayProto = { species: AugmentedArray, ... };
    function AugmentedArray (...args) { return augmentedArrayProto <| [...args] };

The main problem with this approach is that the developer has to explicitly decide to do so. However, it isn't clear that there is an "automatic" scheme that would preserve backwards compat. Also, if you actually are creating a new kind of Array-like collection, this is probably something you do want to think about, as you don't necessarily always want to produce the same kind of collection as the this value.
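A hedged sketch of how an array-producing method might consult such a "species" property (the names and details here are illustrative of the shape Allen describes, not part of any proposal):

    // Illustrative only: build the result collection via a species lookup.
    function mapViaSpecies(collection, fn) {
        var Ctor = collection.species !== undefined ? collection.species : Array;  // default: Array
        var result = new Ctor();
        for (var i = 0; i < collection.length; i++) {
            if (i in collection) result[i] = fn(collection[i], i, collection);
        }
        result.length = collection.length;
        return result;
    }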

js> function f(){return arguments.constructor}

js> f()

function Object() {[native code]}

So we have two choices:

  1. Duplicate the array methods with new twins mutated to reflect on this.constructor -- this is ugly and bloaty, even if we can segregate the methods so as to avoid mangling the twins' names.

I've considered providing an Array.extensiblePrototype object that would be a better [[Prototype]] than Array.prototype for new collections. However, I always concluded that it would just create confusion. It would be better to take the "species" approach, which is a targeted solution to a specific problem.

  2. Add a new protocol, perhaps enabled by setting a well-known private name object (i.e., a unique public name), call it kResultConstructor, so that the methods can detect something like this[kResultConstructor].

see above

(2) adds a bit of runtime cost to the methods, which are not super-optimized in many ways in today's engines. Not sure that hurts it much, compared to (1).

Except for very small collections, this extra indirection should be a small overhead compared to the actual cost of processing the collection.

# Luke Hoban (13 years ago)

I agree wholeheartedly with these. In fact, I'd go further on (2), and say "Anything that can be done declaratively can also be done imperatively, using ES5 syntax".

Like most principles, I think these are reasonable to keep in mind but not absolute. In particular, I see no sensible way to claim that generators can be "done imperatively" in the old language.

My understanding of generators was naively that they are syntactic sugar for defining an iterator. From that perspective, iterators are the interoperable runtime mechanism that needs to be available in ES5 syntax, and being just duck-typed objects with next(), this appears to be okay.

Re-reading the generators proposal, I was concerned at first that somehow the semantics of the syntactic desugaring might be taking dependencies on the internal properties of the generator objects when consumed in a generator, such as in a "yield* other". However, it looks like even there, the semantics are in terms of the public API on the object, so that a user defined object that provides next/send/throw/close can correctly interoperate.

So I was wrong about iterators being the general interoperable runtime mechanism, but next/send/throw/close objects appear to be fully iterable and consumable in generators. Is that right? If so, it seems safe to consider generators as sugar for producing objects whose visible behavior could have been built independently. And interoperation appears to work cleanly in both directions using these objects.

I hope, and believe, there are actually not very many new runtime capabilities being added in ES.next that don't already have proposed libraries. I do think there will need to be some rationalization of the goal to use built-in modules with the reality of ES5-syntax consumers of these libraries. I'm not sure whether module loaders currently provide a way to do this that would feel accessible.

I agree, but I think this could be done with a minimum of global namespace pollution. For example, let's say we only make the Harmony SystemLoader available to legacy code. That would be enough for ES5 code to:

  • get access to new standard Harmony modules, such as "@name"
  • get access to a Harmony evaluator via SystemLoader.eval
  • get access to user-created Harmony modules

And it wouldn't require overloading the Object constructor -- from here until eternity -- with a bunch of short-term backwards-compatibility cruft.

I generally like the idea of this. It may indeed be able to provide convenient and object-detectable ES5 access to new library/runtime functionality. Reducing the global namespace pollution is certainly a good goal. This would require that the syntax for loading these standard modules from ES5 is simple and can be implemented efficiently. I haven't yet been able to intuit from the module_loaders page what is needed to accomplish each of the above though. For example, if it is the case that loading the "@name" module required putting all my code in a callback passed to SystemLoader.load, that feels like it might be too heavy. Do you have examples of what each of these would look like given the current proposal?

But still, I agree with Allen that we should strive where possible to shoot for the goal of making dynamic/reflective analogs of static constructs. For example, Luke has mentioned that he'd like an ability to dynamically construct module instance objects. I think this is a good goal. But not so much for legacy code to have access to it, as for the ability for meta-level code to dynamically interact with base-level code. For example (using totally made-up API's, please don't bikeshed the names):

    childLoader = parentLoader.create(....);
    childLoader.registerModule("m", childLoader.buildModule({
        x: 42,
        f: function() { ... }
    }));

Great - agreed that this is a really valuable addition to module_loaders. A few questions on the example: (1) why is a child loader needed? (2) any particular reason why the buildModule and registerModule are separated? (3) Would this allow declaring module dependencies for the new module? As one comparison, the require.js module definition syntax is simpler in terms of APIs, but also requires an extra closure due to module dependencies, which may also be needed in the model above:

define("m", [], function() {
    return {
        x: 42,
        f: function() { ...  }
    }
});

ASIDE: It still feels a bit odd to see ES5 syntax running on ES.next runtime referred to as 'legacy'. For one thing, it doesn't even exist yet! But there will be an enormous userbase in this situation beginning in a few years, and probably for many years to come. Opting into ES6 syntax requires opting into at least strict mode breaking changes and a new script tag. In contrast, every piece of existing JavaScript code on the web will have the opportunity to object-detect and leverage targeted new runtime functionality. Module_loaders in particular seems to have a ton to offer to ES5 syntax, by providing a standard means for module definition and consumption for existing JavaScript code, which can be shimmed out to a slower JS implementation where not available.

Luke

# Brendan Eich (13 years ago)

On Jul 11, 2011, at 10:00 PM, Luke Hoban wrote:

So I was wrong about iterators being the general interoperable runtime mechanism, but next/send/throw/close objects appear to be fully iterable and consumable in generators. Is that right?

That's right, there is no nominal type test.

If so, it seems safe to consider generators as sugar for producing objects whose visible behavior could have been built independently. And interoperation appears to work cleanly in both directions using these objects.

You're talking about interoperation in the "old script calls new generator" sense. That's using generators from old script, not implementing them using imperative API in old script.

The contested principle is "Anything that can be done declaratively can also be done imperatively, using ES5 syntax".

You can't write a yield expression using an imperative API. Yielding is something "that can be done" (whether declaratively or by expression syntax, let's not quibble). So the principle overreaches.

As for someone using "legacy" to describe ES5-ish JS (JS of today), don't take it to heart. JS is JS, the new version is just that. We survived ES1/2 to ES3. We'll survive the next turn.

# Luke Hoban (13 years ago)

If so, it seems safe to consider generators as sugar for producing objects whose visible behavior could have been built independently. And interoperation appears to work cleanly in both directions using these objects.

You're talking about interoperation in the "old script calls new generator" sense. That's using generators from old script, not implementing them using imperative API in old script.

I'm actually interested in both directions of interoperation. For example, future-jQuery might want to expose objects that are iterable and can be consumed in generators when used in ES6 syntax, but without itself opting into ES6. It could presumably do so by implementing the required state machines by hand and exposing next/send/throw/close.

Luke

# Brendan Eich (13 years ago)

On Jul 11, 2011, at 10:38 PM, Luke Hoban wrote:

If so, it seems safe to consider generators as sugar for producing objects whose visible behavior could have been built independently. And interoperation appears to work cleanly in both directions using these objects.

You're talking about interoperation in the "old script calls new generator" sense. That's using generators from old script, not implementing them using imperative API in old script.

I’m actually interested in both directions of interoperation. For example, future-jQuery might want to expose objects that are iterable and can be consumed in generators when used in ES6 syntax, but without itself opting into ES6. It could presumably do so by implementing the required state machines by hand and exposing next/send/throw/close.

Implementing a structural or duck-typed protocol such as the iteration protocol, or the generator interface, with "legacy" (I kid!) JS, meaning no new syntax, is absolutely part of the design. We've supported that since JS1.7 (2006).
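A hedged sketch of such a hand-written implementation, exposing the duck-typed next/send/throw/close interface discussed here (the termination signal follows the era's StopIteration-based proposal; details are illustrative, not normative):

    // ES5, no new syntax: an object usable wherever a generator is expected.
    function counter(limit) {
        var i = 0;
        return {
            next: function () {
                if (i >= limit) throw StopIteration;   // old-protocol end-of-iteration signal
                return i++;
            },
            send: function (v) { return this.next(); },   // ignore sent values in this simple case
            "throw": function (e) { throw e; },
            close: function () { i = limit; }
        };
    }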

That's not the same as somehow getting an API callable from old script, let's call it doYield(e), that captures the shallow continuation and returns e to the sender or .next caller. Yet such an API is what the Principle, with full capital P if not all-caps, implies. That's where I draw the line.

# David Herman (13 years ago)

My understanding of generators was naively that they are syntactic sugar for defining an iterator.

Well, I think I understand what you're getting at: there's a sense in which generators don't add the ability to do something that's absolutely impossible to express in ES5.

OTOH, generators make it possible to express something that otherwise -- in general -- requires a deep translation (in particular, a CPS transformation of the entire function). This is more than syntactic sugar. This is a new expressiveness that wasn't there before. The slope between Felleisen-expressiveness ("is it possible to do X without a deep transformation?") and Turing-expressiveness ("is it possible to do X at all?") is a slippery one.

I don't want to quibble about philosophical pedantry, but that's really all I mean by my shyness about General Principles. They're strictly harder to decide on than the problem at hand. It's too easy to lose focus and waste time arguing abstractions. And then even when we agree on general principles, when we get into concrete situations it's too easy to start quibbling about exactly how to interpret and apply the principles for the problem at hand.

I prefer to work out from the middle of the maze, learning and refining principles as I go, rather than trying to decide on them all up front.

Re-reading the generators proposal, I was concerned at first that somehow the semantics of the syntactic desugaring might be taking dependencies on the internal properties of the generator objects when consumed in a generator, such as in a “yield* other”. However, it looks like even there, the semantics are in terms of the public API on the object, so that a user defined object that provides next/send/throw/close can correctly interoperate.

Yup, nothing to worry about there. Fear not the yield*.

I haven’t yet been able to intuit from the module_loaders page what is needed to accomplish each of the above though. For example, if it is the case that loading the “@name” module required putting all my code in a callback passed to SystemLoader.load, that feels like it might be too heavy. Do you have examples of what each of these would look like given the current proposal?

I need to have another go 'round at the module loaders API, and I will get there before too long. Sorry about that.

But it should not be necessary to have a callbacky version for builtin modules like "@name" -- the API should include a simple, synchronous way to test for the presence of modules in a loader. So it should be possible to do something like SystemLoader.getLoaded("@name").
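From ES5 code that could be as simple as the following (same caveat: the API shape is hypothetical):

    // Synchronous, object-detectable access to a built-in module from old script.
    var nameModule = (typeof SystemLoader !== "undefined" && SystemLoader.getLoaded)
        ? SystemLoader.getLoaded("@name")
        : null;   // not available: fall back or shim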

(1) why is a child loader needed?

Not needed, just the point of the example. Let me make the example more concrete:

You're writing an online code editor like CodeMirror or ACE, and you want to support the programmer running JS code from within the editor. So you want to run that code in a sandbox, so that it doesn't see the data structures that are implementing the IDE itself. So you create a child loader. Now, within that child loader, you might want the ability to construct some initial modules that make up the initial global environment that the user's code is running in. So you use buildModule to dynamically construct those modules.
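Putting that together with the made-up loader API from earlier in the thread (every name here is illustrative, per the "don't bikeshed" caveat):

    // A sandboxed child loader for an in-browser code editor.
    var sandbox = parentLoader.create({ /* restricted globals for user code */ });
    sandbox.registerModule("editor", sandbox.buildModule({
        log: function (msg) { appendToOutputPane(msg); }   // hypothetical IDE helper
    }));
    sandbox.eval(userSource);   // run the user's program against the sandbox only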

(2) any particular reason why the buildModule and registerModule are separated?

Because you might want to build a single shared module instance that you register in multiple loaders. These are orthogonal primitives that can be composed. It may also make sense to have conveniences for common operations, layered on top of the primitives.

(3) Would this allow declaring module dependencies for the new module? As one comparison, the require.js module definition syntax is simpler in terms of APIs, but also requires an extra closure due to module dependencies, which may also be needed in the model above:

define("m", [], function() {
    return {
        x: 42,
        f: function() { …  }
    }
});

I think the more straightforward approach is just to pre-load (and pre-register in the loader, if appropriate) whatever dependencies are needed.

ASIDE: It still feels a bit odd to see ES5 syntax running on ES.next runtime referred to as ‘legacy’.

No pejorative overtones intended. We just don't yet have any decent terminology for distinguishing the full ES.next front end from the backwards-compatible one.

# Allen Wirfs-Brock (13 years ago)

On Jul 12, 2011, at 11:28 AM, David Herman wrote:

No pejorative overtones intended. We just don't yet have any decent terminology for distinguishing the full ES.next front end from the backwards-compatible one.

In the working draft of the specification I'm using "extended code" to mean the extended ES.next language constructs that will require opt-in access. I also redefine "strict code" to encompass both ES5 strict mode code and extended code.

# Luke Hoban (13 years ago)

Well, I think I understand what you're getting at: there's a sense in which generators don't add the ability to do something that's absolutely impossible to express in ES5.

  OTOH, generators make it possible to express something that otherwise -- in general -- requires a deep translation (in particular, a CPS transformation of the entire function). This is more than syntactic sugar. This is a new expressiveness that wasn't there before. The slope between Felleisen-expressiveness ("is it possible to do X without a deep transformation?") and Turing-expressiveness ("is it possible to do X at all?") is a slippery one.

  I don't want to quibble about philosophical pedantry, but that's really all I mean by my shyness about General Principles. They're strictly *harder* to decide on than the problem at hand. It's too easy to lose focus and waste time arguing abstractions. And then even when we agree on general principles, when we get into concrete situations it's too easy to start quibbling about exactly how to interpret and apply the principles for the problem at hand.

  I prefer to work out from the middle of the maze, learning and refining principles as I go, rather than trying to decide on them all up front.

Agreed. The principle itself is not really the primary point. But I do think there is a fairly crisp line that can be drawn at the level of "any observable behaviour of an object that can be created with extended code can also be provided by an object created without using extended code". Ultimately though, the goal is just to provide the most value to the most users.

  But it should not be necessary to have a callbacky version for builtin modules like "@name" -- the API should include a simple, synchronous way to test for the presence of modules in a loader. So it should be possible to do something like SystemLoader.getLoaded("@name").

That's good to hear. I do hope we can ultimately make this API even simpler - both for convenient feature-detection and for authoring modular code in normal syntax. "require('@name')" is a pleasantly simple primitive for modularity - while it may not be possible to be quite so terse in the ES.next API, I hope we can move in that direction.

(1) why is a child loader needed?

  Not needed, just the point of the example. Let me make the example more concrete:

  You're writing an online code editor like CodeMirror or ACE, and you want to support the programmer running JS code from within the editor. So you want to run that code in a sandbox, so that it doesn't see the data structures that are implementing the IDE itself. So you create a child loader. Now, within that child loader, you might want the ability to construct some initial modules that make up the initial global environment that the user's code is running in. So you use buildModule to dynamically construct those modules.


  (2) any particular reason why the buildModule and registerModule are separated?

  Because you might want to build a single shared module instance that you register in multiple loaders. These are orthogonal primitives that can be composed. It may also make sense to have conveniences for common operations, layered on top of the primitives.

This use case makes sense. Though generally the use of child loaders does feel like a <10% case for modules. I'll be interested in discussing the conveniences for common operations here, once there are more details on the primitives to provide feedback on.

  (3) Would this allow declaring module dependencies for the new module?  As one comparison, the require.js module definition syntax is simpler in terms of APIs, but also requires an extra closure due to module dependencies, which may also be needed in the model above:

      define("m", [], function() {
          return {
              x: 42,
              f: function() { ...  }
          }
      });

  I think the more straightforward approach is just to pre-load (and pre-register in the loader, if appropriate) whatever dependencies are needed.

I'm not sure I follow what this would look like. Why would I need a new loader at all in this case? I think I just need to see some more examples to grok how this API is intended to be used.

Luke

# Brendan Eich (13 years ago)

On Jul 14, 2011, at 12:00 AM, Luke Hoban wrote:

I don't want to quibble about philosophical pedantry, but that's really all I mean by my shyness about General Principles. They're strictly harder to decide on than the problem at hand. It's too easy to lose focus and waste time arguing abstractions. And then even when we agree on general principles, when we get into concrete situations it's too easy to start quibbling about exactly how to interpret and apply the principles for the problem at hand.

I prefer to work out from the middle of the maze, learning and refining principles as I go, rather than trying to decide on them all up front.

Agreed. The principle itself is not really the primary point. But I do think there is a fairly crisp line that can be drawn at the level of “any observable behaviour of an object that can be created with extended code can also be provided by an object created without using extended code”. Ultimately though, the goal is just to provide the most value to the most users.

We could add some requirements to harmony:harmony. We already have this first bullet from "Means":

Minimize the additional semantic state needed beyond ES5.

Perhaps this should be an end in itself, but I don't think so -- not as stated. We serve the users, which may require new (minimized) semantic state, even if on the inside of the observable boundary (e.g., generators, yield).

To your point, if there are shared-heap-exposed new semantics that would surprise old script, that would be bad, but not strictly verboten in my view -- not the end of the world. We've had "unusual" host objects in various browsers (notably IE) for a long time.

But there's novel and then there is crazy.

Here's an example of crazy: were we to add call/cc to ES.something, an old script could lose its invariants by innocently calling a new function via the shared heap, where the new function captures the full continuation, returns to the event loop, and much later calls the continuation.

This is one reason we won't add call/cc (the other is that implementors would have to capture native activations, to preserve functional abstraction over self-hosted vs. embedding-native-code-hosted implementation, and that raises the bar, risking some defecting, harming interoperation).

If you want to propose some changes to harmony:harmony please let me know. We could waste a lot of time on abstract principles, but any more concrete and crisp fixes or additions are welcome.