Continuing woes in reading the ES6 spec language
On Sep 10, 2013, at 5:38 PM, Oliver Hunt wrote:
Okay, I'm having a really hard time navigating and following the current spec layout. The many different places that behavior is specified for the same productions, and the absence of direct links between them, make it extraordinarily hard to read and understand.
In my current reading I was attempting to determine in what scope default initializers are evaluated, but from reading the spec (starting at #14.1) I could not find initialization of any environment record, and while there were references to various places in which we determine the expected argument count, or whether there are initializers, I could not find anything that handles default initializers. I'm sure it's there but I have spent many hours following random names from place to place in the spec (again, this is hard because the same productions are described repeatedly for the many different semantic modes that they're in), and I still do not know what the expected results of evaluation are.
Try starting with people.mozilla.org/~jorendorff/es6-draft.html#sec-9.1.16.1. This is the [[Call]] behavior of an ordinary function object (one whose body is defined using ES code). You will see it set the environment record for the invoked function, and in step 13 you can follow the link to Function Declaration Instantiation (people.mozilla.org/~jorendorff/es6-draft.html#sec-9.1.16.1). Step 21 of that algorithm calls Binding Initialisation for the formals, which is the syntactic production for the actual formal parameters of the function.
The current spec layout seems to be a substantial regression in terms of usability vs. the old spec. Some of this would be helped if we were using html and links, but a lot of it seems controlled by the decision to remove the step-by-step evaluation ordering of the ES5 spec.
Not sure what you think was removed. There are a variety of runtime evaluation algorithms (such as the Binding Initialisation mentioned above) rather than simply calling them all by the catch-all name "evaluation". But everything is still step-by-step.
But yes, we now have a more complex language and it takes a more complex specification to describe it.
Suggestions for improving navigation are appreciated.
On Sep 10, 2013, at 6:09 PM, Allen Wirfs-Brock wrote:
Suggestions for improving navigation are appreciated.
For example, I just added the following to clause 14 immediately before 14.1
NOTE Various ECMAScript language elements cause the creation of ordinary function objects (9.1.16). Evaluation of such functions starts with the execution of their [[Call]] internal method (9.1.16.1).
In the HTML and PDF versions the section reference will be links.
On Sep 10, 2013, at 5:38 PM, Oliver Hunt wrote:
The current spec layout seems to be a substantial regression in terms of usability vs. the old spec.
Allen, here are a few concrete usability concerns that I think could be addressed... it just takes time. :-|
In the ES6 drafts, as opposed to ES5:
- many more algorithms do not have unique names.
- more sections contain a lot of algorithms (for example, 14.1.1.2 contains 13 implementations of 5 different methods).
- more algorithms have names that are multiple words, like "Binding Initialisation" (with a space between, as opposed to "ToString", "GetValue", "CheckObjectCoercible").
These things combined have made it harder for me to talk about the spec with others in practice. Just reporting a bug in an algorithm requires figuring out how to refer to it.
Our implementation has comments like:
// ES5 8.12.9 Step 1
There is no similarly concise way to refer to some ES6 algorithms.

Proposed fixes:
- Give every algorithm (that is, every list of steps) its own section heading and/or its own unique name. There will be a lot. It's worth it. Ideally, we'd have a systematic naming scheme ("OrdinaryFunction»[[Call]]", or "FormalParameterList#3»BindingInitialization" since FormalParameterList has multiple productions). These names could live in the document's margins or something, if making them all section headings is just too much.
- Run together all multiword method names ("BindingInitialisation").
- There is no way to find, for example, all Binding Initialisation implementations. You can search the document, but having found one implementation, you can't tell if there are others.
In ES6 many more methods have multiple implementations, and they have more implementations on average.
Also it's not clear whether you're looking at all the implementations or if there are others, possibly living in distant sections. Algorithms are not grouped strictly by name ("horizontally"), nor by class/grammar production ("vertically"). It's a hybrid approach. I can see why. But it's surprising.
This is a major problem for people trying to read the spec. Right now, if a step says "Perform Binding Instantiation for lhs..." there is nothing for "Binding Instantiation" to link to.
I propose adding a section for each method that has multiple implementations. This new section would (a) have the name of the method as its heading, and serve as the link target for all the method's call sites; (b) explain informally what the method is—its purpose and roughly what you can expect to have happen when you see it being called—and (c) list all algorithms that implement it, with links (except Evaluation, which has too many implementations to list). The spec currently does this, for some methods, using tables (Table 4 and Table 5 in 6.1.6.2; and Annex F)—but not all, and the implementation lists are out of date. I can see this sort of thing being left until later in the ES6 cycle, but it would be awfully useful to have (a) and (b) now.
- Along similar lines, it would be fantastic if every builtin class had something like Table 37 — I don't know if a table is the right format, but just a list of the data properties and an informal description of what they're for.
"Show me your flow charts and conceal your tables and I shall continue to be mystified;" and all that.
- Can we rename either PropertyDescriptor.[[Get/Set]] or Object.[[Get/Set]]? Maybe PropertyDescriptor.[[GetAccessor]]; or else Object.[[GetProperty]].
Not a new issue in ES6. But fixable in ES6.
These are great points, Jason; thanks for putting into concrete words some of my vague misgivings about reading the spec. I won't +1 everything individually, but I'll single out this one:
Along similar lines, it would be fantastic if every builtin class had something like Table 37 — I don't know if a table is the right format, but just a list of the data properties and an informal description of what they're for.
This is exactly what I ended up wanting to do for promises, resulting in 1. I think it is seriously helpful while reading the subsequent algorithms.
Thanks for the great feedback.
Annex F was my own attempt at keeping track of all the algorithms, but it outgrew me and it's too much trouble to manually keep it up to date while things are still in flux. The other tension I've run into is between easy identification via section numbers and a table of contents that's so big it's hard to get around within a navigation pane.
For now, I'm going to try giving each semantic routine its own subsection, and hence a section number. Note this may include multiple algorithms corresponding to various productions. I'll also add a "see also" under each such heading that references other subsections that implement the same semantic action for other productions. That should make it easier to navigate to all implementations of, for example, "BindingInitialisation".
What we have here is a highly polymorphic nonlinear program, and as any old Smalltalker knows, the best way to find your way around one is to be able, at any reference to a name, to pop up a navigable list of both implementors and callers of that name. Maybe that's something we could get working in the HTML version. We're already halfway there with some of the definition links.
For now, I need to mostly focus on getting the spec. feature complete by the end of the year. After that we will have some time where we can cleanup the presentation.
Still, keep the suggestions coming and feel free to file bugs on such things.
On 12 September 2013 01:41, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
For now, I'm going to try giving each semantic routine its own subsection, and hence a section number. Note this may include multiple algorithms corresponding to various productions. I'll also add a "see also" under each such heading that references other subsections that implement the same semantic action for other productions. That should make it easier to navigate to all implementations of, for example, "BindingInitialisation".
What we have here is a highly polymorphic nonlinear program, and as any old Smalltalker knows, the best way to find your way around one is to be able, at any reference to a name, to pop up a navigable list of both implementors and callers of that name. Maybe that's something we could get working in the HTML version. We're already halfway there with some of the definition links.
Well, yes, but just to be clear, the readability problems are not due to polymorphism or non-linearity as such but mainly due to OO-style decomposition. That is simply not a good match for a language spec (to put it mildly). Tool support may be a viable work-around for dealing with code, but not for something whose primary communication form is in (physical or virtual) print.
A spec is a system which (a) wants to be transparent (as opposed to encapsulated), and (b) is pretty much closed-world (extension points notwithstanding). Consequently, none of the advantages of OO apply, and it becomes primarily a barrier.
(I'm not proposing any change to that at this stage, obviously. But it is a growing technical debt that we might consider clearing at some point.)
On Sep 12, 2013, at 7:51 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 12 September 2013 01:41, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
For now, I'm going to try giving each semantic routine its own subsection, and hence a section number. Note this may include multiple algorithms corresponding to various productions. I'll also add a "see also" under each such heading that references other subsections that implement the same semantic action for other productions. That should make it easier to navigate to all implementations of, for example, "BindingInitialisation".
What we have here is a highly polymorphic nonlinear program, and as any old Smalltalker knows, the best way to find your way around one is to be able, at any reference to a name, to pop up a navigable list of both implementors and callers of that name. Maybe that's something we could get working in the HTML version. We're already halfway there with some of the definition links.
Well, yes, but just to be clear, the readability problems are not due to polymorphism or non-linearity as such but mainly due to OO-style decomposition. That is simply not a good match for a language spec (to put it mildly). Tool support may be a viable work-around for dealing with code, but not for something whose primary communication form is in (physical or virtual) print.
I absolutely agree with this - the primary users of the spec document are implementers and ES developers, not tools. For that reason the spec should be tailored to human readers, with machine readability seen as a nice optional extra.
A spec is a system which (a) wants to be transparent (as opposed to encapsulated), and (b) is pretty much closed-world (extension points notwithstanding). Consequently, none of the advantages of OO apply, and it becomes primarily a barrier.
(I'm not proposing any change to that at this stage, obviously. But it is a growing technical debt that we might consider clearing at some point.)
I disagree here: a spec that is hard to read is a spec that is hard to implement correctly. It doesn't matter whether it is technically unambiguous if it is hard enough to read that an implementer can't determine what the unambiguous behavior is.
I've spent a lot of time now trying to read this spec, and have still not been able to determine the semantics of a number of the new language features. I guess it's possible I'm just being thick, but I'd like to imagine that after so many years I know how to read and implement a spec.
Just as a worked example, answering Oliver's question about default expressions took me the better part of an hour.
The default-expression h() in code like
function f(x=h(), y) {
  function h() { ... }
}
is evaluated in the scope of f, as I expected, so {x, h, y} are all in scope there.
But I was surprised by the timing; h() is evaluated (a) before y is populated with the actual argument value; (b) before the arguments object is fully initialized.
That last surprise seems really unfortunate, since the expression can access the arguments object, observing it before and after CompleteMapped/StrictArgumentsObject, or even cause those to fail. Can we change that?
It also means that we have to store actual arguments "somewhere else" while default-expressions are being evaluated, since the spec requires y to be undefined while the default-expression h() is being evaluated.
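Setting the arguments-object wrinkle aside, the left-to-right timing itself is easy to observe: a later parameter's default can read an earlier parameter, but not vice versa. A small sketch (the name pair is hypothetical, and this is just how the draft's ordering plays out in an ES2015-capable engine):

```javascript
// Parameters are initialized left to right, so `second`'s default
// expression can read the already-initialized `first`.
function pair(first = 1, second = first * 2) {
  return [first, second];
}

pair();      // [1, 2]
pair(10);    // [10, 20]
pair(10, 0); // [10, 0]
```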
Here are the steps (many are elided).
- Start in 9.1.16.1 Ordinary Function [[Call]] (thisArgument, argumentsList).
- Step 8 or 9 creates localEnv (8 if we're calling an arrow function, 9 otherwise)
- Steps 10-12 make localEnv the current scope
- Step 13 calls 9.1.16.11 Function Declaration Initialisation, passing localEnv
- Step 5 of that algorithm calls VarScopedDeclarations(code).
- 13.1.1 and 15.1.1 contain implementations of VarScopedDeclarations and TopLevelVarScopedDeclarations.
- VarScopedDeclarations(code) returns a list of the top-level VariableStatements and FunctionDeclarations in the function body. Note that VariableStatements in blocks are not included. (A bug?)
- Step 8 creates bindings for nested functions.
- Step 9 creates bindings for parameters. They're all undefined so far.
- Step 11 creates a binding for "arguments".
- Step 13 creates bindings for vars (not using the result of step 5, but a different static semantic routine, VarDeclaredNames, which I assumed, but did not check, is accurate for this purpose).
- Step 15 creates bindings for let/const/class declarations.
- Step 16 actually creates functions (in the scope of the new environment) to populate each nested function binding created in step 8.
- Steps 18-20 create the arguments object and populates the binding created in step 11.
- Step 21 does Binding Instantiation, which is defined in many places, but here we care about 14.1.1.2.
- That invokes Indexed Binding Instantiation on the parts of the FormalParameters, which is defined further down in 14.1.1.2.
- Which ultimately performs Indexed Binding Initialisation for each FormalParameter. (This call site is step 3 of the 8th algorithm in 14.1.1.2.)
- I wasn't able to find a definition for that, but syntactically a FormalParameter is just a BindingElement, and Indexed Binding Initialization is defined for those, in the 15th algorithm in 13.2.3.2.
- Step 6 of that says, if an Initialiser_opt (that is, a default value expression) is present, and the value is undefined, we evaluate the Initializer. Since it doesn't say anything about in what scope, I assume it uses the current scope, established way back at the beginning of this adventure.
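Assuming that current-scope reading is right, a default expression can see outer bindings and earlier parameters. A runnable sketch of the consequence (names hypothetical; the draft's exact scoping was still in flux at this point):

```javascript
const label = "outer";

// `a`'s default reads the outer binding; `b`'s default reads the
// already-initialized parameter `a`.
function g(a = label, b = a.toUpperCase()) {
  return [a, b];
}

g();     // ["outer", "OUTER"]
g("hi"); // ["hi", "HI"]
```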
On Sep 12, 2013, at 12:40 PM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
Just as a worked example, answering Oliver's question about default expressions took me the better part of an hour.
The default-expression h() in code like
function f(x=h(), y) { function h() { ... } }
is evaluated in the scope of f, as I expected, so {x, h, y} are all in scope there.
But I was surprised by the timing; h() is evaluated (a) before y is populated with the actual argument value; (b) before the arguments object is fully initialized.
That last surprise seems really unfortunate, since the expression can access the arguments object, observing it before and after CompleteMapped/StrictArgumentsObject, or even cause those to fail. Can we change that? From talking to SpiderMonkey folk it sounds like SM nukes the |arguments| identifier entirely if you use any of the new parameter logic (deconstruction, defaults, or rest params). I would be happy with that.
The spec (from my understanding) says that we can't declare a parameter named arguments but doesn't limit reads of it. It also doesn't limit declarations of a local named arguments.
On Sep 12, 2013, at 12:45 PM, Oliver Hunt wrote:
On Sep 12, 2013, at 12:40 PM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
Just as a worked example, answering Oliver's question about default expressions took me the better part of an hour.
The default-expression h() in code like
function f(x=h(), y) { function h() { ... } }
is evaluated in the scope of f, as I expected, so {x, h, y} are all in scope there.
But I was surprised by the timing; h() is evaluated (a) before y is populated with the actual argument value; (b) before the arguments object is fully initialized.
That last surprise seems really unfortunate, since the expression can access the arguments object, observing it before and after CompleteMapped/StrictArgumentsObject, or even cause those to fail. Can we change that? From talking to SpiderMonkey folk it sounds like SM nukes the |arguments| identifier entirely if you use any of the new parameter logic (deconstruction, defaults, or rest params). I would be happy with that.
That isn't what we agreed to within TC39, as I recall. Instead, if the parameter list uses any new syntax (see the IsSimpleParameterList static semantic routine in 14.1) then it gets an array-like arguments object, just like a strict function gets. This should be accounted for in 9.1.16.11 Function Declaration Instantiation but apparently isn't. That's a spec. bug that I'll fix.
Other static restrictions on formal parameters are defined in the Early Error section of 14.1.
Function Declaration Instantiation currently only initializes the arguments binding at its very end, after initializing all the formal parameters in step 21. So, as currently spec'ed, any reference to arguments in a default value initializer should throw. I believe this is wrong; step 23 (initializing arguments) needs to move above step 21.
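A quick probe for this ordering question (a hypothetical sketch, not the spec's own example; under the draft as written the reference would throw, whereas moving step 23 above step 21 gives the behavior sketched here):

```javascript
// Can a default initializer observe the arguments object?
function g(a = arguments.length) {
  return a;
}

g();             // 0 -- if `arguments` is initialized before the formals
g(undefined, 9); // 2 -- an explicit undefined still triggers the default
```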
The spec (from my understanding) says that we can't declare a parameter named arguments but doesn't limit reads of it. It also doesn't limit declarations of a local named arguments.
See early errors for BindingIdentifer in 13.2. In strict code, it is always illegal to declare 'eval' or 'arguments' using any binding form.
In general, I suggest asking me things before spending too much time puzzling them out. There are spec. bugs and things that are missing. If you can't figure something out relatively quickly, let me know. It is quite likely to be a bug, or an indication of an area where I need to add clarifying material.
On 9/12/2013 12:45 PM, Oliver Hunt wrote:
From talking to SpiderMonkey folk it sounds like SM nukes the |arguments| identifier entirely if you use any of the new parameter logic (deconstruction, defaults, or rest params). I would be happy with that.
This doesn't seem to be accurate, or I'm not understanding what you're saying. In SpiderMonkey, I see:
function argsAsDefault(x = arguments){
  return x;
}
argsAsDefault(); // returns 0 length Arguments object

function ignoreDestructuredParam({ x }){
  return arguments;
}
ignoreDestructuredParam({ x: 10 }); // returns 1 length Arguments object

function restArgs(...rest){
  return arguments;
}
restArgs(1, 2, 3); // SyntaxError: 'arguments' object may not be used in conjunction with a rest parameter
I thought we agreed with Andreas Rossberg's proposal to isolate default expressions from hoisted body declarations, as if (kind of) desugaring
var x = "outer"; function f(a = x, b = y) { var x = "inner"; function y() {} body code here; }
to
var x = "outer"; function f(a, b) { if (a === undefined) a = x; if (b === undefined) b = y; return function () { var x = "inner"; function y() {} body code here; }.call(this, a, b); }
Did I misremember?
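That two-scope desugaring can be written out runnably (a sketch, with names hypothetical; the `b = y` part is dropped here, since the whole point is that `y` is not visible outside the inner function):

```javascript
var x = "outer";

function f(a) {
  if (a === undefined) a = x; // the default sees the OUTER x...
  return function () {
    var x = "inner";          // ...not this hoisted body binding
    return [a, x];
  }.call(this);
}

f();    // ["outer", "inner"]
f("p"); // ["p", "inner"]
```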
On Sep 12, 2013, at 12:40 PM, Jason Orendorff wrote:
Just as a worked example, answering Oliver's question about default expressions took me the better part of an hour.
The default-expression h() in code like
function f(x=h(), y) { function h() { ... } }
is evaluated in the scope of f, as I expected, so {x, h, y} are all in scope there.
But I was surprised by the timing; h() is evaluated (a) before y is populated with the actual argument value;
yup, initialization happens left to right and bindings are in their "temporal dead zone" until they are initialized
(b) before the arguments object is fully initialized.
I think that's a spec. bug.
That last surprise seems really unfortunate, since the expression can access the arguments object, observing it before and after CompleteMapped/StrictArgumentsObject, or even cause those to fail. Can we change that?
yes
It also means that we have to store actual arguments "somewhere else" while default-expressions are being evaluated, since the spec requires y to be undefined while the default-expression h() is being evaluated.
Here are the steps (many are elided).
- Start in 9.1.16.1 Ordinary Function [[Call]] (thisArgument, argumentsList).
- Step 8 or 9 creates localEnv (8 if we're calling an arrow function, 9 otherwise)
- Steps 10-12 make localEnv the current scope
- Step 13 calls 9.1.16.11 Function Declaration Initialisation, passing localEnv
- Step 5 of that algorithm calls VarScopedDeclarations(code).
- 13.1.1 and 15.1.1 contain implementations of VarScopedDeclarations and TopLevelVarScopedDeclarations.
- VarScopedDeclarations(code) returns a list of the top-level VariableStatements and FunctionDeclarations in the function body. Note that VariableStatements in blocks are not included. (A bug?)
Doesn't matter; we're only using that list to filter out top-level function declarations (step 9).
- Step 8 creates bindings for nested functions.
- Step 9 creates bindings for parameters. They're all undefined so far.
- Step 11 creates a binding for "arguments".
- Step 13 creates bindings for vars (not using the result of step 5, but a different static semantic routine, VarDeclaredNames, which I assumed, but did not check, is accurate for this purpose).
yup,
- Step 15 creates bindings for let/const/class declarations.
- Step 16 actually creates functions (in the scope of the new environment) to populate each nested function binding created in step 8.
- Steps 18-20 create the arguments object and populates the binding created in step 11 .
Step 12? At least in my draft.
But that binding isn't initialized until step 23.c, which I think is a bug. As is, it prevents arguments from being referenced in initializers.
- Step 21 does Binding Instantiation, which is defined in many places but here we care about 14.1.1.2.
- That invokes Indexed Binding Instantiation on the parts of the FormalParameters, which is defined further down in 14.1.1.2.
- Which ultimately performs Indexed Binding Initialisation for each FormalParameter. (This call site is step 3 of the 8th algorithm in 14.1.1.2.)
- I wasn't able to find a definition for that, but syntactically a FormalParameter is just a BindingElement, and Indexed Binding Initialization is defined for those, in the 15th algorithm in 13.2.3.2.
Exactly, the semantics for BindingElement apply. It's important to apply the chain production semantics evaluation rule described in 5.2 paragraph 4: IndexedBindingInitialization(FormalParameter : BindingElement) = IndexedBindingInitialization(BindingElement)
- Step 6 of that says, if an Initialiser_opt (that is, a default value expression) is present, and the value is undefined, we evaluate the Initializer. Since it doesn't say anything about in what scope, I assume it uses the current scope, established way back at the beginning of this adventure.
See up at the very top: "steps 10-12 make localEnv the current scope".
On Sep 12, 2013, at 1:50 PM, Brendan Eich wrote:
I thought we agreed with Andreas Rossberg's proposal to isolate default expressions from hoisted body declarations, as if (kind of) desugaring
Damned if I know...there is a big note in the draft that says: "This version reflects the consensus as of the Sept. 2012 TC39 meeting. However, it now appears that the binding semantics of formal parameters is likely to change again. " So, this may not be the latest consensus and I'm not actually sure there is a current consensus.
I know some of us weren't very happy with that de-sugaring.
I'm pretty sure some decisions were left hanging. I'll need to review notes, etc.
Allen
On Thu, Sep 12, 2013 at 11:14 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
On Sep 12, 2013, at 1:50 PM, Brendan Eich wrote:
I thought we agreed with Andreas Rossberg's proposal to isolate default expressions from hoisted body declarations, as if (kind of) desugaring
Damned if I know...there is a big note in the draft that says: "This version reflects the consensus as of the Sept. 2012 TC39 meeting. However, it now appears that the binding semantics of formal parameters is likely to change again. " So, this may not be the latest consensus and I'm not actually sure there is a current consensus.
I know some of us weren't very happy with that de-sugaring.
I'm pretty sure some decisions were left hanging. I'll need to review notes, etc.
It looks like the last time TC39 discussed this was 2012-11-29.
The resolution back then was that Andreas should draft a proposal for the next meeting. I can't find any mention of that being discussed in later meetings, though, so this seems to have been left undecided indeed.
On 13 September 2013 11:27, Till Schneidereit <till at tillschneidereit.net> wrote:
On Thu, Sep 12, 2013 at 11:14 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
On Sep 12, 2013, at 1:50 PM, Brendan Eich wrote:
I thought we agreed with Andreas Rossberg's proposal to isolate default expressions from hoisted body declarations, as if (kind of) desugaring
Damned if I know...there is a big note in the draft that says: "This version reflects the consensus as of the Sept. 2012 TC39 meeting. However, it now appears that the binding semantics of formal parameters is likely to change again. " So, this may not be the latest consensus and I'm not actually sure there is a current consensus.
I know some of us weren't very happy with that de-sugaring.
I'm pretty sure some decisions were left hanging. I'll need to review notes, etc.
It looks like the last time TC39 discussed this was 2012-11-29.
The resolution back then was that Andreas should draft a proposal for the next meeting. I can't find any mention of that being discussed in later meetings, though, so this seems to have been left undecided indeed.
Ouch. I'm sorry, I obviously dropped the ball on this. I'll try to draft something for next week.
Okay so we were discussing among ourselves, and we thought that a sane "desugaring" for
function f(x=expr) { ... }
would be
function f(x) { if (arguments.length < 1) x=expr; ... }
This would produce a consistent and easy-to-follow semantic for default evaluation; it doesn't introduce any new types of scope, nor does it require any new concepts (temporal dead zones, etc.) to be learned by a developer who just wants default parameter values.
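Under that desugaring, defaulting keys off the actual argument count rather than the argument's value, which is observable. A hand-written sketch of the proposed desugaring for f(x = "fallback"):

```javascript
// arguments.length-based defaulting, as in the proposed desugaring.
function f(x) {
  if (arguments.length < 1) x = "fallback";
  return x;
}

f();          // "fallback"
f(undefined); // undefined -- an explicit undefined does NOT trigger the default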
We also felt that if you use default or rest parameters we should consider that as an opt-out from the magic arguments object, and just poison the arguments identifier entirely (no read, write, or declare).
On September 13, 2013, at 6:11 PM, Oliver Hunt <oliver at apple.com> wrote:
Okay so we were discussing among ourselves, and we thought that a sane "desugaring" for
function f(x=expr) { ... }
would be
function f(x) { if (arguments.length < 1) x=expr; ... }
ES6 has two things that go against this, already in consensus among those at the meetings (we miss you shiny fruit-company people ;-):
- You can have default parameters before parameters without defaults.
- You can pass undefined to trigger defaulting -- this is important for composition / delegation.
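The composition/delegation point is that a wrapper can blindly forward parameters it did not receive and still get the callee's defaults. A sketch (names hypothetical):

```javascript
function area(w, h = w) { return w * h; }

// A delegating wrapper forwards both parameters; when its caller omits h,
// the explicit undefined it forwards still triggers area's default.
function scaledArea(scale, w, h) { return scale * area(w, h); }

area(3, undefined); // 9  -- explicit undefined triggers h's default
scaledArea(2, 3);   // 18 -- delegation works without arity juggling
```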
I think we all agree that new formal parameter syntax gets rid of arguments. Yay!
The real issue that Andreas's proposal addressed was the insanity of body declarations hoisting to be in view of default. But it sounds like we do not have consensus on whether or not that's crazy.
This would produce a consistent and easy to follow semantic for default evaluation, it doesn't introduce any new types of scope, nor does it require any new concepts (temporal dead zones, etc) to be learned by a developer who just wants default parameter values.
The hoisting issue still hits your non-consensus actual parameter count (arguments.length) thing, though. When reading the function in source order, one sees a default expression. Mentally moving the defaulting to the body means that hoisted body decls trump outer bindings for names in the expression, indeed. But mentally moving is the objectionable part -- cognitive load, and non-source-order evaluation (more hoisting).
On Sep 13, 2013, at 9:18 AM, Brendan Eich <brendan at mozilla.com> wrote:
On September 13, 2013, at 6:11 PM, Oliver Hunt <oliver at apple.com> wrote:
Okay so we were discussing among ourselves, and we thought that a sane "desugaring" for
function f(x=expr) { ... }
would be
function f(x) { if (arguments.length < 1) x=expr; ... }
ES6 has two things that go against this, already in consensus among those at the meetings (we miss you shiny fruit-company people ;-):
More of the consensus discussion needs to happen on es-discuss; this is the only standards body I'm involved in that seems to require details be hammered out in F2F meetings. It's one of the things I find disappointing about the ES standard - the avoidance of es-discuss when discussing language features makes it actively difficult to get input from the wider dev community.
- You can have default parameters before parameters without defaults.
The spec explicitly states that any parameter without a default, but after another parameter that does have a default assignment, implicitly has a default value of undefined.
- You can pass undefined to trigger defaulting -- this is important for composition / delegation.
No, it's a terrible feature :D I'm unaware of any language that supports arbitrary ordering of arguments.
The elegant solution i saw someone mention was
function f({option1, option2, option3} = {option1: defaultOption1, ...}) { … }
That gives you nicely named parameters! (It was this example that made me start looking at default parameters in the first place)
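A runnable variant of that named-parameters pattern, using per-property defaults rather than a single whole-object default (names hypothetical):

```javascript
// Destructured options parameter; the `= {}` makes the whole argument
// optional, while each property gets its own default.
function connect({ host = "localhost", port = 8080 } = {}) {
  return host + ":" + port;
}

connect();               // "localhost:8080"
connect({ port: 3000 }); // "localhost:3000"
```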
I think we all agree that new formal parameter syntax gets rid of arguments. Yay!
If only we could kill it everywhere! My proposal was to simply ban arguments as an identifier entirely
The real issue that Andreas's proposal addressed was the insanity of body declarations hoisting to be in view of default. But it sounds like we do not have consensus on whether or not that's crazy.
This would produce a consistent and easy to follow semantic for default evaluation, it doesn't introduce any new types of scope, nor does it require any new concepts (temporal dead zones, etc) to be learned by a developer who just wants default parameter values.
The hoisting issue still hits your non-consensus actual parameter count (arguments.length) thing, though. When reading the function in source order, one sees a default expression. Mentally moving the defaulting to the body means that hoisted body decls trump outer bindings for names in the expression, indeed. But mentally moving is the objectionable part -- cognitive load, and non-source-order evaluation (more hoisting).
Hoisting need not perpetuate itself this way. We can do better, or at least Andreas proposes that we do. I'm sympathetic.
Temporal dead zones are indeed weirder but perhaps pay for themselves by catching errors. But is it an error to use an outer name binding in the default expression, given its position in the formal parameter list? I think not.
The problem with temporal dead zones is that they lead to weird behaviour in edge cases, and almost all of them should be syntactically identifiable as errors up front. The problem is that you can only almost get syntax-checked behaviour up front, because of eval.
We also felt that if you use default or rest parameters we should consider that as an opt-out from the magic arguments object, and just poison the arguments identifier entirely (no read, write, or declare).
SpiderMonkey's prototype implementation does this, and I asserted above that ES6 specs it. Not so?
The ES6 spec (as far as I can read it) only bans declaration of parameters/vars named "arguments"; it doesn't ban reading them.
On 13 September 2013 18:39, Oliver Hunt <oliver at apple.com> wrote:
The problem with temporal dead zones is that they lead to weird behaviour in edge cases, and almost all of them should be syntactically identifiable as errors up front. The problem is that you can only almost get syntax-checked behaviour up front, because of eval.
There must be a misunderstanding -- eval is not relevant to the issue that TDZ is addressing. The issue is mutual recursion between bindings that can perform arbitrary (higher-order) computations. In that situation, it is generally impossible to ensure the absence of errors without a type system. And even with one, it is tedious to the degree that it is arguably impractical. So, as long as JavaScript allows that (and I don't see how we can change that), TDZ is the best/only thing we can do.
What "weird edge case" do you have in mind? I cannot imagine anything that isn't even weirder without TDZ.
On Sep 13, 2013, at 9:58 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 13 September 2013 18:39, Oliver Hunt <oliver at apple.com> wrote:
The problem with temporal dead zones is that they lead to weird behaviour in edge cases, and almost all of them should be syntactically identifiable as errors up front. The problem is that you can only almost get syntax-checked behaviour up front, because of eval.
There must be a misunderstanding -- eval is not relevant to the issue that TDZ is addressing. The issue is mutual recursion between bindings that can perform arbitrary (higher-order) computations. In that situation, it is generally impossible to ensure the absence of errors without a type system. And even with one, it is tedious to the degree that it is arguably impractical. So, as long as JavaScript allows that (and I don't see how we can change that), TDZ is the best/only thing we can do.
What "weird edge case" do you have in mind? I cannot imagine anything that isn't even weirder without TDZ.
It isn't eval doing anything magic here - you can syntactically identify any use of not-yet-live bindings, except through eval(), e.g.
function f1(a=b, b=...) ...                        // can syntax error easily
function f2(a=function () {return b}, b=...) ...   // again you can syntax error, or you can be clever and say we're not calling it and not error
function f3(a=function () {return b}(), b=...) ... // syntax error as we can prove you're going to use b
function f4(a=eval("b"), b=...) ...                // eval makes things opaque
Personally I don't believe TDZs are a good solution - they essentially add another null/undefined value internally, and they make things more complex for developers.
I certainly don't get what the TDZ win is for default parameters; an example would help me reason about it.
On 13 September 2013 19:18, Oliver Hunt <oliver at apple.com> wrote:
On Sep 13, 2013, at 9:58 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 13 September 2013 18:39, Oliver Hunt <oliver at apple.com> wrote:
The problem with temporal dead zones is that they lead to weird behaviour in edge cases, and almost all of them should be syntactically identifiable as errors up front. The problem is that you can only almost get syntax-checked behaviour up front, because of eval.
There must be a misunderstanding -- eval is not relevant to the issue that TDZ is addressing. The issue is mutual recursion between bindings that can perform arbitrary (higher-order) computations. In that situation, it is generally impossible to ensure the absence of errors without a type system. And even with one, it is tedious to the degree that it is arguably impractical. So, as long as JavaScript allows that (and I don't see how we can change that), TDZ is the best/only thing we can do.
What "weird edge case" do you have in mind? I cannot imagine anything that isn't even weirder without TDZ.
It isn't eval doing anything magic here - you can syntactically identify any use of not-yet-live bindings, except through eval(), e.g.
function f1(a=b, b=...) ...                        // can syntax error easily
function f2(a=function () {return b}, b=...) ...   // again you can syntax error, or you can be clever and say we're not calling it and not error
function f3(a=function () {return b}(), b=...) ... // syntax error as we can prove you're going to use b
function f4(a=eval("b"), b=...) ...                // eval makes things opaque
Personally I don't believe TDZs are a good solution - they essentially add another null/undefined value internally, and they make things more complex for developers.
I certainly don't get what the TDZ win is for default parameters; an example would help me reason about it.
OK, I assumed you were talking about TDZ in general, and not just for the specific case of parameter lists. For parameter lists, I agree that there is no reason to treat them as mutually recursive. However, if scoping of bindings is supposed to be sequential, but each default expression shall see the previous parameters, then the only clean solution is to conceptually have each parameter open a new nested scope (as is the case in some other languages). That solves your 'eval' case as well. I'd be happy with that, but I remember concerns about more fine-grained scoping on the committee.
As for TDZ in general, it is expressly not the case that it introduces another kind of nullish value -- because there is no value. Quite the contrary: its purpose is to avoid accidentally leaking nullish error values into arbitrary computations, like it is plaguing JavaScript elsewhere. Instead, a recursive initialisation error manifests immediately.
Also note again that there is no sane alternative for 'const' and other immutable bindings, nor for a potential 'let' with guards. And inconsistency is the worst possible choice.
On Sep 13, 2013, at 11:09 AM, Andreas Rossberg <rossberg at google.com> wrote:
OK, I assumed you were talking about TDZ in general, and not just for the specific case of parameter lists. For parameter lists, I agree that there is no reason to treat them as mutually recursive. However, if scoping of bindings is supposed to be sequential, but each default expression shall see the previous parameters, then the only clean solution is to conceptually have each parameter open a new nested scope (as is the case in some other languages). That solves your 'eval' case as well. I'd be happy with that, but I remember concerns about more fine-grained scoping on the committee.
Introducing a new scope for each argument could be extraordinarily expensive, and it would also make some semantics weird:
function f(a=1, b=function(){return a}) {
  var a = 2;
  return b(); // What's this?
}
If we say sequential scoping, it becomes inconsistent for b to return 1, but the parameter shadowing of var means that it should.
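For what it's worth, the semantics ES6 eventually shipped resolves this by giving a parameter list that contains defaults its own environment, separate from the body's var scope; b's closure sees the parameter a, untouched by the body's var. A minimal sketch, runnable in any ES6 engine:

```javascript
function f(a = 1, b = function () { return a; }) {
  var a = 2;    // creates a separate body binding; the parameter a stays 1
  return b();   // b closed over the parameter scope, so this returns 1
}

f(); // 1
```

So b returns 1 here, not 2, without introducing a new scope per parameter.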
Oliver Hunt <oliver at apple.com> September 13, 2013 6:39 PM
On Sep 13, 2013, at 9:18 AM, Brendan Eich <brendan at mozilla.com> wrote:
Oliver Hunt <oliver at apple.com> September 13, 2013 6:11 PM
Okay, so we were discussing among ourselves, and we thought that a sane "desugaring" for
function f(x=expr) { ... }
would be
function f(x) { if (arguments.length< 1) x=expr; ... } ES6 has two things that go against this, already in consensus among those at the meetings (we miss you shiny fruit-company people ;-): More of the consensus discussion need to happen on es-discuss, this is the only standards body i'm involved in that seems to require details be hammered out in F2F meetings. It's one of the things I find disappointing about the ES standard - the avoidance of es-discuss when discussing language features makes it actively difficult for input from the wider dev community.
Some of us live on es-discuss and make it to only two of three days of TC39 meetings. Others work very hard during meetings taking notes. I think we could do better, but (here you go :-/) you need to pay attention more, because we've been over this decision on es-discuss. See
et seq. in the thread.
So if you missed it in es-discuss, no foul -- that happens. But don't blame TC39!
- You can have default parameters before parameters without defaults. The spec explicitly states that any parameter without a default that comes after a parameter with a default assignment implicitly has a default value of undefined.
That's not the issue, and it's vacuously true of JS today. The counterexample you seek is
function foo(x = 42, y) { return [x, y]; }
foo(undefined, 99); // ES6 draft says [42, 99]
Your proposed arguments.length test would (were arguments allowed in a function with new formal parameter syntax) fail to default x to 42 in that foo call.
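The divergence is easy to see side by side. The first function below uses ES6 defaulting; the second is the arguments.length-based desugaring sketched above (a hypothetical, and only expressible where arguments is still allowed):

```javascript
// ES6 semantics: an undefined actual triggers defaulting, position by position.
function fooES6(x = 42, y) {
  return [x, y];
}

// The arguments.length-based desugaring (hypothetical sketch):
function fooLen(x, y) {
  if (arguments.length < 1) x = 42;
  return [x, y];
}

fooES6(undefined, 99); // [42, 99] -- the default fires
fooLen(undefined, 99); // [undefined, 99] -- arguments.length is 2, default skipped
```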
- You can pass undefined to trigger defaulting -- this is important for composition / delegation.
No, it's a terrible feature :D I'm unaware of any language that supports arbitrary ordering of arguments.
No one said arbitrary ordering. Are you thinking of keyword (named) parameter to actual argument binding? No one proposed that for JS; object literals sucked all the air from any such would-be feature.
The elegant solution i saw someone mention was
function f({option1, option2, option3} = {option1: defaultOption1, ...}) { … }
That gives you nicely named parameters! (It was this example that made me start looking at default parameters in the first place)
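A working ES6 form of that options-object idiom puts the defaults in the destructuring pattern itself (the names here are illustrative, not from the thread):

```javascript
// Each option defaults independently; the `= {}` makes the whole
// options argument optional as well.
function f({option1 = "a", option2 = "b", option3 = "c"} = {}) {
  return [option1, option2, option3];
}

f();               // ["a", "b", "c"] -- every option defaulted
f({option2: "x"}); // ["a", "x", "c"] -- named, order-independent
```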
Yup, I think you misread me.
The undefined-as-defaulting-sentinel decision goes back to the 2012 es-discuss thread cited above, ratified by TC39 at at least this meeting, and reconfirmed many times:
I think we all agree that new formal parameter syntax gets rid of arguments. Yay!
If only we could kill it everywhere! My proposal was to simply ban arguments as an identifier entirely
Wat.
We can't break JS. Are you thinking we have opt-in versioning? We don't. 1JS, dude.
Temporal dead zones are indeed weirder but perhaps pay for themselves by catching errors. But is it an error to use an outer name binding in the default expression, given its position in the formal parameter list? I think not.
The problem with temporal dead zones is that they lead to weird behaviour in edge cases, and almost all of them should be syntactically identifiable as errors up front. The problem is that you can only almost get syntax-checked behaviour up front, because of eval.
Still, they're in ES6 -- again, across many es-discuss threads and what seems like at least every July TC39 meeting for two years running, we've reconfirmed/rejustified this.
We also felt that if you use default or rest parameters we should consider that as an opt-out from the magic arguments object, and just poison the arguments identifier entirely (no read, write, or declare)
SpiderMonkey's prototype implementation does this, and I asserted above that ES6 specs it. Not so?
The ES6 spec (as far as I can read it) only bans declaration of parameters/vars named "arguments"; it doesn't ban reading them.
Are you talking about strict-mode code? Sloppy mode allows arguments as a name, for backward compatibility.
Allen ack'ed in this thread that we have a draft bug: just a missing static semantics check where new formal parameter syntax is used to proscribe arguments usage in the body.
On 13 September 2013 20:17, Oliver Hunt <oliver at apple.com> wrote:
Introducing a new scope for each argument could be extraordinarily expensive, and it would also make some semantics weird:
function f(a=1, b=function(){return a}) {
  var a = 2;
  return b(); // What's this?
}
The semantics we tentatively agreed upon at the November meeting would make this return 1 either way, since it would be roughly equivalent to
function f(a1, b1) {
  let a = (a1 !== undefined) ? a1 : 1;
  let b = (b1 !== undefined) ? b1 : function(){return a};
  return function(a, b){
    var a = 2;
    return b();
  }.call(this, a, b);
}
That is, default arguments are evaluated as if you had written a wrapper function that computes them. Note that this answer is independent of the relative scoping of parameters.
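That wrapper desugaring can be run directly in an ES6 engine to check the answer for the example in question (a sketch, spelled out with semicolons):

```javascript
function f(a1, b1) {
  let a = (a1 !== undefined) ? a1 : 1;
  let b = (b1 !== undefined) ? b1 : function () { return a; };
  return function (a, b) {
    var a = 2;    // shadows only the inner parameter a
    return b();   // b still closes over the wrapper's a, which is 1
  }.call(this, a, b);
}

f(); // 1
```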
We have discussed various semantics, and weird examples with closures, and IIRC eventually came to the conclusion that closures in default arguments are pretty much a lost cause, since any semantics will have pathological cases given some of the ES5 rules we have to be backwards-compatible with.
If we say sequential scoping, it becomes inconsistent for b to return 1, but the parameter shadowing of var means that it should.
I'm not following you here. Did you mean 2 instead of 1?
- You can pass undefined to trigger defaulting -- this is important for composition / delegation.
No, it's a terrible feature :D I'm unaware of any language that supports arbitrary ordering of arguments.
FWIW, ColdFusion allows calling functions with named parameters in an arbitrary order.[1] This has one benefit: you can simply skip arguments in the middle instead of having to provide a default value for them in the call.
Transferred to JS syntax that would be:
function sayHello(a, b=true, c="people") {
  return b ? a + " " + c : a;
}
sayHello(a="Hey", c="there!"); // returns "Hey there!"
(Excuse the poor example, but it should be enough to understand the idea behind it.)
Sebastian
Right.
Just to be super-clear, nothing like this is proposed for JS, for the reason I gave.
- You can have default parameters before parameters without defaults. The spec explicitly states that any parameter without a default that comes after a parameter with a default assignment implicitly has a default value of undefined.
As an aside: This can be useful. For example:
function getRandomInteger(lower = 0, upper) {
return Math.floor(Math.random() * (upper - lower)) + lower;
}
Implementing this function in some other manner is much more complicated.
But I agree that you want to name parameters if there are more than 2.
On Fri, Sep 13, 2013 at 2:52 PM, Sebastian Zartner <sebastianzartner at gmail.com> wrote:
- You can pass undefined to trigger defaulting -- this is important for composition / delegation.
No, it's a terrible feature :D I'm unaware of any language that supports arbitrary ordering of arguments.
FWIW, ColdFusion allows calling functions with named parameters in an arbitrary order.[1] This has one benefit: you can simply skip arguments in the middle instead of having to provide a default value for them in the call.
A likely-more-familiar example is Python, which is the same - you can provide any argument by name, in any order. Actually, any language I can think of that lets you pass arguments by name allows this, so I've no idea what Oliver is going on about. (As Brendan has said, JS won't grow named arguments a la Python, but its equivalent functionality - using an options object + destructuring - also allows passing the arguments in arbitrary order.)
Furthermore, as Brendan pointed out, this statement has nothing to do with named arguments; it's about argument defaulting. When creating "wrapper" functions, if the wrapped function has default values for some arguments, you need a succinct way to say "I don't actually want to pass this argument, even though I'm giving you a value - just use the default value instead".
If you don't have this, you end up having to create a combinatorial explosion of call paths for the wrapped function, one for each possible set of defaulted arguments. I know how frustrating this is first-hand, because Common Lisp has several functions that act differently based on whether an argument is passed or not, and wrapping them is extremely clumsy and frustrating.
To get around it, JS has defined that passing undefined triggers defaulting behavior, identical to not passing any value at all. This lets the wrapper not provide any defaults at all and just do a pass-through of its arguments - if the caller didn't pass a given argument, the wrapper will get undefined in that arg and pass it on to the wrapped function, which will then do its proper defaulting behavior.
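A small sketch of that pass-through pattern (the function names here are illustrative, not from the thread):

```javascript
function greet(name, greeting = "hello") {
  return greeting + ", " + name;
}

// The wrapper needs no knowledge of greet's defaults: a missing actual
// arrives as undefined and re-triggers defaulting in the wrapped function.
function loudGreet(name, greeting) {
  return greet(name, greeting).toUpperCase();
}

loudGreet("world");       // "HELLO, WORLD" -- greeting defaulted downstream
loudGreet("world", "hi"); // "HI, WORLD"
```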
On Fri, Sep 13, 2013 at 3:06 PM, Axel Rauschmayer <axel at rauschma.de> wrote:
- You can have default parameters before parameters without defaults.
The spec explicitly states that any parameter without a default that comes after a parameter with a default assignment implicitly has a default value of undefined.
As an aside: This can be useful. For example:
function getRandomInteger(lower = 0, upper) {
  return Math.floor(Math.random() * (upper - lower)) + lower;
}
Implementing this function in some other manner is much more complicated.
This doesn't do what you appear to think it does. In particular, calling it with "getRandomInteger(5)" is not equivalent to calling it with "getRandomInteger(0, 5)", it's equivalent to calling it with "getRandomInteger(5, undefined)".
This is precisely what the spec section you've quoted says, so I'm not sure how you're confused on this point.
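Concretely, under the draft semantics the one-argument call produces NaN rather than a random integer below 5, because upper stays undefined:

```javascript
function getRandomInteger(lower = 0, upper) {
  return Math.floor(Math.random() * (upper - lower)) + lower;
}

Number.isNaN(getRandomInteger(5)); // true -- upper is undefined, not 5
getRandomInteger(0, 5);            // an integer in [0, 5), as intended
```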
function getRandomInteger(lower = 0, upper) {
  return Math.floor(Math.random() * (upper - lower)) + lower;
}
This doesn't do what you appear to think it does. In particular, calling it with "getRandomInteger(5)" is not equivalent to calling it with "getRandomInteger(0, 5)", it's equivalent to calling it with "getRandomInteger(5, undefined)".
This is precisely what the spec section you've quoted says, so I'm not sure how you're confused on this point.
Thanks! I didn’t look at the spec section, I (incorrectly) assumed I knew how they worked. Then I don’t see a use case for this, I’d find it confusing.
Axel Rauschmayer <axel at rauschma.de> September 14, 2013 12:36 AM
Thanks! I didn’t look at the spec section, I (incorrectly) assumed I knew how they worked. Then I don’t see a use case for this, I’d find it confusing.
What alternative do you want?
- Parameter default value before default-free parameter means call without arity match throws?
- What Oliver may have feared in switching to named parameters: actual argument list shorter than formal list with default on left causes parameter binding to skip defaults? This is not coherent:
function foo(x, y = a, z = y) {...}
foo()        // x=undefined, y=a, z=a
foo(1)       // x=1, y=a, z=a
foo(1, 2)    // x=1, y=a, z=2???
foo(1, 2, 3) // x=1, y=2, z=3
- What ES6 draft says.
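For contrast, the ES6 draft's purely positional binding gives the unsurprising table below (here `a` is just some outer binding standing in for the free variable in the default expression):

```javascript
const a = 4; // stand-in for the outer binding referenced by y's default
function foo(x, y = a, z = y) { return [x, y, z]; }

foo();        // [undefined, 4, 4]
foo(1);       // [1, 4, 4]
foo(1, 2);    // [1, 2, 2] -- y is bound positionally, never skipped
foo(1, 2, 3); // [1, 2, 3]
```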
On Sep 13, 2013, at 7:19 PM, Brendan Eich <brendan at mozilla.com> wrote:
Axel Rauschmayer <mailto:axel at rauschma.de> September 14, 2013 12:36 AM
Thanks! I didn’t look at the spec section, I (incorrectly) assumed I knew how they worked. Then I don’t see a use case for this, I’d find it confusing.
What alternative do you want?
Parameter default value before default-free parameter means call without arity match throws?
What Oliver may have feared in switching to named parameters: actual argument list shorter than formal list with default on left causes parameter binding to skip defaults? This is not coherent:
function foo(x, y = a, z = y) {...}
foo()     // x=undefined, y=a, z=a
foo(1)    // x=1, y=a, z=a
foo(1, 2) // x=1, y=a, z=2???
Why the ???
This makes sense, and it is what other languages with default values do
What alternative do you want?
Parameter default value before default-free parameter means call without arity match throws?
What Oliver may have feared in switching to named parameters: actual argument list shorter than formal list with default on left causes parameter binding to skip defaults? This is not coherent:
function foo(x, y = a, z = y) {...}
foo()        // x=undefined, y=a, z=a
foo(1)       // x=1, y=a, z=a
foo(1, 2)    // x=1, y=a, z=2???
foo(1, 2, 3) // x=1, y=2, z=3
- What ES6 draft says.
The only things I can see are two mutually exclusive modes:
- Trailing optional positional parameters.
- Leading optional positional parameters.
There are a few cases where #2 would be useful. I don't see other alternatives that make sense. But that may be due to a lack of understanding of why one would want default values in an order that is neither #1 nor #2.
Axel
Oliver Hunt wrote:
function foo(x, y = a, z = y) {...}
foo()     // x=undefined, y=a, z=a
foo(1)    // x=1, y=a, z=a
foo(1, 2) // x=1, y=a, z=2???
Why the ???
This makes sense, and it is what other languages with default values do
What other languages?
Say a = 4 outside. Then
foo(1, 2)
should bind x = 1, y = 2, and z = undefined -- it should not skip y just because y has a parameter default value and bind 2 to z.
Brendan Eich <brendan at mozilla.com> September 15, 2013 7:22 PM
What other languages?
Say a = 4 outside. Then
foo(1, 2)
should bind x = 1, y = 2, and z = undefined
Sorry, I botched the example.
Remind me not to mail while jetlagged.
Trying again:
a = 4
b = 5
function foo(x, y = a, z = b) {...}
foo(1, 2)
My point was that simply because 2 of 3 arguments were supplied and y has a default, we should not skip it and bind 2 to z. In this fixed example, z defaults to b = 5, no other choice. Right?
IOW, actual-to-formal parameter binding in JS is always positional, with undefined triggering defaulting, and (therefore) missing actuals triggering defaulting.
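The fixed example, run under those rules (a sketch in plain ES6):

```javascript
const a = 4, b = 5;
function foo(x, y = a, z = b) { return [x, y, z]; }

foo(1, 2);            // [1, 2, 5] -- y takes 2 positionally; only z defaults
foo(1, undefined, 3); // [1, 4, 3] -- an explicit undefined re-triggers y's default
```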
Umm... I don't know if it's needed, but named parameters could be a special case; that is, the normal parameters have no name, but you can invoke the function with
f(1,2, x: 3, z: 4);
and use them:
function (arg1, arg2) {
  var x = arguments.x;
  var y = arguments.y;
  ...
}
AFAIR, in V8, arguments is an object, not an array. So, it can be used for named parameters, a la Ruby.
Angel "Java" Lopez @ajlopez gh:ajlopez
Angel Java Lopez <ajlopez2000 at gmail.com> September 15, 2013 7:43 PM
Umm... I don't know if it's needed, but named parameters could be a special case; that is, the normal parameters have no name, but you can invoke the function with
f(1,2, x: 3, z: 4);
Again, we are not going there. Object literals suck all the air away.
AFAIR, in V8, arguments is an object, not an array.
V8, Schmee-8 :-|.
The ECMA-262 spec says exactly what arguments is, and V8 follows the spec AFAIK.
So, it can be used for named parameters, a la Ruby.
Not going there either. The arguments object is a botch; in ES6, when you use the new forms you lose it in favor of those forms.
Andreas Rossberg <rossberg at google.com> writes:
A spec is a system which (a) wants to be transparent (as opposed to encapsulated), and (b) is pretty much closed-world (extension points notwithstanding). Consequently, none of the advantages of OO apply, and it becomes primarily a barrier.
(I'm not proposing any change to that at this stage, obviously. But it is a growing technical debt that we might consider clearing at some point.)
If it's helpful: some of us at Imperial and Inria have been working on a machine-readable (using the Coq theorem prover) operational semantics for ES5. At the moment we think that the ES5 standard is usually easier to read for intuition, but we hope that the Coq code will in time be of use to readers looking for a certain kind of precision.
The project also comes with a reference interpreter, which we prove correct according to our operational semantics and test using test262. There's a good video intro to the project here:
air.mozilla.org/trusted-javascript-spec
...and I'm always happy to chat.
Thanks,
Gareth