An array destructuring specification choice

# Allen Wirfs-Brock (13 years ago)

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

# Allen Wirfs-Brock (13 years ago)

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer, and if that is the case, consistency requires that for the first problem z gets the value undefined.
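
For concreteness, here is how the two candidate answers fall out in ES5 terms; the slice call below sketches the length-driven reading and is not the proposed spec text:

var rhs = {0:0, 1:1, 2:2, length: 3, 3:3, 4:4};
// length-driven, as the existing generics do it: rest covers indices 2 through length-1
Array.prototype.slice.call(rhs, 2); // [2]
// if every integer-keyed property counted instead, the rest would be [2, 3, 4]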

# Dmitry Soshnikov (13 years ago)

On 05.11.2011 20:28, Allen Wirfs-Brock wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

Since arrays may participate in the same contexts as plain objects, e.g. in a for-in loop or via generic methods (e.g. o = {push: [].push}; o.push(1); o.length is created and is 1), the example can be (by logic) correct.
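
Spelled out, the push example runs as follows in any ES5 engine; push is generic, so it reads and writes 'length' on a plain object:

var o = { push: [].push };
o.push(1);
o.length; // 1 -- push created 'length'
o[0];     // 1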

Besides, since the destructuring pattern matching isn't strict (i.e. it's used exactly and only for destructuring), the example is also OK: it doesn't say "we match an array", we only destructure a complex structure into some less complex parts, and if the pattern on the left can manage it, it's OK.

Of course, if this were a case of strict pattern matching (as, e.g., in Erlang), then it should be an error.

IMO.

Dmitry.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 9:28 AM, Allen Wirfs-Brock wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

What I implemented long ago in SpiderMonkey based on the ES4 proposal gives z=2. I still think that's the best answer.

Destructuring is "irrefutable" in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable (no one mention Erlang -- oops, Dmitry did; ok, refutable match is a separate beast, proposed but deferred: wiki.ecmascript.org/doku.php?id=strawman:pattern_matching). It should not impose any particular constraints on the RHS object based on the LHS pattern.

In this case, the RHS object is not consistent with Array invariants anyway. That means as much, if not more, than the array pattern. Someone may have set it up that way for a reason. It could have come from a JSON deserialization. The array pattern should not check for 'length' and enforce Array invariants that do not apply inherently on the RHS.
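
Concretely, here is what the desugaring view implies for the example at hand; the temporary t is illustrative, not spec text:

var t = {0:0, 1:1, length: 2, 2:2};
var z = t[0]; // 0
var y = t[1]; // 1
z = t[2];     // 2 -- plain property gets; the duplicated z means the last get wins, and no 'length' check happens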

# Dmitry Soshnikov (13 years ago)

On 05.11.2011 20:38, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer,

How so? If ...n is the range slice operator, it should be [2, 3, 4], no?

Dmitry.

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 9:38 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:28, Allen Wirfs-Brock wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

Since arrays may participate in the same contexts as plain objects, e.g. in a for-in loop or via generic methods (e.g. o = {push: [].push}; o.push(1); o.length is created and is 1), the example can be (by logic) correct.

The question isn't whether or not the example is correct. It is already determined that it is valid syntax. The question is about semantics. Is "array" destructuring of an "array-like" object controlled by the "length" property of the object?

Besides, since the destructuring pattern matching isn't strict (i.e. it's used exactly and only for destructuring), the example is also OK: it doesn't say "we match an array", we only destructure a complex structure into some less complex parts, and if the pattern on the left can manage it, it's OK.

Personally, I dislike calling this style of destructuring (as defined in JS) "pattern matching" because that makes it sound like it is doing much more than it really is.

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 9:45 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:38, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer,

How so? If ...n is the range slice operator, it should be [2, 3, 4], no?

How do you define the range of "array-like elements" of an object? In all other places of the ES specification, the value of the "length" property is used to determine the upper bound of the elements, and any "integer keyed" elements beyond the length are ignored by array operations and functions.
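
For example, the existing ES5 built-ins already treat 'length' as the upper bound on this very object:

var obj = {0:0, 1:1, 2:2, length: 3, 3:3, 4:4};
Array.prototype.slice.call(obj, 2);   // [2] -- properties "3" and "4" ignored
Array.prototype.indexOf.call(obj, 4); // -1 -- index 4 is beyond length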

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 9:44 AM, Brendan Eich wrote:

On Nov 5, 2011, at 9:28 AM, Allen Wirfs-Brock wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

What I implemented long ago in SpiderMonkey based on the ES4 proposal gives z=2. I still think that's the best answer.

Destructuring is "irrefutable" in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable (no one mention Erlang -- oops, Dmitry did; ok, refutable match is a separate beast, proposed but deferred: wiki.ecmascript.org/doku.php?id=strawman:pattern_matching). It should not impose any particular constraints on the RHS object based on the LHS pattern.

In this case, the RHS object is not consistent with Array invariants anyway. That means as much, if not more, than the array pattern. Someone may have set it up that way for a reason. It could have come from a JSON deserialization. The array pattern should not check for 'length' and enforce Array invariants that do not apply inherently on the RHS.

But all other array operators and functions use "length" to limit their bounds when dealing with array-like objects. Also, via inheritance a real array can acquire an integer keyed property that is beyond its length bound:

let [z,y,z] = {2:2} <| [0,1];

# Axel Rauschmayer (13 years ago)

To me the lhs indicates: array. Thus, the rhs is a pseudo-array. When used as such we get:

[].join.call({0:0, 1:1, length: 2, 2:2}, "-") // '0-1'

Thus, 2 is invisible. That's what I would expect, too, intuitively. I'd think that the assignment should reflect that (thus: z = undefined).

# John J Barton (13 years ago)

On Sat, Nov 5, 2011 at 9:28 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

(Oh great more features that turn programming into little brain teasers :-( )

Devs need an algorithm:

let _lhs = [];
let _rhs = {0:0, 1:1, length: 2, 2:2};
_lhs[0] = _rhs[0];
_lhs[1] = _rhs[1];
_lhs[2] = _rhs[2];
let z = _lhs[0];
let y = _lhs[1];
let z = _lhs[2];

So z would be 2.

I guess the algorithm can be shorter:

let _rhs = {0:0, 1:1, length: 2, 2:2};
let z = _rhs[0];
let y = _rhs[1];
let z = _rhs[2];

I don't understand what "the source object only has two "array-like" elements" can mean.

jjb

# Lasse Reichstein (13 years ago)

On Sat, Nov 5, 2011 at 5:28 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

(I assume the pattern should be [x,y,z], not [z,y,z], or am I missing a point?)

should it be 2 or undefined?

If it does anything at all (and not, say, throw a TypeError because the RHS isn't an Array, which it won't), I'd go for 2.

There is no reason to involve RHS.length at all. The LHS isn't an array, so why should it require array-like-ness of the RHS? The LHS together with the assignment operator is a construct that binds the properties "0", "1", and "2" of the object on the RHS to the variables "x", "y", and "z". It should be equivalent to the pattern {0:x, 1:y, 2:z}.
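
Concretely, the claimed equivalence is just a sequence of property gets (a sketch, not spec text):

var rhs = {0:0, 1:1, length: 2, 2:2};
// [x, y, z] = rhs read as {0:x, 1:y, 2:z} = rhs:
var x = rhs[0]; // 0
var y = rhs[1]; // 1
var z = rhs[2]; // 2 -- 'length' never consulted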

What if the RHS doesn't have a length property at all? Or it has one with a value that isn't convertible to a number? No need for that complexity.

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

Mine, obviously. At least the least surprising to me. I've learned not to generalize from that.

# Dmitry Soshnikov (13 years ago)

On 05.11.2011 20:49, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:38 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:28, Allen Wirfs-Brock wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

Since arrays may participate in the same contexts as plain objects, e.g. in a for-in loop or via generic methods (e.g. o = {push: [].push}; o.push(1); o.length is created and is 1), the example can be (by logic) correct.

The question isn't whether or not the example is correct. It is already determined that it is valid syntax. The question is about semantics. Is "array" destructuring of an "array-like" object controlled by the "length" property of the object?

I've perfectly understood what the question is about. Yes, I think it's the correct semantics (this is what I meant by "correct by logic"). This is of course only because some existing JS semantics already has similar behavior.

# Dmitry Soshnikov (13 years ago)

On 05.11.2011 20:54, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:45 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:38, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer,

How so? If ...n is the range slice operator, it should be [2, 3, 4], no?

How do you define the range of "array-like elements" of an object? In all other places of the ES specification, the value of the "length" property is used to determine the upper bound of the elements, and any "integer keyed" elements beyond the length are ignored by array operations and functions.

Let's see.

What is the result in case of:

let a = [0, 1, 2, 3, 4]; a.length = 6;

let [z, y, ...r] = a;

If ...r depends on the length of a, then r is [2, 3, 4, undefined]; if not, then r just takes the existing values (as some array methods do, e.g. forEach or map), and it's [2, 3, 4]. In the latter case, the simple object semantics obeys this rule.
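
For reference, this is how the existing methods mentioned above treat the hole that extending length creates (ES5 behavior):

var a = [0, 1, 2, 3, 4];
a.length = 6; // a[5] is now a hole
var visited = [];
a.forEach(function (v, i) { visited.push(i); });
visited; // [0, 1, 2, 3, 4] -- forEach skips the hole
a.map(function (v) { return v; }).length; // 6 -- map preserves length, and the hole stays a hole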

Dmitry.

# Dmitry Soshnikov (13 years ago)

On 05.11.2011 21:04, Dmitry Soshnikov wrote:

On 05.11.2011 20:54, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:45 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:38, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer,

How so? If ...n is the range slice operator, it should be [2, 3, 4], no?

How do you define the range of "array-like elements" of an object? In all other places of the ES specification, the value of the "length" property is used to determine the upper bound of the elements, and any "integer keyed" elements beyond the length are ignored by array operations and functions.

Oh! I missed that the length is 3! Then yes, it seems it should be [2]. Though, I have to think about missing properties also.

Dmitry.

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 9:59 AM, Lasse Reichstein wrote:

On Sat, Nov 5, 2011 at 5:28 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

(I assume the pattern should be [x,y,z], not [z,y,z], or am I missing a point?)

oops...

should it be 2 or undefined?

If it does anything at all (and not, say, throw a TypeError because the RHS isn't an Array, which it won't), I'd go for 2.

There is no reason to involve RHS.length at all. The LHS isn't an array, so why should it require array-like-ness of the RHS? The LHS together with the assignment operator is a construct that binds the properties "0", "1", and "2" of the object on the RHS to the variables "x", "y", and "z". It should be equivalent to the pattern {0:x, 1:y, 2:z}.

Then what are the rules if ... is used on the LHS?

What if the RHS doesn't have a length property at all? Or it has one with a value that isn't convertible to a number? No need for that complexity.

This case is consistently handled throughout the ES spec: ToInteger(obj.[[Get]]("length")) evaluates to 0 if length is missing or has a bogus value.
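
For example, the existing generic slice already behaves this way:

Array.prototype.slice.call({0:0, 1:1, 2:2});        // [] -- no 'length' property
Array.prototype.slice.call({0:0, length: "bogus"}); // [] -- NaN length coerces to 0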

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 10:04 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:54, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:45 AM, Dmitry Soshnikov wrote:

On 05.11.2011 20:38, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

It seems to me that [2] is the right answer,

How so? If ...n is the range slice operator, it should be [2, 3, 4], no?

How do you define the range of "array-like elements" of an object? In all other places of the ES specification, the value of the "length" property is used to determine the upper bound of the elements, and any "integer keyed" elements beyond the length are ignored by array operations and functions.

Let's see.

What is the result in case of:

let a = [0, 1, 2, 3, 4]; a.length = 6;

let [z, y, ...r] = a;

If ...r depends on the length of a, then r is [2, 3, 4, undefined]; if not, then r just takes the existing values (as some array methods do, e.g. forEach or map), and it's [2, 3, 4]. In the latter case, the simple object semantics obeys this rule.

As the draft ES6 spec. is currently written, r will be [2,3,4,,] (r.length = 4). In other words, "holes" are captured by the rest result. This is all pretty consistent with "array" processing throughout the ES spec.
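
A sketch of that behavior as a helper; restCapture is hypothetical, and its length coercion is only an approximation of the spec's:

function restCapture(obj, start) {
  var len = obj.length >>> 0; // crude stand-in for the spec's length coercion
  var result = [];
  result.length = Math.max(0, len - start);
  for (var i = start; i < len; i++) {
    if (i in obj) result[i - start] = obj[i]; // holes stay holes
  }
  return result;
}
var a = [0, 1, 2, 3, 4];
a.length = 6;
restCapture(a, 2); // [2, 3, 4, <hole>], with length 4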

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 9:59 AM, John J Barton wrote:

(Oh great more features that turn programming into little brain teasers :-( )

Devs need an algorithm:

let _lhs = [];
let _rhs = {0:0, 1:1, length: 2, 2:2};
_lhs[0] = _rhs[0];
_lhs[1] = _rhs[1];
_lhs[2] = _rhs[2];
let z = _lhs[0];
let y = _lhs[1];
let z = _lhs[2];

So z would be 2.

I guess the algorithm can be shorter:

let _rhs = {0:0, 1:1, length: 2, 2:2};
let z = _rhs[0];
let y = _rhs[1];
let z = _rhs[2];

I don't understand what "the source object only has two "array-like" elements" can mean.

If you used any of the array functions to process _rhs, they would ignore element 2. For example:

_rhs.forEach(function(v) {print(v)});

would print: 0 1

# Claus Reinke (13 years ago)

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

and there I was, thinking I understood more than half of JS..

Since this was new to me, I referred to the spec. Here <strike>is my reading</strike> are my readings:

The rhs is an Object, but not an Array - in spite of duck typing, the spec's isArray and Object initializer seem clear on this.

If the rhs was an Array, .. object literal instantiation appears to be defined as going from left to right, so the '2:2' would increment 'length'. I haven't checked the destructuring proposal, but suspect if this was valid, the second match for 'z' would just overwrite the first, leading to 'z' being '2'.

However, .. since the rhs isn't an Array, but the lhs matches Arrays, I would (now) expect this to fail (where is real pattern matching with fall-through when you need it?-).

Except.. destructuring isn't even structural matching, it is just a concise description of object/array selectors. And array selectors are just object selectors, no checks for isArray. So the rhs isn't an Array, but the lhs isn't one, either, we just get a sequence of selections, and 'z' will be '2'.

This is way too confusing to be nice, or expected. If the spec hard-codes the just-selectors-no-matching intuition, this version of the spec will have fewer surprises, but yet another syntax will be needed when moving to refutable matching in later versions..

perhaps that's not too bad (except for the fact that I'd like to separate objects and arrays more clearly..).

So, do we have to rename 'destructuring' to 'selector shorthand', with the braces and brackets only defining selection-by-name vs selection-by-index? Or am I way off?-)

Claus

# John J Barton (13 years ago)

On Sat, Nov 5, 2011 at 10:24 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Nov 5, 2011, at 9:59 AM, John J Barton wrote:

(Oh great more features that turn programming into little brain teasers :-( )

Devs need an algorithm:

let _lhs = [];
let _rhs = {0:0, 1:1, length: 2, 2:2};
_lhs[0] = _rhs[0];
_lhs[1] = _rhs[1];
_lhs[2] = _rhs[2];
let z = _lhs[0];
let y = _lhs[1];
let z = _lhs[2];

So z would be 2.

I guess the algorithm can be shorter:

let _rhs = {0:0, 1:1, length: 2, 2:2};
let z = _rhs[0];
let y = _rhs[1];
let z = _rhs[2];

I don't understand what "the source object only has two "array-like" elements" can mean.

If you used any of the array functions to process _rhs, they would ignore element 2.  For example:

_rhs.forEach(function(v) {print(v)});

would print: 0 1

I can see why my version is wrong: I am interpreting square brackets on the RHS like JS devs would.

let z = _rhs[0]; // LHS was array, so get zeroth elt, but RHS is object so property access

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array: let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

This example illustrates (again) why implicit type conversions suck (except to strings ;-).

Can't this just be an error?

jjb

# Till Schneidereit (13 years ago)

On Sat, Nov 5, 2011 at 18:10, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

What if the RHS doesn't have a length property at all? Or it has one with a value that isn't convertible to a number? No need for that complexity.

This case is consistently handled throughout the ES spec: ToInteger(obj.[[Get]]("length")) evaluates to 0 if length is missing or has a bogus value.

So in your favored solution, would the following example result in x, y, and z all being undefined?

let [x,y,z] = {0:0, 1:1, 2:2};

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

Thus: let [x,y,z] = {0:0, 1:1, length:2, 2:2}; results in x === 0, y === 1, z === 2;

and let [x,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4}; results in x === 0, y === 1, r === [2, 3, 4];

The last example hinges on the exact behavior of rest, of course. It feels most natural to me to let it mean something along the lines of "collect all not-yet destructured numeric properties into array 'r'".

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 10:57 AM, Claus Reinke wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

and there I was, thinking I understood more than half of JS..

Since this was new to me, I referred to the spec. Here <strike>is my reading</strike> are my readings:

The rhs is an Object, but not an Array - in spite of duck typing, the spec's isArray and Object initializer seem clear on this.

correct

If the rhs was an Array, ..

but it isn't

object literal instantiation appears to be defined as going from left to right, so the '2:2' would increment 'length'.

I haven't checked the destructuring proposal, but suspect if this was valid, the second match for 'z' would just overwrite the first, leading to 'z' being '2'.

That's why I brought up the issue. The proposal doesn't say. Figuring out edge cases like this is the job of your friendly specification writer (me...)

However, .. since the rhs isn't an Array, but the lhs matches Arrays, I would (now) expect this to fail (where is real pattern matching with fall-through when you need it?-).

In general, JS allows an object with integer property names to be used in any context where an "Array" is expected. This seems like such a context.

Except.. destructuring isn't even structural matching, it is just a concise description of object/array selectors. And array selectors are just object selectors, no checks for isArray. So the rhs isn't an Array, but the lhs isn't one, either, we just get a sequence of selections, and 'z' will be '2'.

Except that the LHS selectors are implicit, so we can define the rules for determining them. Also, we allow ... on the LHS, and that also implies that something is going on besides simple sequential integer selector assignment.

This is way too confusing to be nice, or expected. If the spec hard-codes the just-selectors-no-matching intuition, this version of the spec will have fewer surprises, but yet another syntax will be needed when moving to refutable matching in later versions.. perhaps that's not too bad (except for the fact that I'd like to separate objects and arrays more clearly..).

I favor consistency. Array-like objects should behave consistently in all contexts. To me that says z should get the value undefined.

So, do we have to rename 'destructuring' to 'selector shorthand', with the braces and brackets only defining selection-by-name vs selection-by-index? Or am I way off?-)

I think that is pretty much what the proposed JS destructuring really is. I'm fine with calling it "destructuring".

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 11:01 AM, John J Barton wrote:

I can see why my version is wrong: I am interpreting square brackets on the RHS like JS devs would.

let z = _rhs[0]; // LHS was array, so get zeroth elt, but RHS is object so property access

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array: let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

What do you think this returns:

Array.prototype.pop.call({0:0, 1:1, length: 2, 2:2})
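
For reference, pop is fully specified and generic (ES5 15.4.4.6): it reads 'length', removes the element at index length-1, and writes 'length' back. So here:

var obj = {0:0, 1:1, length: 2, 2:2};
Array.prototype.pop.call(obj); // 1
obj; // {0:0, 2:2, length: 1} -- the property "2" is never consulted
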
# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 11:11 AM, Till Schneidereit wrote:

On Sat, Nov 5, 2011 at 18:10, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

What if the RHS doesn't have a length property at all? Or it has one with a value that isn't convertible to a number? No need for that complexity.

This case is consistently handled throughout the ES spec: ToInteger(obj.[[Get]]("length")) evaluates to 0 if length is missing or has a bogus value.

So in your favored solution, would the following example result in x, y, and z all being undefined?

let [x,y,z] = {0:0, 1:1, 2:2};

yes

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Thus: let [x,y,z] = {0:0, 1:1, length:2, 2:2}; results in x === 0, y === 1, z === 2;

and let [x,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4}; results in x === 0, y === 1, r === [2, 3, 4];

The last example hinges on the exact behavior of rest, of course. It feels most natural to me to let it mean something along the lines of "collect all not-yet destructured numeric properties into array 'r'".

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Allen

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 9:59 AM, John J Barton wrote:

On Sat, Nov 5, 2011 at 9:28 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

In the following declaration, what should be the value of z?

let [z,y,z] = {0:0, 1:1, length: 2, 2:2};

should it be 2 or undefined?

undefined might be reasonable because it is an array pattern, and the source object only has two "array-like" elements. 2 might be reasonable because the source object actually has a property named "2".

Which alternative will be least surprising to JS programmers?

(Oh great more features that turn programming into little brain teasers :-( )

Devs need an algorithm:

Agreed!

let _lhs = [];
let _rhs = {0:0, 1:1, length: 2, 2:2};
_lhs[0] = _rhs[0];
_lhs[1] = _rhs[1];
_lhs[2] = _rhs[2];
let z = _lhs[0];
let y = _lhs[1];
let z = _lhs[2];

So z would be 2.

I guess the algorithm can be shorter:

let _rhs = {0:0, 1:1, length: 2, 2:2};
let z = _rhs[0];
let y = _rhs[1];
let z = _rhs[2];

This is exactly the simple, local transformation we specified for ES4 destructuring (wiki.ecmascript.org/doku.php?id=proposals:destructuring_assignment#notes), first implemented for array patterns by Opera and I hope it carries over. Anything else is more complex, requiring a 'length' check before rhs[i] get.

# Till Schneidereit (13 years ago)

On Sat, Nov 5, 2011 at 19:27, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

ToInteger(obj.[[Get]]("length")) evaluates to 0 if length is missing or has a bogus value.

So in your favored solution, would the following example result in x, y, and z all being undefined?

let [x,y,z] = {0:0, 1:1, 2:2};

yes

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Thus: let [x,y,z] = {0:0, 1:1, length:2, 2:2}; results in x === 0, y === 1, z === 2;

and let [x,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4}; results in x === 0, y === 1, r === [2, 3, 4];

The last example hinges on the exact behavior of rest, of course. It feels most natural to me to let it mean something along the lines of "collect all not-yet destructured numeric properties into array 'r'".

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces  [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Hmm, no. I agree with you: while your proposed destructuring behavior seems unintuitive in isolation (my words, not yours, of course), seen in the full context of other functionality's behavior, it would be more unintuitive for it to behave as I proposed.

It still seems pretty unfortunate to me that we can't have behavior here that is both intuitive and consistent with the rest of the spec.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 9:38 AM, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

The inspiration for ... in the past came from (among other sources) Successor ML:

successor-ml.org/index.php?title=Functional_record_extension_and_row_capture

It seems to me that [2] is the right answer

The issue with ... in an array destructuring pattern is different from the case without. We have a choice, as you say. It's not obvious that doing a "get" of 'length' on the RHS (once per ...) is the right answer. It's plausible in my view that ... captures all indexed properties (index as defined by ECMA-262).

# Erik Arvidsson (13 years ago)

On Sat, Nov 5, 2011 at 11:55, Brendan Eich <brendan at mozilla.com> wrote:

The issue with ... in an array destructuring pattern is different from the case without. We have a choice, as you say. It's not obvious that doing a "get" of 'length' on the RHS (once per ...) is the right answer. It's plausible in my view that ... captures all indexed properties (index as defined by ECMA-262).

// A
var [x, y, z] = {0: 0, 1: 1, 2: 2, length: 0};
$x $y $z // '0 1 2'

// B
var [x, ...xs] = {0: 0, 1: 1, 2: 2, length: 2};
$x [$xs] // '0 [1]'

I think A is clear. It is just syntactic sugar for x = $tmp[0] etc. No need to check length. In this case the lhs drives this decision.

B IMO must check the length or it would have to iterate over all own(?) properties in rhs instead of just iterating over startIndex to length - 1. It is also more consistent with other operations that work on array like objects (slice, apply, splice...).

In Traceur we do the following:

var x, xs;
(function($0) {
  x = $0[0];
  xs = Array.prototype.slice.call($0, 1);
  return $0;
}).call(this, { 0: 0, 1: 1, 2: 2, length: 2 });

I believe this is simpler to understand than to say that all indexed properties are used.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 10:24 AM, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:59 AM, John J Barton wrote:

(Oh great more features that turn programming into little brain teasers :-( )

Devs need an algorithm:

let _lhs = [];
let _rhs = {0:0, 1:1, length: 2, 2:2};
_lhs[0] = _rhs[0];
_lhs[1] = _rhs[1];
_lhs[2] = _rhs[2];
let z = _lhs[0];
let y = _lhs[1];
let z = _lhs[2];

So z would be 2.

I guess the algorithm can be shorter:

let _rhs = {0:0, 1:1, length: 2, 2:2};
let z = _rhs[0];
let y = _rhs[1];
let z = _rhs[2];

I don't understand what "the source object only has two "array-like" elements" can mean.

If you used any of the array functions to process _rhs, they would ignore element 2. For example:

_rhs.forEach(function(v) {print(v)});

would print: 0 1

Array.prototype.forEach.call(_rhs, function(v){...})

This has too many moving parts to be a desugaring.

Also a mandatory [[Get]] of 'length' from _rhs before the evaluation of the array pattern is too much.

I don't see why 'length' needs to come into play unless there's a ... in the pattern, or even then. The alternative is to enumerate keys of _rhs and consider all for which key == ToString(ToUint32(key)).
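
A sketch of that index test; isArrayIndex is a hypothetical helper, not an existing API:

function isArrayIndex(key) {
  // key == ToString(ToUint32(key)), with 2^32-1 excluded as an index per ECMA-262
  return String(key >>> 0) === String(key) && (key >>> 0) !== 0xFFFFFFFF;
}
isArrayIndex("2");      // true
isArrayIndex("02");     // false -- doesn't round-trip
isArrayIndex("length"); // false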

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 11:01 AM, John J Barton wrote:

I can see why my version is wrong

Your interpretation is not wrong.

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array: let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

Definitely agree on that interpretation being mysterious!

This example illustrates (again) why implicit type conversions suck

Yup.

(except to strings ;-).

Even then (the + operator).

Can't this just be an error?

It could but it need not be the tail that wags the dog. Your previous mail showed the obvious desugaring, which was what Lars Thomas Hansen pioneered at Opera, which fed into ES4 and SpiderMonkey JS1.7 and up, and which is still what the harmony:destructuring proposal specs.

The addition of ... into patterns requires an interpretation: error; all index properties, ignoring any 'length' on the RHS; all index properties up to the length value; or others we haven't thought of. Whatever we do, this doesn't break the huge benefits of destructuring.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 11:27 AM, Allen Wirfs-Brock wrote:

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Who says 'length' is used by an array destructuring pattern? That should not be assumed.

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Even if Till would not (and it's clear you would not be fine with this outcome ;-), we have a choice. We can make array destructuring patterns that do not use ... ignore any RHS 'length' property. We could even forbid this kind of "row capture" -- I'm not in favor.

Row capture in object patterns requires enumeration of RHS properties. Have you worked on that case too?

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 12:38 PM, Erik Arvidsson wrote:

On Sat, Nov 5, 2011 at 11:55, Brendan Eich <brendan at mozilla.com> wrote:

The issue with ... in an array destructuring pattern is different from the case without. We have a choice, as you say. It's not obvious that doing a "get" of 'length' on the RHS (once per ...) is the right answer. It's plausible in my view that ... captures all indexed properties (index as defined by ECMA-262).

// A
var [x, y, z] = {0: 0, 1: 1, 2: 2, length: 0};
$x $y $z // '0 1 2'

// B
var [x, ...xs] = {0: 0, 1: 1, 2: 2, length: 2};
$x [$xs] // '0 [1]'

I think A is clear. It is just syntactic sugar for x = $tmp[0] etc. No need to check length. In this case the lhs drives this decision.

Yay, agreed.

B IMO must check the length or it would have to iterate over all own(?) properties in rhs instead of just iterating over startIndex to length - 1.

That's what row capture must do when ... is used in an object pattern. Are you worried about performance?

It is also more consistent with other operations that work on array like objects (slice, apply, splice...).

True.

In Traceur we do the following:

var x, xs;
(function($0) {
  x = $0[0];
  xs = Array.prototype.slice.call($0, 1);
  return $0;
}).call(this, { 0: 0, 1: 1, 2: 2, length: 2 });

I believe this is simpler to understand than to say that all indexed properties are used.

Good, but more important than the exception building upon an existing understanding is your willingness to avoid getting 'length' if the array pattern has no ... in it. That is a kind of consistency argument that Allen made, which I don't prioritize very high among countervailing arguments.

We have:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

# Erik Arvidsson (13 years ago)

On Sat, Nov 5, 2011 at 14:41, Brendan Eich <brendan at mozilla.com> wrote:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

If we don't do start index to length - 1, then we have to determine the enumeration method. Is it:

A) own properties
B) all properties
C) object iterator and ignore the items before start index

Maybe C is preposterous but just putting it out there for consistency.

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 12:38 PM, Erik Arvidsson wrote:

On Sat, Nov 5, 2011 at 11:55, Brendan Eich <brendan at mozilla.com> wrote:

The issue with ... in an array destructuring pattern is different from the case without. We have a choice, as you say. It's not obvious that doing a "get" of 'length' on the RHS (once per ...) is the right answer. It's plausible in my view that ... captures all indexed properties (index as defined by ECMA-262).

// A
var [x, y, z] = {0: 0, 1: 1, 2: 2, length: 0};
$x $y $z // '0 1 2'

// B
var [x, ...xs] = {0: 0, 1: 1, 2: 2, length: 2};
$x [$xs] // '0 [1]'

I think A is clear. It is just syntactic sugar for x = $tmp[0] etc. No need to check length. In this case the lhs drives this decision.

B IMO must check the length or it would have to iterate over all own(?) properties in rhs instead of just iterating over startIndex to length - 1. It is also more consistent with other operations that work on array like objects (slice, apply, splice...).

In Traceur we do the following:

var x, xs;
(function($0) {
  x = $0[0];
  xs = Array.prototype.slice.call($0, 1);
  return $0;
}).call(this, { 0: 0, 1: 1, 2: 2, length: 2 });

I believe this is simpler to understand than to say that all indexed properties are used.

Just to be explicit about this point, the above use of slice is dependent upon length in exactly the way I'm suggesting...

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 2:28 PM, Brendan Eich wrote:

On Nov 5, 2011, at 10:24 AM, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 9:59 AM, John J Barton wrote:

...

I don't understand what "the source object only has two "array-like" elements" can mean.

If you used any of the array functions to process _rhs, they would ignore element 2. For example:

_rhs.forEach(function(v) {print(v)});

would print: 0 1

Array.prototype.forEach.call(_rhs, function(v){...})

This has too many moving parts to be a desugaring.

The forEach example isn't part of destructuring. It is a response to John saying he didn't know what "array-like" meant.

In general, destructuring already has too many moving parts to be a simple desugaring: array/object distinctions, rests on the LHS, default value specifiers, etc. I'm specifying it like any other feature in the language.

Also a mandatory [[Get]] of 'length' from _rhs before the evaluation of the array pattern is too much.

That's what led me to post the original questions. I realized that I needed to carefully manage access to the "length" property. I now have it specified so that exactly one "length" access is needed for each "array" destructuring, regardless of the number of elements that are assigned or whether a rest is involved. That seems potentially reasonable.

I don't see why 'length' needs to come into play unless there's a ... in the pattern, or even then. The alternative is to enumerate keys of _rhs and consider all for which key == ToString(ToUint32(key)).

Do you want a consistent set of rules for dealing with "array-like" objects, or do you think it is ok to make up rules for each new function or operation we invent? Right now, we have a very consistent processing pattern that is followed by all the array related functions that process multiple elements. That pattern is driven off of an initial length determination at the beginning of the algorithm.

Destructuring needs to look at both inherited and own properties. (If it doesn't, then the behavior of something like {a,b,c} = obj would be dependent upon the inheritance factoring of obj. That's bad. obj's inheritance structure should be one of its implementation details and not something that is so easily observable.)

The algorithm that would need to be specified to make destructuring rest work with inherited properties but without using length is not simple. We have a model for what it means to slice an object from a starting index to its "end". That's the model we should also apply here rather than inventing something new.

# Allen Wirfs-Brock (13 years ago)

On Nov 5, 2011, at 2:34 PM, Brendan Eich wrote:

On Nov 5, 2011, at 11:27 AM, Allen Wirfs-Brock wrote:

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Who says 'length' is used by an array destructuring pattern. That should not be assumed.

Just to clarify: everywhere else a range of "array" elements is used, the range is limited by the value of the "length" property, regardless of whether or not the object in question is a "real" array.

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Even if Till would not (and it's clear you would not be fine with this outcome ;-), we have a choice. We can make array destructuring patterns that do not use ... ignore any RHS 'length' property. We could even forbid this kind of "row capture" -- I'm not in favor.

I was fine with it too, until ... came along. If ... uses length (and I think it probably should, as I touched on in another reply) then element access also should.

Row capture in object patterns requires enumeration of RHS properties. Have you worked on that case too?

No, row capture was never accepted as part of the proposal.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 3:50 PM, Erik Arvidsson wrote:

On Sat, Nov 5, 2011 at 14:41, Brendan Eich <brendan at mozilla.com> wrote:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

If we don't do start index to length - 1, then we have to determine the enumeration method. Is it:

A) own properties
B) all properties

For {x, y, ...r} = o the answer would be (B) -- if o = {w:0} <| {x:1, y:2, z:3}, then r would be {w:0, z:3}.
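
A sketch of that row capture; rowCapture is a hypothetical helper, and Object.create stands in for <| here:

function rowCapture(obj, matched) {
  var rest = {};
  for (var key in obj) { // enumerates own and inherited enumerable properties
    if (matched.indexOf(key) === -1) rest[key] = obj[key];
  }
  return rest;
}
var o = Object.create({w: 0}); // approximates {w:0} <| {x:1, y:2, z:3}
o.x = 1; o.y = 2; o.z = 3;
rowCapture(o, ["x", "y"]); // {z: 3, w: 0}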

C) object iterator and ignore the items before start index

Iteration is for the "value domain", so I wouldn't mix it in here in key-land.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 5:40 PM, Allen Wirfs-Brock wrote:

In general, destructuring already has too many moving parts to be a simple desugaring: array/object distinctions, rests on the LHS, default value specifiers, etc. I'm specifying it like any other feature in the language.

Take away "rests" and what remains is desugaring or local transformation, no new semantics. Let's look:

[x, y] = a => t = a, x = t[0], y = t[1]

{p:x, q:y} = o => t = o, x = t.p, y = t.q

[z = w] = a => t = a, (z = (0 in t) ? t[0] : w)

{r: z = w} = o => t = o, (z = ('r' in t) ? t.r : w)

These compose straightforwardly. The {x} shorthand for {x: x} is even simpler.
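
A worked composition of these transformations; the temporaries t and u are illustrative:

// let [x, {q: y, r: z = 3}] = a, with
var a = [10, {q: 20}];
// desugars roughly to:
var t = a;
var x = t[0];                 // 10
var u = t[1];
var y = u.q;                  // 20
var z = ('r' in u) ? u.r : 3; // 3 -- 'r' is absent, so the default applies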

That's what led me to post the original questions. I realized that I needed to carefully manage access to the "length" property. I now have it specified so that exactly one "length" access is needed for each "array" destructuring, regardless of the number of elements that are assigned or whether a rest is involved. That seems potentially reasonable.

I don't think it is good to get 'length' if there's no rest. It's pure overhead for no reason. If 'length' has an effect-ful getter on some arraylike, there's no win in saying all array patterns invoke it. Rather, such a nasty array-like would be better avoided, but if it is used, the pay-only-for-what-you-take argument applies. IOW, I agree with Arv.

I don't see why 'length' needs to come into play unless there's a ... in the pattern, or even then. The alternative is to enumerate keys of _rhs and consider all for which key == ToString(ToUint32(key)).

Do you want a consistent set of rules for dealing with "array-like" objects, or do you think it is ok to make up rules for each new function or operation we invent? Right now, we have a very consistent processing pattern that is followed by all the array related functions that process multiple elements. That pattern is driven off of an initial length determination at the beginning of the algorithm.

Only when length is needed. It's not if there's no use for it. In the current controversy, it shouldn't be got if there's no rest in the enclosing array pattern.

# Brendan Eich (13 years ago)

On Nov 5, 2011, at 5:52 PM, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 2:34 PM, Brendan Eich wrote:

On Nov 5, 2011, at 11:27 AM, Allen Wirfs-Brock wrote:

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Who says 'length' is used by an array destructuring pattern? That should not be assumed.

Just to clarify: everywhere else a range of "array" elements is used, the range is limited by the value of the "length" property, regardless of whether or not the object in question is a "real" array.

Ok, but perhaps we agree, since I've thrown in with Arv on getting 'length' for an array pattern directly containing a ... special form. But not for other array patterns!

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Even if Till would not (and it's clear you would not be fine with this outcome ;-), we have a choice. We can make array destructuring patterns that do not use ... ignore any RHS 'length' property. We could even forbid this kind of "row capture" -- I'm not in favor.

I was fine with it too, until ... came along. If ... uses length (and I think it probably should, as I touched on in another reply) then element access also should.

That "should" doesn't follow. It's a choice and the way you chose violates pay-for-what-you-take. Not something I want in my favored dining establishments. Salad buyer pays for steak? Hmph!

Row capture in object patterns requires enumeration of RHS properties. Have you worked on that case too?

No, row capture was never accepted as part of the proposal.

Well, Arv added ... in array patterns under "issues" and I don't remember working through the ramifications -- indeed, here we are. Why rule out row capture? Arrays are "easier" because of 'length' but we enumerate object keys in many places in ES5 already. What's one more?

# Till Schneidereit (13 years ago)

On Sun, Nov 6, 2011 at 03:02, Brendan Eich <brendan at mozilla.com> wrote:

On Nov 5, 2011, at 5:52 PM, Allen Wirfs-Brock wrote:

On Nov 5, 2011, at 2:34 PM, Brendan Eich wrote:

On Nov 5, 2011, at 11:27 AM, Allen Wirfs-Brock wrote:

It should, as no length is assumed to mean "length === 0", IIUC, and that seems so unintuitive to me that it sways my opinion towards not imposing array-ness on the RHS.

but that is exactly how it works everywhere else length is used in ES.

Who says 'length' is used by an array destructuring pattern? That should not be assumed.

Just to clarify: everywhere else a range of "array" elements is used, the range is limited by the value of the "length" property, regardless of whether or not the object in question is a "real" array.

Ok, but perhaps we agree, since I've thrown in with Arv on getting 'length' for an array pattern directly containing a ... special form. But not for other array patterns!

So you would be fine with the fact that

var r = Array.prototype.slice.call({0:0, 1:1, length: 2, 2:2}, 0);

produces  [0,1]

but

var [...rx] = {0:0, 1:1, length: 2, 2:2};

produces [0,1,2]

Even if Till would not (and it's clear you would not be fine with this outcome ;-), we have a choice. We can make array destructuring patterns that do not use ... ignore any RHS 'length' property. We could even forbid this kind of "row capture" -- I'm not in favor.

I was fine with it too, until ... came along. If ... uses length (and I think it probably should, as I touched on in another reply) then element access also should.

That "should" doesn't follow. It's a choice and the way you chose violates pay-for-what-you-take. Not something I want in my favored dining establishments. Salad buyer pays for steak? Hmph!

After thinking about this some more, I think I'd be fine with the above outcome after all. It seems to me as though the array destructuring pattern really only uses the syntax of arrays, not necessarily their semantics. Once row capture comes into play, not even the syntax matches completely anymore.

The important difference is that, as opposed to the slice example above, at no point during array destructuring is an array object created or modified. Given that, isn't the array destructuring pattern really a convenient way of saying "in this destructuring, all keys are numeric indices, starting at 0 and increasing linearly by 1"? I.e., it's mostly syntax for another form of shorthand, pretty similar to {x} = {x:1}.

Assuming that, I don't see why length should be used anymore.

I'm still with Allen on one point: length should either not be used at all, or be used in all array destructuring operations. The alternative would turn refactoring to ... into a potentially surprising operation for destructuring, compared to its other uses. Consider:

[a, b, c] = {length:0, 0:0, 1:1, 2:2} // results in a = 0, b = 1, c = 2

Now if that's refactored to [a, ...r]

if length is used, the result is a = 0, r = []

if length isn't used, the result is a = 0, r = [1, 2]

The latter is exactly the behavior of using ... in function signatures and, I posit, much less surprising in itself.
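
For comparison, a sketch of that function-signature behavior using the proposed rest-parameter syntax: rest collects whatever extra arguments were actually passed, and no 'length' property on any object is consulted:

function f(a, ...r) { return r; }
f(0, 1, 2); // [1, 2]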

I can't say anything about implementation hardships, but from a usability point of view, I'd say the usage of length should be [no, no] or [yes, yes], with a strong preference for the former.

till

# Axel Rauschmayer (13 years ago)

One more thought: Should the following assignments produce the same result?

 [x, y, z] = {length:0, 0:0, 1:1, 2:2}
 [x, y, z] = [].slice.call({length:2, 0:0, 1:1, 2:2})

Then the decision boils down to whether an array conversion happens (however implicitly) on the rhs or whether the lhs is syntactic sugar for x = rhs[0], y = rhs[1], z = rhs[2]

# John J Barton (13 years ago)

On Sat, Nov 5, 2011 at 11:16 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Nov 5, 2011, at 11:01 AM, John J Barton wrote:

I can see why my version is wrong: I am interpreting square brackets on the RHS like JS devs would.

let z = _rhs[0]; // LHS was array, so get zeroth elt, but RHS is object so property access

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array:  let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

What do you think this returns:

Array.prototype.pop.call({0:0, 1:1, length: 2, 2:2})

The answer depends upon the definition of pop(). For an array argument we can predict based on our experience with pop(). Otherwise we cannot predict without more information. That's the nature of a dynamically typed language.

Consequently this example does not bear upon the thread in my opinion. In the example let [z,y,z] = {0:0, 1:1, length: 2, 2:2}; the language is inserting some operation to map the RHS to the LHS. The answer depends upon the operation inserted; it need not be related to pop().

jjb

# Allen Wirfs-Brock (13 years ago)

On Nov 6, 2011, at 8:03 AM, John J Barton wrote:

On Sat, Nov 5, 2011 at 11:16 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

What do you think this returns:

Array.prototype.pop.call({0:0, 1:1, length: 2, 2:2})

The answer depends upon the definition of pop(). For an array argument we can predict based on our experience with pop(). Otherwise we cannot predict without more information. That's the nature of a dynamically typed language.

The definition of Array.prototype.pop is precisely provided by section 15.4.4.6 of the ECMAScript specification. That definition applies whether or not the "this value" of the function is a "real" array or just an ordinary object.

Consequently this example does not bear upon the thread in my opinion. In the example let [z,y,z] = {0:0, 1:1, length: 2, 2:2}; the language is inserting some operation to map the RHS to the LHS. The answer depends upon the operation inserted; it need not be related to pop().

The relevance is that pop is an example of how the ES specification consistently defines apparent array operations and functions in a manner that applies to all objects, not just instances of the Array constructor.

You said

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array: let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

I am showing, via pop, that we do know what it means to treat a regular object as an array.

Also, there is no actual "coerceToArray" operation in the ES specification, and such a coercion would not occur for destructuring. Instead, every operation/function that might access/operate upon an "array" is defined in such a way that its result is well defined for any object.

The ES spec. must define what happens in all cases. That is exactly the exercise we are engaged in here. How do we specify destructuring so that its behavior is both well defined in all situations and consistent both with other related parts of the language and with reasonable user expectations?

# Allen Wirfs-Brock (13 years ago)

On Nov 6, 2011, at 6:32 AM, Axel Rauschmayer wrote:

One more thought: Should the following assignments produce the same result?

 [x, y, z] = {length:0, 0:0, 1:1, 2:2}
 [x, y, z] = [].slice.call({length:2, 0:0, 1:1, 2:2})

Did you intend to use the same "length" value in both of the above lines?

Then the decision boils down to whether an array conversion happens (however implicitly) on the rhs or whether the lhs is syntactic sugar for x = rhs[0], y = rhs[1], z = rhs[2]

As I've mentioned in a reply to another message, there are currently no "array conversions" in ES. Instead, Array operations/functions are always defined in a manner that applies to all objects, not just instances of the Array constructor.

Here is another example of why desugaring to [ ] is too simplistic a view of destructuring:

let [a=0, b=1] = [,"one"];

Note the default value initializers to the right of the declared identifiers. Such initializers provide the value when the RHS of the initializer for the entire destructuring pattern does not.

What is the value of the variable 'a' after this statement? Is it undefined or is it 0?

If destructuring initialization is a simple desugaring to [ ] access then the above means the same as:

let _rhs = [,"one"];
let a = _rhs[0], b = _rhs[1];

and the value of 'a' will be undefined.

Note that there was no reason to include the =0 or =1 default initializers anywhere because _rhs[X] yields a value for all possible values of X. For most values of X, that will be the value undefined.

If we want such default value initializers to actually mean anything we have to use a more complex definition of destructuring. Approximately:

function hasProperty(obj, p) {
  // code that is equivalent to the ES5 [[HasProperty]] internal method:
  // return obj.[[HasProperty]](p)
}

let _rhs = [,"one"];
let a = (hasProperty(_rhs, 0) ? _rhs[0] : 0),
    b = (hasProperty(_rhs, 1) ? _rhs[1] : 1);

The spec. I'm writing already takes care of all of these cases. My main point above is that thinking of desugaring as just a simple assignment is a naive view that doesn't take into account all the bells and whistles that have been added to it since Lars' original implementation at Opera.

# John J Barton (13 years ago)

On Sun, Nov 6, 2011 at 9:09 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Nov 6, 2011, at 8:03 AM, John J Barton wrote:

On Sat, Nov 5, 2011 at 11:16 AM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

What do you think this returns:

Array.prototype.pop.call({0:0, 1:1, length: 2, 2:2})

The answer depends upon the definition of pop(). For an array argument we can predict based on our experience with pop(). Otherwise we cannot predict without more information. That's the nature of a dynamically typed language.

The definition of Array.prototype.pop is precisely provided by section 15.4.4.6 of the ECMAScript specification.  That definition applies whether or not the "this value" of the function is a "real" array or just an ordinary object.

Yes sure, I do understand that it has a precise and generic definition. But you posed the question as "what do you think", and I responded as a JS dev: I don't know without reading 15.4.4.6. I expect I have to know the details because pop(), like many JS methods, applies to all arguments, not just Arrays.

Consequently this example does not bear upon the thread in my opinion. In the example  let [z,y,z] = {0:0, 1:1, length: 2, 2:2}; the language is inserting some operation to map the RHS to the LHS. The answer depends upon the operation inserted; it need not be related to pop().

The relevance is that pop is an example of how the ES specification consistently defines apparent array operations and functions in a manner that applies to all objects, not just instances of the Array constructor.

I'd say this somewhat differently: the built-in Array operations could be defined to work only on Array objects, doing run-time type testing. But, staying in the spirit of ES generally, they are not. They just do their work and let the cards land where they fall.

You said

But the feature does not do this. Rather, since LHS is array, it coerces the RHS to an array:  let z = coerceToArray(_rhs)[0]; and we don't know what that operation means.

I am showing, via pop, that we do know what it means to treat a regular object as an array.

Yes, I understood your goal, but I don't agree that you know what it means to treat a regular object as an array. Rather you know how to implement pop() so it works on some non-Array objects. That knowledge may or may not help you define other operations. Knowing that pop() is generic and works on some non-Array objects does not, however, give me -- the JS dev -- any hints on other methods.

If ES defined a coerceToArray, then we would know what it means to treat a regular object as an array.

Also, there is no actual "coerceToArray" operation in the ES specification and such a coercion would not occur for destructuring.  Instead, every operation/function that might access/operate upon an "array" is defined in such a way that its result is well defined for any object.

I think you are agreeing with me ;-). You are saying there is no common operation 'coerceToArray'. Coerce makes sense in a system of types: to apply an Array op to any object, first coerceToArray then apply the Array op. We don't do that in JS.

Instead each operation applies some sub-operations to the argument. If the sub-operations succeed and make sense, then the operation succeeds and makes sense.

Thus the answer from pop() does not tell us anything about destructuring. We need to know the sub-operations of destructuring.

Now it is possible that you know an implicit coerceToArray and you know it will tell us how to implement pop() and destructuring. But it would be news to me, and I guess to most other devs.

The ES spec. must define what happens in all cases.  That is exactly the exercise we are engaged in here.  How do we specify destructuring so that its behavior is both well defined in all situations and consistent both with other related parts of the language and with reasonable user expectations?

Yes, I totally understand the challenge you face. My initial user expectation was that destructuring was a kind of assignment: it's a shorthand for individual assignments. So mapping the object property at name '2' is the answer I expect.

jjb

# Axel Rauschmayer (13 years ago)

One more thought: Should the following assignments produce the same result?

[x, y, z] = {length:2, 0:0, 1:1, 2:2}
[x, y, z] = [].slice.call({length:2, 0:0, 1:1, 2:2})

did you intend to use the same "length" value in both of the above lines?

Yes, corrected above.

Then the decision boils down to whether an array conversion happens (however implicitly) on the rhs or whether the lhs is syntactic sugar for x = rhs[0], y = rhs[1], z = rhs[2]

As I've mentioned in a reply to another message, there are currently no "array conversions" in ES. Instead, Array operations/functions are always defined in a manner that applies to all objects, not just instances of the Array constructor.

I would argue that [].slice.call(value) is the de-facto way of converting a value (such as arguments) to an array.
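For example, the familiar idiom (nothing new here, just the pattern I mean):

function f() {
  // "arguments" is array-like but not an Array; slice copies its
  // indexed properties, up to length, into a real Array
  var args = [].slice.call(arguments);
  return args.join("-");
}
f(1, 2, 3); // "1-2-3"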

Here is another example of why desugaring to [ ] is too simplistic a view of destructuring:

let [a=0, b=1] = [,"one"];

Note the default value initializers to the right of the declared identifiers. Such initializers provide the value when the RHS of the initializer for the entire destructuring pattern does not.

What is the value of the variable 'a' after this statement? Is it undefined or is it 0?

If destructuring initialization is a simple desugaring to [ ] access then the above means the same as:

let _rhs = [,"one"]; let a=_rhs[0], b=_rhs[1];

and the value of 'a' will be undefined.

Note that there was no reason to include the =0 or =1 default initializers anywhere because _rhs[X] yields a value for all possible values of X. For most values of X, that will be the value undefined.

If we want such default value initializers to actually mean anything we have to use a more complex definition of destructuring. Approximately:

function hasProperty(obj, p) {
  // equivalent to the ES5 [[HasProperty]] internal method,
  // which consults both own and inherited properties
  return p in obj;
}

let _rhs = [,"one"]; let a = (hasProperty(_rhs,0) ? _rhs[0] : 0), b = (hasProperty(_rhs,1) ? _rhs[1] : 1);

The spec. I'm writing already takes care of all of these cases. My main point above is that thinking of desugaring as just a simple assignment is a naive view that doesn't take into account all the bells and whistles that have been added to it since Lars' original implementation at Opera.

Very interesting. Still seems orthogonal to the decision whether or not to coerce the rhs to array (whatever that means). If []= is considered an “array operator” then coercion seems in line with how other operators behave.

If I wanted to ignore the length property, I would use: let { 0: a, 1: b } = someValue

# Allen Wirfs-Brock (13 years ago)

On Nov 6, 2011, at 10:01 AM, Axel Rauschmayer wrote:

...

Very interesting. Still seems orthogonal to the decision whether or not to coerce the rhs to array (whatever that means). If []= is considered an “array operator” then coercion seems in line with how other operators behave.

If I wanted to ignore the length property, I would use: let { 0: a, 1: b } = someValue

This is pretty much my thinking too. A programmer's choice to use [ ] instead of { } as the destructuring pattern means something. By choosing to use [ ] as the pattern they are expressing the intent to view someValue using "array element" access semantics. If they don't care about such things, they can use the { } pattern.

# Axel Rauschmayer (13 years ago)

If I wanted to ignore the length property, I would use: let { 0: a, 1: b } = someValue

This is pretty much my thinking too. A programmer's choice to use [ ] instead of { } as the destructuring pattern means something. By choosing to use [ ] as the pattern they are expressing the intent to view someValue using "array element" access semantics. If they don't care about such things, they can use the { } pattern.

One would have to rethink all of these issues if the namespaces of array indices and of property names became disjoint in the future. But it’s just an edge case, anyway.

# Allen Wirfs-Brock (13 years ago)

On Nov 6, 2011, at 3:53 AM, Till Schneidereit wrote:

...

After thinking about this some more, I think I'd be fine with the above outcome after all. It seems to me as though the array destructuring pattern really only uses the syntax of arrays, not necessarily their semantics. Once row capture comes into play, not even the syntax matches completely, anymore.

I'm skeptical of ES ever supporting row capture. It is essentially a shallow copy (with exclusions) operation and it brings into play all the copy semantics issues that up to this point we have been unwilling to take on.

The important difference is that, as opposed to the slice example above, at no point during array destructuring is an array object created or modified.

Well, a (real) array object is created if a rest pattern is used.

Given that, isn't the array destructuring pattern really a convenient way of saying "in this destructuring, all keys are numeric indices, starting at 0 and increasing linearly by 1"? I.e., it's mostly syntax for another form of shorthand, pretty similar to {x} = {x:1}.

or does the use of the [ ] pattern also imply some deeper user intent about applying array-like semantics?

Assuming that, I don't see why length should be used, anymore.

I'm still with Allen on one point: length should either not be used at all, or used in all array destructuring operations. The alternative would turn refactoring to ... into a potentially surprising operation for destructuring, compared to its other uses. Consider:

[a, b, c] = {length:0, 0:0, 1:1, 2:2} // results in a = 0, b = 1, c = 2

Now if that's refactored to [a, ...r]

if length is used, the result is a = 0, r = []

if length isn't used, the result is a = 0, r = [1, 2]

The latter is exactly the behavior of using ... in function signatures and, I posit, much less surprising in itself.

I'm not sure the comparison to function parameters is valid for this case. A function parameter list can be modeled as a destructuring over the "arguments object" (that's actually how I'm in the process of specifying it) and for extended code (the only code that can use rest parameters) the "arguments object" is a real array with a valid "length" property. The RHS in your above example is not a well-formed array and could never appear as such an arguments object.

I can't say anything about implementation hardships, but from a usability point of view, I'd say the usage of length should be [no, no] or [yes, yes], with a strong preference for the former.

I do have concerns about the implementation implications of [no, no].

For [yes, yes] ... can be implemented as an iteration from the last accessed index position up to the value of the "length" property, checking for the existence of each index property key. For most non-array objects on the RHS, there won't be a "length" property so its integer value is considered to be 0 and no iteration occurs.
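A minimal sketch of that iteration (the helper name is made up; start is the first index not consumed by the positional patterns):

function gatherRest(rhs, start) {
  var rest = [];
  var len = rhs.length >>> 0; // a missing "length" coerces to 0
  for (var i = start; i < len; i++) {
    if (i in rhs) rest.push(rhs[i]); // existence check for each index key
  }
  return rest;
}
gatherRest({0:0, 1:1, 2:2, length: 2}, 1); // [1]; index 2 lies beyond length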

For [no, no] to implement ... we have to assume that the iteration limit is the max possible array index (currently Uint32 max - 1, but we are considering relaxing the Uint32 restriction on arrays). Since such an iteration bound is impractical, what implementations will instead have to do is inspect every property (including inherited properties with proper application of shadowing rules) of the RHS object to determine whether they are "array index" properties whose keys are greater than the last accessed index. Note that this even has to be done for "real" array objects because they can have inherited integer keyed properties whose keys are greater than the "length" value of the array. Since this is a property inspection, not an integer-ranged loop, we will ultimately have to sort the identified properties into index order before populating the result array.

Note that in most cases there will be no such integer keyed properties to gather but we still have to inspect every property to see if it is one. So, we have turned destructuring ... into a potentially quite expensive operation, even for simple cases. Perhaps implementations can apply some of their already implemented "array" indexed property optimizations to limit the search space but I still think that we should think about such implementation considerations as we make these semantic decisions.
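Roughly, the [no, no] scan looks like this (a sketch only, not spec text; for-in stands in for the full own-plus-inherited property inspection with shadowing rules):

function gatherRestNoLength(rhs, start) {
  var indices = [];
  for (var k in rhs) { // enumerable own and inherited keys
    var n = Number(k);
    // keep only array-index keys at or above the rest position
    if (String(n) === k && n === (n >>> 0) && n >= start && n < 0xFFFFFFFF) {
      indices.push(n);
    }
  }
  indices.sort(function (a, b) { return a - b; }); // restore index order
  return indices.map(function (i) { return rhs[i]; });
}
gatherRestNoLength({0:0, 1:1, 2:2, length: 2}, 1); // [1, 2]; "length" ignored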

# Brendan Eich (13 years ago)

On Nov 6, 2011, at 11:18 AM, Allen Wirfs-Brock wrote:

On Nov 6, 2011, at 3:53 AM, Till Schneidereit wrote:

...

After thinking about this some more, I think I'd be fine with the above outcome after all. It seems to me as though the array destructuring pattern really only uses the syntax of arrays, not necessarily their semantics. Once row capture comes into play, not even the syntax matches completely, anymore.

I'm skeptical of ES ever supporting row capture. It is essentially a shallow copy (with exclusions) operation and it brings into play all the copy semantics issues that up to this point we have been unwilling to take on.

We have JS code such as Prototype's Object.extend that does for-in loops copying (own and inherited) enumerable properties from src to dest. It's not a problem.

Why wouldn't row capture work similarly? Perhaps only own enumerable properties. Or all own properties, but the point is there's no deep clone complexity.
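Something like this for-in sketch, say (own enumerable properties only; the helper name and exact semantics are assumptions, not a proposal):

function rowCapture(src, excluded) {
  var rest = {};
  for (var k in src) {
    if (Object.prototype.hasOwnProperty.call(src, k) &&
        excluded.indexOf(k) < 0) {
      rest[k] = src[k]; // shallow copy; no closure or deep cloning
    }
  }
  return rest;
}
// let {x, ...r} = obj could then mean roughly:
// let x = obj.x, r = rowCapture(obj, ["x"]);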

The important difference is that, as opposed to the slice example above, at no point during array destructuring is an array object created or modified.

Well, a (real) array object is created if a rest pattern is used.

Why?

Given that, isn't the array destructuring pattern really a convenient way of saying "in this destructuring, all keys are numeric indices, starting at 0 and increasing linearly by 1"? I.e., it's mostly syntax for another form of shorthand, pretty similar to {x} = {x:1}.

or does the use of the [ ] pattern also imply some deeper user intent about applying array-like semantics?

Not in our experience, or Opera's, or Rhino's.

Maybe this is an edge case and I should quit beefing. However, I do not like the aesthetics or the economics of a spec that gets 'length' for all array patterns, even if not needed.

For [no, no] to implement ... we have to assume that the iteration limit is the max possible array index (currently Uint32 max - 1,

No. We only need look at the properties that are actually present. I think you are dismissing row capture in object patterns and therefore focusing too narrowly.

# Lasse Reichstein (13 years ago)

On Sat, Nov 5, 2011 at 10:41 PM, Brendan Eich <brendan at mozilla.com> wrote:

We have:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

My initial intuition was [no, ?], since that was without considering rest-matching. With rest-matching, and me being a sucker for consistency, I'm now leaning towards [yes, yes].

The possibilities are [no, no], [no, yes] and [yes, yes] (since [yes, no] makes absolutely no sense).

[no, no] is definitely possible. It needs to be defined which properties are included by the ... in, say, [x,y,...r], but since the result in r must be an array, it would seem that any array index property of the RHS where you can subtract 2 and still be an array index is a candidate.

[no, yes] is also possible, but seems inconsistent if [x,y,z] = {0:0,1:1,2:2, length:2} makes z be 2, but [x,y,...z] = {0:0,1:1,2:2, length:2} doesn't make z be [2].

[yes, yes] means always treating the RHS as an array-like object, respecting the length property.

Both [no, no] and [yes, yes] are consistent. Either the result always depends on "array-like-ness" of the RHS or it never does, i.e., either it reads the length and converts it to a UInt32 (or Integer), or it doesn't.

If the object being destructured is in fact a plain Array, with no inherited elements above the length, then there is no difference. This is most likely the (very) common use case. This is what the ES programmers' intuition will be based on.

So the question is which behavior we would want for something that breaks the array-like-contract - treat it as a plain object (ignore length), or treat it as an array-like object (ignoring properties above length). We must do one of them, and not both, because they are mutually exclusive.

Both can cause errors if the programmer does it wrong, if you have an almost-array-like object. Which is the correct behavior depends on what was intended: to be array-like or not.

The original question was what an ES programmer would expect. I think he will probably expect array-like deconstructors to treat the RHS as an array(-like object). I.e., [yes,yes].

That also has the advantage of actually providing otherwise unavailable functionality. You can write {0:x, 1:y, 2:z} instead of [x,y,z] if you want object-behavior, but if they are the same, you can't get array-like behavior.
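Concretely, under [yes, yes] (illustrative values only):

let src = {0:0, 1:1, 2:2, length: 2};
let {0: x, 1: y, 2: z} = src; // plain object behavior: z === 2
let [a, b, c] = src;          // array-like behavior: c === undefined, since index 2 >= length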

Arrays are just an abstraction in ECMAScript, which all the Array.prototype methods that are "intentionally generic" prove. If it quacks like an Array and swims like an Array, we allow ourselves to treat it like an Array.

I.e., I think the most easily comprehensible behavior is to make array destructuring treat the RHS as an Array. It matches the common use-case (actual arrays), it is consistent (does the same whether you use ... or not), and is easily explainable.

# Andreas Rossberg (13 years ago)

On 5 November 2011 17:44, Brendan Eich <brendan at mozilla.com> wrote:

Destructuring is "irrefutable" in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable

I don't think that's true, at least not in the usual sense of "irrefutable pattern". Because you can write

let {x} = 666

which will be refuted, by raising a TypeError.

Of course, the real question is, what does this do:

let {} = 666

# Andreas Rossberg (13 years ago)

On 5 November 2011 19:55, Brendan Eich <brendan at mozilla.com> wrote:

On Nov 5, 2011, at 9:38 AM, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4]  (and if the latter how is that determined)?

The inspiration for ... in the past came from (among other sources) Successor ML:

successor-ml.org/index.php?title=Functional_record_extension_and_row_capture

Since I actually wrote half of that, I feel obliged to say that it does not answer the questions raised here. ML is a typed language, and contrary to popular belief, many language design problems are much easier to solve in a typed setting.

However, there is some inspiration in the way SML treats tuples as special cases of records, very much like arrays are a special case of objects in JS. In particular, all of SML's pattern matching rules for tuples follow just from the way they desugar into records with numeric labels.

For Harmony, this kind of equivalence would imply that

let [x, y, z] = e

is simply taken to mean

let {0: x, 1: y, 2: z} = e

and the rest follows from there. The only problem is rest patterns. One possible semantics could be treating

let [x, y, z, ...r] = e

as equivalent to

let {0: x, 1: y, 2: z, ..._r} = e
let r = [].slice.call(_r, 3)

where I assume the "canonical" matching semantics for object rest patterns that would make _r an ordinary object (not an array) accumulating all properties of e not explicitly matched (even if e itself is an array, in which case _r includes a copy of e's length property). Of course, engines would optimize properly.

(But yes, row capture for objects introduces a form of object cloning, as Allen points out.)

# Till Schneidereit (13 years ago)

I.e., I think the most easily comprehensible behavior is to make array destructuring treat the RHS as an Array. It matches the common use-case (actual arrays), it is consistent (does the same whether you use ... or not), and is easily explainable.

I agree with the consistency argument. The reason I'm in favor of [no, no] is that otherwise [x,y,z] = {0:0, 1:1, 2:2} would result in x=undefined,y=undefined,z=undefined

That doesn't seem desirable to me.

# Allen Wirfs-Brock (13 years ago)

On Nov 7, 2011, at 2:18 AM, Andreas Rossberg wrote:

On 5 November 2011 17:44, Brendan Eich <brendan at mozilla.com> wrote:

Destructuring is "irrefutable" in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable

I don't think that's true, at least not in the usual sense of "irrefutable pattern". Because you can write

let {x} = 666

which will be refuted, by raising a TypeError.

No,

It does ToObject(666) and then looks for the "x" property of the resulting wrapper object. Assuming it does find one (it could, because, for example, Number.prototype.x = 42), it assigns the value to x. If it doesn't find the property it assigns undefined.
For let {x=5} = 666;

It would assign 5 to x if the "x" property of the wrapper was not found.
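To illustrate the lookup being described (an assumption about the eventual spec behavior, and assume each declaration runs in a fresh scope):

Number.prototype.x = 42;
let {x} = 666;     // ToObject(666) wraps 666; the inherited "x" is found, so x === 42

delete Number.prototype.x;
let {x = 5} = 666; // "x" is not found on the wrapper, so the default applies: x === 5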

Of course, the real question is, what does this do:

let {} = 666

It does ToObject(666)

# Andreas Rossberg (13 years ago)

On 7 November 2011 17:07, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

let {x} = 666

which will be refuted, by raising a TypeError.

No,

It does ToObject(666) and then looks for the "x" property of the resulting wrapper object.

Ouch, really? I don't see that in the proposal (harmony:destructuring), and to be honest, it sounds like a horrible idea. It is just another way to silently inject an `undefined' that is tedious to track down. We already have too many of those...

When would this ever be useful behaviour instead of just obfuscating bugs?

# Allen Wirfs-Brock (13 years ago)

On Nov 7, 2011, at 8:23 AM, Andreas Rossberg wrote:

On 7 November 2011 17:07, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

let {x} = 666

which will be refuted, by raising a TypeError.

No,

It does ToObject(666) and then looks for the "x" property of the resulting wrapper object.

Ouch, really? I don't see that in the proposal (harmony:destructuring), and to be honest, it sounds like a horrible idea.

Proposals typically don't cover this level of detail. These are the sort of things that we have to sort out when I write the specification, and it's why I bring them up here.

It is just another way to silently inject an `undefined' that is tedious to track down. We already have too many of those...

It is how the language currently behaves in all situations where an object is needed but a primitive value is provided. We want consistency in language design, not a hodgepodge of special cases and different rules.

When would this ever be useful behaviour instead of just obfuscating bugs?

let {toFixed, toExponential} = 42;
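The extracted functions are the ones from Number.prototype, so they still need an explicit receiver when called:

let {toFixed} = 42;       // toFixed === Number.prototype.toFixed
toFixed.call(3.14159, 2); // "3.14"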

# Allen Wirfs-Brock (13 years ago)

On Nov 6, 2011, at 9:13 PM, Brendan Eich wrote:

On Nov 6, 2011, at 11:18 AM, Allen Wirfs-Brock wrote:

On Nov 6, 2011, at 3:53 AM, Till Schneidereit wrote:

...

After thinking about this some more, I think I'd be fine with the above outcome after all. It seems to me as though the array destructuring pattern really only uses the syntax of arrays, not necessarily their semantics. Once row capture comes into play, not even the syntax matches completely, anymore.

I'm skeptical of ES ever supporting row capture. It is essentially a shallow copy (with exclusions) operation and it brings into play all the copy semantics issues that up to this point we have been unwilling to take on.

We have JS code such as Prototype's Object.extend that does for-in loops copying (own and inherited) enumerable properties from src to dest. It's not a problem.

Why wouldn't row capture work similarly? Perhaps only own enumerable properties. Or all own properties, but the point is there's no deep clone complexity.

If you believe that there is complexity in cloning objects (and you have in other threads) then this seems like essentially the same clone operation but with some properties excluded.

In other words,

let {x, ...nobj} = obj;

seems equivalent to

let {x} = obj; let nobj = obj.cloneExcluding("x");

What is the [[Prototype]] of nobj? Does it get any non-default internal properties of obj? Are private named properties copied? Are any internal invariants maintained or reestablished? What if obj is a Proxy or host object?

The important difference is that, as opposed to the slice example above, at no point during array destructuring is an array object created or modified.

Well, a (real) array object is created if a rest pattern is used.

Why?

let [...restObj] = obj;

restObj is a real Array object.

Given that, isn't the array destructuring pattern really a convenient way of saying "in this destructuring, all keys are numeric indices, starting at 0 and increasing linearly by 1"? I.e., it's mostly syntax for another form of shorthand, pretty similar to {x} = {x:1}.

or does the use of the [ ] pattern also imply some deeper user intent about applying array-like semantics?

Not in our experience, or Opera's, or Rhino's.

Maybe this is an edge case and I should quit beefing. However, I do not like the aesthetics or the economics of a spec that gets 'length' for all array patterns, even if not needed.

I think it is an edge case, and Opera/FF don't have rest destructuring, which is where the issues are most apparent. Given both of these points I'm not sure how informative the previous experience is.

As an edge case, I'm just trying to ensure that edge behavior is generally consistent across the language so people can reason about such cases without having to know and apply arbitrarily different rules in each such situation. This is where I think the general behavior of arrays and array functions pretty directly tells us what the edge case behavior should be.

I actually think that [no, yes] is the closest to actual Array behavior. E.g.:

let weirdArray = Array.prototype <| {3: 3} <| [0,1,2];

let a = weirdArray[0], b = weirdArray[2], r = weirdArray.slice(2); // a==0, b==2, r==[2]
let c = weirdArray[3]; // c==3; array [ ] access doesn't check the length...

We could make destructuring work this way.

For [no, no] to implement ... we have to assume that the iteration limit is the max possible array index (currently Uint32 max - 1,

No. We only need look at the properties that are actually present. I think you are dismissing row capture in object patterns and therefore focusing too narrowly on the "array" word in array patterns. If we want row capture in both, it will entail enumeration of actual properties, not wild integer counting till 2^32 or 2^53.

I actually addressed this in the rest of the paragraph this quote comes from. Of course, you would only look at the actual properties but you have to select out the array-indexed properties and order them appropriately. This is significantly complicated by property inheritance. The bottom line is that what seemingly should be a simple and cheap destructuring such as:

let [a,...rest] = sup <| [0,1]; //should we just have to iterate from 1 to 1 to create rest

can be arbitrarily complex depending upon the shape of sup. Even if it or its ancestors don't define any array indexed properties, that still needs to be checked for. You don't even need to have the sup <| in order to have the issue, because Array.prototype and Object.prototype can contribute array index properties that lie beyond the length bound.

# Allen Wirfs-Brock (13 years ago)

On Nov 7, 2011, at 3:50 AM, Till Schneidereit wrote:

I.e., I think the most easily comprehensible behavior is to make array destructuring treat the RHS as an Array. It matches the common use-case (actual arrays), it is consistent (does the same whether you use ... or not), and is easily explainable.

I agree with the consistency argument. The reason I'm in favor of [no, no] is that otherwise [x,y,z] = {0:0, 1:1, 2:2} would result in x=undefined,y=undefined,z=undefined

That doesn't seem desirable to me.

Yes, as I mentioned in another reply array [ ] access doesn't check length. But "slicing" operations including Array.prototype.slice and Function.prototype.apply (slices from 0 to length) all do use length.

That is why [no, yes] may actually be the most consistent approach.

# Andreas Rossberg (13 years ago)

On 7 November 2011 17:34, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

It is just another way to silently inject an `undefined' that is tedious to track down.  We already have too many of those...

It is how the language currently behaves in all situations where an object is needed but a primitive value is provided.  We want consistency in language design, not a hodgepodge of special cases and different rules.

Hm, I don't quite buy that. There are plenty of places in ES today where we don't convert but throw, e.g. "in", "instanceof", various methods of Object, etc. Destructuring arguably is closely related to operators like "in". Implicit conversion would violate the principle of least surprise for either, IMHO.

I agree that consistency is a nice goal, but it seems like that train is long gone for ES. Also, if consistency implies proliferating an existing design mistake then I'm not sure it should have the highest priority.

When would this ever be useful behaviour instead of just obfuscating bugs?

let {toFixed, toExponential} = 42;

OK, I guess "useful" is a flexible term. Would you recommend using that style as a feature?

# Axel Rauschmayer (13 years ago)

How about:

let {length} = "abc";

I think the conversion keeps the illusion alive that every value in JS is an object.

# Allen Wirfs-Brock (13 years ago)

On Nov 7, 2011, at 9:32 AM, Axel Rauschmayer wrote:

How about:

let {length} = "abc";

or let [first,second] = "abc";
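Both rely on ToObject producing a String wrapper with indexed properties and a length, so (assuming the property-access desugaring discussed in this thread):

let {length} = "abc";        // length === 3
let [first, second] = "abc"; // first === "a", second === "b"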

# Allen Wirfs-Brock (13 years ago)

On Nov 7, 2011, at 9:21 AM, Andreas Rossberg wrote:

On 7 November 2011 17:34, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

It is just another way to silently inject an `undefined' that is tedious to track down. We already have too many of those...

It is how the language currently behaves in all situations where an object is needed but a primitive value is provided. We want consistency in language design, not a hodgepodge of special cases and different rules.

Hm, I don't quite buy that. There are plenty of places in ES today where we don't convert but throw, e.g. "in", "instanceof", various methods of Object, etc. Destructuring arguably is closely related to operators like "in". Implicit conversion would violate the principle of least surprise for either, IMHO.

True, "in" and "instanceof" don't follow the rules. I note that they were added in ES3 and I have to wonder if they aren't another case of features being added without sufficient thought being given to maintaining consistency of behavior throughout the language. I don't know, I wasn't there.

The same can be said for a few cases in the Object functions that were added for ES5. If I had the same depth of understanding of the internals of the language as I do now, I probably would have objected to those variances.

I agree that consistency is a nice goal, but it seems like that train is long gone for ES. Also, if consistency implies proliferating an existing design mistake then I'm not sure it should have the highest priority.

Perhaps not the highest priority, but still a priority.

As the specification writer, I have in my head (yes, it would be good to write them down) a set of routine and consistent behaviors that I apply as I compose the specification algorithms. I think this is similar to the conceptual understanding of the language that an expert JS programmer uses as they write code. Whenever something deviates from that norm, it has to be given special consideration. For a new feature, my starting assumption is always that it will follow the norm. Increasing the number of deviations from the norm doesn't necessarily make the language better but it certainly makes it less internally consistent and harder to reason about.

Whether or not a particular consistent behavior was a "design mistake" is usually a subjective evaluation and I'm not sure if it is particularly relevant. The core language is what it was and that is what we have to work with. Most such "mistakes" can't be pervasively fixed. It isn't at all clear to me that spot fixing only new occurrences of such "mistakes" makes JS a better language.

# Brendan Eich (13 years ago)

On Nov 7, 2011, at 12:59 AM, Lasse Reichstein wrote:

If the object being destructured is in fact a plain Array, with no inherited elements above the length, then there is no difference. This is most likely the (very) common use case. This is what the ES programmers' intuition will be based on.

Agreed.

The original question was what an ES programmer would expect. I think he will probably expect array-like deconstructors to treat the RHS as an array(-like object). I.e., [yes,yes].

js> a = [0,1]
[0, 1]
js> Array.prototype[2] = 2
2
js> a.length
2
js> a[2]
2

We do not filter Array [[Get]] of an index that happens to name an inherited property based on length.

This still doesn't mean a whole lot, as you say. It's the very uncommon case. But it means that with inherited indexed properties, length is not always overriding.

That also has the advantage of actually providing otherwise unavailable functionality. You can write {0:x, 1:y, 2:z} instead of [x,y,z] if you want object-behavior, but if they are the same, you can't get array-like behavior.

This is a good point. Allen made it too, IIRC.

Arrays are just an abstraction in ECMAScript, which all the Array.prototype methods that are "intentionally generic" prove. If it quacks like an Array and swims like an Array, we allow ourselves to treat it like an Array.

See above for a meow from the array-with-inherited-indexed-properties-not-below-length duck. But that's an edge case, I agree.

I.e., I think the most easily comprehensible behavior is to make array destructuring treat the RHS as an Array. It matches the common use-case (actual arrays), it is consistent (does the same whether you use ... or not), and is easily explainable.

The destructuring becomes a bit more complicated, with a temporary for rhs.length and a one-time up-front "get" of that property, and lhs positional index tests against that length temporary. Still bugs me, probably as an implementor but also just in terms of more complicated desugaring.

# Brendan Eich (13 years ago)

On Nov 7, 2011, at 2:18 AM, Andreas Rossberg wrote:

On 5 November 2011 17:44, Brendan Eich <brendan at mozilla.com> wrote:

Destructuring is "irrefutable" in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable

I don't think that's true, at least not in the usual sense of "irrefutable pattern". Because you can write

let {x} = 666

which will be refuted, by raising a TypeError.

Nope. You get undefined. That's why it's irrefutable -- you can't build refutable matching on this (you'd need an OOB value other than undefined, or exceptions).

js> let {x} = 666

js> x

js>

Of course, the real question is, what does this do:

let {} = 666

No-op. We worked this out for ES4; I had originally made it an early error, but Lars Hansen argued for the 0-identifier basis case:

js> let {} = 666

js>

This can simplify code generators slightly. It's not a big deal but I agree with Lars, there should be no error case here.

# Brendan Eich (13 years ago)

On Nov 7, 2011, at 3:04 AM, Andreas Rossberg wrote:

On 5 November 2011 19:55, Brendan Eich <brendan at mozilla.com> wrote:

On Nov 5, 2011, at 9:38 AM, Allen Wirfs-Brock wrote:

In a similar vein, what is the value of r in:

let [z,y,...r] = {0:0, 1:1, 2:2, length: 3, 3:3,4:4};

should it be [2] or [2,3,4] (and if the latter how is that determined)?

The inspiration for ... in the past came from (among other sources) Successor ML:

successor-ml.org/index.php?title=Functional_record_extension_and_row_capture

Since I actually wrote half of that, I feel obliged to say that it does not answer the questions raised here. ML is a typed language, and contrary to popular belief, many language design problems are much easier to solve in a typed setting.

Absolutely. Remember, we sought inspiration there back in ES4 days, with optional types of some sort hovering (and eventually flying away, presumably to Denmark ;-).

However, there is some inspiration in the way SML treats tuples as special cases of records, very much like arrays are a special case of objects in JS. In particular, all of SML's pattern matching rules for tuples follow just from the way they desugar into records with numeric labels.

Yes, this was our thinking for destructuring, which first appeared in Opera, got some ES4 wiki-level spec drafting, and fed into the SpiderMonkey and Rhino implementations.

One possible semantics could be treating

let [x, y, z, ...r] = e

as equivalent to

let {0: x, 1: y, 2: z, ..._r} = e
let r = [].slice.call(_r, 3)

where I assume the "canonical" matching semantics for object rest patterns that would make _r an ordinary object (not an array) accumulating all properties of e not explicitly matched (even if e itself is an array, in which case _r includes a copy of e's length property). Of course, engines would optimize properly.

Right, but why the 3 passed to slice.call if _r captured all enumerable properties except those with ids 0, 1, and 2 (stringified, of course)?

Anyway, you've hit what I was advocating over the weekend as the answer to the pair of questions I posed: [no, no]. Lasse makes a good case for [yes, yes]. I still think we should argue about row capture in object patterns a bit before concluding. What do you think?

(But yes, row capture for objects introduces a form of object cloning, as Allen points out.)

Shallow, though. No closure cloning, e.g. Clone as curse-word shouldn't shoot this down without specific argument.

# Brendan Eich (13 years ago)

On Nov 7, 2011, at 10:03 AM, Allen Wirfs-Brock wrote:

True, "in" and "instanceof" don't follow the rules. I note that they were added in ES3 and I have to wonder if they aren't another case of features being added without sufficient thought being given to maintaining consistency of behavior throughout the language. I don't know, I wasn't there.

I was not there for those, either. I talked with the Netscape folks who were, though. The 'in' mismatch is even more vexing because IE was forcing at the time, ultimately to great success in ES5, all other implementations to treat for (i in null); and for (i in undefined); loops as zero-iteration non-errors.

'instanceof' is broken in a number of ways.

Let's not get into the ES3-era Object.prototype extensions, especially propertyIsEnumerable (which does not climb the prototype chain, whereas for/in and in do) or isPrototypeOf. Ok, I named 'em. Shutting up now.

Really I think this is more committee selection bias shift and a failure to review the whole to check various kinds of consistency. We need to do better, not saying we will or throwing stones backward in time here.

The same can be said for a few cases in the Object functions that were added for ES5. If I had the same depth of understanding of the internals of the language as I do now, I probably would have objected to those variances.

Yup. Evolution is like that.

# Andreas Rossberg (13 years ago)

On 7 November 2011 18:42, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

or    let [first,second] = "abc";

Yes, that's a more convincing example -- although we should probably be aware that users will then also do

let [x, y, ...s] = somestring

and expect it to slice a string efficiently.

# Andreas Rossberg (13 years ago)

On 7 November 2011 22:46, Brendan Eich <brendan at mozilla.com> wrote:

On Nov 7, 2011, at 3:04 AM, Andreas Rossberg wrote:

One possible semantics could be treating

let [x, y, z, ...r] = e

as equivalent to

let {0: x, 1: y, 2: z, ..._r} = e
let r = [].slice.call(_r, 3)

where I assume the "canonical" matching semantics for object rest patterns that would make _r an ordinary object (not an array) accumulating all properties of e not explicitly matched (even if e itself is an array, in which case _r includes a copy of e's length property). Of course, engines would optimize properly.

Right, but why the 3 passed to slice.call if _r captured all enumerable properties except those with ids 0, 1, and 2 (stringified, of course)?

I was assuming that we want

let [x, y, z, ...r] = [1, 2, 3, 4, 5]

to bind r to [4, 5]. For that to hold, you have to shift down the numeric indices in _r by 3, which is what the slice call was intended to do.
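Concretely, under the object-rest semantics I assumed:

// With let {0: x, 1: y, 2: z, ..._r} = [1, 2, 3, 4, 5],
// _r would be the ordinary object {3: 4, 4: 5, length: 5}
// (length is captured too, since it wasn't explicitly matched), and
[].slice.call({3: 4, 4: 5, length: 5}, 3); // [4, 5] -- indices shifted back down to 0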

At least that's the behaviour I'd expect from an array rest pattern, and Allen's earlier example in this thread seems consistent with this assumption. But looking at the proposal, I now see that it does not actually do the shifting. So now I'm confused about what the intended semantics actually is.

Anyway, you've hit what I was advocating over the weekend as the answer to the pair of questions I posed: [no, no]. Lasse makes a good case for [yes, yes].

The call to .slice implicitly reads the length, so it rather seems to implement [no, yes].

Using [no, no] would work, too, but requires a somewhat non-standard form of slicing. I have a slight preference for being consistent with the existing slicing semantics.

I don't like [yes, yes] that much. I prefer to view array patterns merely as straightforward sugar for object matching, and [yes, yes] kind of breaks that and puts more special cases into the language. So I'd actually turn around Lasse's argument. :)

I still think we should argue about row capture in object patterns a bit before concluding. What do you think?

Well, I think that row capture in object patterns is indeed a useful feature, esp for record-like use cases. I agree that shallow cloning isn't a big problem -- in any case, it's no worse than doing the same sort of cloning for array-like objects.

It also seems more consistent to me to have rest patterns in both forms. If object rows enable maintaining the "syntactic sugar" explanation for array patterns, then the overall result might even be a slightly simpler language.

# Brendan Eich (13 years ago)

On Nov 8, 2011, at 4:01 AM, Andreas Rossberg wrote:

I was assuming that we want

let [x, y, z, ...r] = [1, 2, 3, 4, 5]

to bind r to [4, 5]. For that to hold, you have to shift down the numeric indices in _r by 3, which is what the slice call was intended to do.

Gotcha, my mistake -- thanks.

Using [no, no] would work, too, but requires a somewhat non-standard form of slicing. I have a slight preference for being consistent with the existing slicing semantics.

Agreed, you're right again. (I am 0 for 2!)

I don't like [yes, yes] that much. I prefer to view array patterns merely as straightforward sugar for object matching, and [yes, yes] kind of breaks that and puts more special cases into the language. So I'd actually turn around Lasse's argument. :)

[yes, yes] does put more special casing and length-capturing-in-a-temp and bounds-checking, and that still makes me rebel against its consistency.

There's consistency in [no, no] too. Lasse's point that {0:x, 1:y} and [x, y] would be the same is not a mark against consistency, but against redundancy, or the lack of new semantics for the [x, y] pattern (viz, length filtering of the RHS properties). But that seems better to me on balance than the length sampling and bounding.

I still think we should argue about row capture in object patterns a bit before concluding. What do you think?

Well, I think that row capture in object patterns is indeed a useful feature, esp for record-like use cases. I agree that shallow cloning isn't a big problem -- in any case, it's no worse than doing the same sort of cloning for array-like objects.

That's my thinking too.

We see people building such things with for-in loops in libraries today. That's a signal amid the noise.

It also seems more consistent to me to have rest patterns in both forms. If object rows enable maintaining the "syntactic sugar" explanation for array patterns, then the overall result might even be a slightly simpler language.

Agreed. Thanks,

# David Herman (13 years ago)

Late to the party, but I've brought more booze.

On Nov 5, 2011, at 2:41 PM, Brendan Eich wrote:

We have:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

I feel strongly that the appropriate semantics is [no, yes].

Here's my reasoning. Arrays are a multi-purpose data structure in JS. Sometimes they are used for fixed-size tuples, and sometimes they are used for dynamic length arrays. (Similarly, objects are used both for fixed-size records and for dynamic size dictionaries.)

When you use a fixed-length tuple in JS, you do not query the .length property. When you use a dynamic-length array, you do.

When you use a fixed-size record in JS, you do not use object enumeration. When you use a dynamic-size dictionary in JS, you do.

Destructuring is meant to provide elegant syntax for all of these use cases. The syntax of [] destructuring is for fixed-length tuples if there is no ellipsis, and for dynamic-length arrays if there is an ellipsis. That's what the ellipsis is good for: distinguishing the case where you know statically how many elements you expect from the case where you don't.

More concretely, here's the rough desugaring I expect. I'll use 〰〰 as meta-ellipsis (thanks, Unicode!). I'll just specify the special case where each element is an identifier. It's straightforward to generalize to arbitrary nested destructuring patterns and hole patterns.

A pattern of the form

[a0, a1, 〰〰, ak]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];

A pattern of the form

[a0, a1, 〰〰, ak, ...r]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;
r = [ %v[i] for i of [k+1, 〰〰, %length - 1] if (i in %v) ];

This can be generalized further to allow a fixed number of patterns after the ellipsis as well:

A pattern of the form

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;
r = [ %v[i] for i of [k+1, 〰〰, %length - n - 2] if (i in %v) ];
bn = %v[%length - n - 1];
bn-1 = %v[%length - (n - 1) - 1];
〰〰
b0 = %v[%length - 0 - 1];
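A worked instance of the generalized form, to make the index arithmetic concrete (illustrative, not normative):

// [a, ...r, b1, b0] = [10, 20, 30, 40, 50], so k = 0, n = 1, %length = 5:
// a  = %v[0]               // 10
// r  = [%v[1], %v[2]]      // [20, 30] (indices k+1 .. %length - n - 2)
// b1 = %v[%length - 1 - 1] // %v[3] === 40
// b0 = %v[%length - 0 - 1] // %v[4] === 50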
# Brendan Eich (13 years ago)

On Nov 11, 2011, at 3:17 PM, David Herman wrote:

Late to the party, but I've brought more booze.

On Nov 5, 2011, at 2:41 PM, Brendan Eich wrote:

We have:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

I feel strongly that the appropriate semantics is [no, yes].

Here's my reasoning. Arrays are a multi-purpose data structure in JS. Sometimes they are used for fixed-size tuples, and sometimes they are used for dynamic length arrays. (Similarly, objects are used both for fixed-size records and for dynamic size dictionaries.)

When you use a fixed-length tuple in JS, you do not query the .length property. When you use a dynamic-length array, you do.

When you use a fixed-size record in JS, you do not use object enumeration. When you use a dynamic-size dictionary in JS, you do.

This is a really good point. It's the kind of consistency (among two pattern kinds and two use-cases) that we need to attend to, not a foolish consistency to always [[Get]] 'length' (or not).

I change my preferred answer from [no, no] or [no, _] to [no, yes].

# Allen Wirfs-Brock (13 years ago)

On Nov 11, 2011, at 3:17 PM, David Herman wrote:

Late to the party, but I've brought more booze.

On Nov 5, 2011, at 2:41 PM, Brendan Eich wrote:

We have:

  1. Should an array pattern always query 'length'?

  2. If the answer to (1) is "no", then should ... in an array pattern query 'length'?

On reflection and at this point in the thread, with your reply in mind, my prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm refutably matching [no, _] :-P.

I feel strongly that the appropriate semantics is [no, yes].

Pretty much the conclusion I also came to: On Nov 7, 2011, at 8:53 AM, Allen Wirfs-Brock wrote:

Yes, as I mentioned in another reply array [ ] access doesn't check length. But "slicing" operations including Array.prototype.slice and Function.prototype.apply (slices from 0 to length) all do use length.

That is why [no, yes] may actually be the most consistent approach.

...

A pattern of the form

[a0, a1, 〰〰, ak, ...r]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;

do we sample the length here or at the very beginning? It presumably only matters if a %v[n] is an accessor with side-effects that modify %v. Generally, the array functions sample length at the beginning before processing any elements.

r = [ %v[i] for i of [k+1, 〰〰, %length - 1] if (i in %v) ];

This can be generalized further to allow a fixed number of patterns after the ellipsis as well:

A pattern of the form

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

We currently haven't specified this syntactic form. I'm not sure if it adds enough value to justify the added conceptual complexity.

# Axel Rauschmayer (13 years ago)

A pattern of the form

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

We currently haven't specified this syntactic form. I'm not sure if it adds enough value to justify the added conceptual complexity.

Using this pattern for accessing elements at the end would be useful. For example:

[...r, b0, b1, b2] = arr means: assign the last three elements of arr to b0, b1, b2 (and assign everything before these elements to r)

It would be nice if r was optional: [..., b0, b1, b2] = arr

# David Herman (13 years ago)

On Nov 11, 2011, at 3:36 PM, Allen Wirfs-Brock wrote:

On Nov 11, 2011, at 3:17 PM, David Herman wrote:

A pattern of the form

[a0, a1, 〰〰, ak, ...r]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;

do we sample the length here or at the very beginning? It presumably only matter if a %v[n] is an accessor with side-effects that modify %v. Generally, the array functions sample length at the beginning before processing any elements.

Beginning seems fine to me.

This can be generalized further to allow a fixed number of patterns after the ellipsis as well:

A pattern of the form

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

We currently haven't specified this syntactic form. I'm not sure if it adds enough value to justify the added conceptual complexity.

I think it's a pretty big win, and I'd argue it's totally intuitive. The great thing about destructuring is that you can intuit the semantics without actually having to understand the details of the desugaring/semantics.

Also: we'll definitely want to allow it for splicing, so the grammar will have to allow it already, and symmetry/consistency argue for allowing it in destructuring too. Likewise for function formals and actuals.

# David Herman (13 years ago)

On Nov 11, 2011, at 4:23 PM, Axel Rauschmayer wrote:

It would be nice if r was optional: [..., b0, b1, b2] = arr

Agreed. Pure win, no downside.

# Axel Rauschmayer (13 years ago)

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

We currently haven't specified this syntactic form. I'm not sure if it adds enough value to justify the added conceptual complexity.

I think it's a pretty big win, and I'd argue it's totally intuitive. The great thing about destructuring is that you can intuit the semantics without actually having to understand the details of the desugaring/semantics.

Also: we'll definitely want to allow it for splicing, so the grammar will have to allow it already, and symmetry/consistency argue for allowing it in destructuring too. Likewise for function formals and actuals.

Using it for splicing suggests a construction analog:

let r = [2,3,4]
let arr = [0,1,...r, 5, 6, 7]

The grammar seems to support this, but I’ve never seen it in an example.

It might also be useful in parameter lists:

function foo(first, ...middle, last) { }

# David Herman (13 years ago)

On Nov 11, 2011, at 5:31 PM, Axel Rauschmayer wrote:

Also: we'll definitely want to allow it for splicing, so the grammar will have to allow it already, and symmetry/consistency argue for allowing it in destructuring too. Likewise for function formals and actuals.

Using it for splicing suggests a construction analog:

let r = [2,3,4]
let arr = [0,1,...r, 5, 6, 7]

That's what I meant by splicing.

The grammar seems to support this, but I’ve never seen it in an example.

I might also be useful in parameter lists:

function foo(first, ...middle, last) { }

That's what I meant by function formals and actuals.

# Brendan Eich (13 years ago)

On Nov 11, 2011, at 5:06 PM, David Herman <dherman at mozilla.com> wrote:

Also: we'll definitely want to allow it for splicing,

s/splicing/spread/

# Allen Wirfs-Brock (13 years ago)

On Nov 11, 2011, at 5:06 PM, David Herman wrote:

On Nov 11, 2011, at 3:36 PM, Allen Wirfs-Brock wrote:

... We currently haven't specified this syntactic form. I'm not sure if it adds enough value to justify the added conceptual complexity.

I think it's a pretty big win, and I'd argue it's totally intuitive. The great thing about destructuring is that you can intuit the semantics without actually having to understand the details of the desugaring/semantics.

Also: we'll definitely want to allow it for splicing, so the grammar will have to allow it already, and symmetry/consistency argue for allowing it in destructuring too. Likewise for function formals and actuals.

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea. Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

Similarly for formal parameters:

(function (a,b,...r,c,d) {return [a,b,c,d]})(1,2,3);

If it isn't obvious what it means, it probably shouldn't be in the language.

I believe implementers will generally want positional parameters to be at fixed stack offsets so constant offsets can be emitted into the instruction stream for referencing them. Allowing the above means that the offsets of c and d need to be dynamically determined for each invocation.

This sort of destructuring is not real pattern matching. Non-trailing rests seem like pushing things too far. It goes beyond reasonable utility and comprehensibility into something whose semantics is non-obvious and which may negatively impact implementations.

# Claus Reinke (13 years ago)

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea. Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

.. This sort of destructuring is not real pattern matching. ..

I think this is the important point. Many of us who like "destructuring" are still being mislead by the name, and by our experience with proper pattern matching constructs.

  • please rename the current state to "selection shorthand"
  • no matter the name, the differences between construction and deconstruction cannot be good

Btw, I've known at least one language where one could write

[...,['key',value],...] = some_dictionary_array

to get key-value lookups. Again, this depends crucially on proper pattern-matching semantics (this used match failure and fallback within a single pattern), even though it is "just" a lhs variant of what is permitted on the rhs.

By specifying only a simplified form of selection shorthand, while occupying the syntactic grounds belonging to pattern matching, we are moving on shaky ground.

Claus clausreinke.github.com, clausreinke.github.com/js-tools

# Axel Rauschmayer (13 years ago)

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea.

I think it would be nice to express “last element(s)", e.g. in function parameters (where the callback is usually at the end):

 foo(arg1, arg2, ..., argn, callback)

Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

That seems related to [a, b, c, d] = [1,2,3] and to [a,b,...r,c,d] = [1,2,3,4]

Hence, I would guess: a === 1, b === 2, r === [], c === 3, d === undefined

I agree with Claus Reinke that naming is tricky: Is destructuring assignment the reverse of invoking a constructor? Is it a destructor, then? But that clashes with C++ terminology.

# Brendan Eich (13 years ago)

On Nov 12, 2011, at 12:48 AM, Claus Reinke wrote:

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea. Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

.. This sort of destructuring is not real pattern matching. ..

I think this is the important point. Many of us who like "destructuring" are still being misled by the name, and by our experience with proper pattern matching constructs.

You just used a different term, "pattern matching", than "destructuring" -- so why are you misled?

  • please rename the current state to "selection shorthand"

No thanks.

# Brendan Eich (13 years ago)

On Nov 12, 2011, at 2:07 AM, Axel Rauschmayer wrote:

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea.

I think it would be nice to express “last element(s)”, e.g. in function parameters (where the callback is usually at the end):

 foo(arg1, arg2, ..., argn, callback)

Yes.

Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

That seems related to [a, b, c, d] = [1,2,3] and to [a,b,...r,c,d] = [1,2,3,4]

Hence, I would guess: a === 1, b === 2, r === [], c === 3, d === undefined

Why guess? There is no requirement that ...r consume 1 element, but there is a requirement in Dave's desugaring that trailing non-rest element patterns consume elements up to the one indexed by rhs.length - 1.
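One plausible reading of that rule, written out for the pattern [a,b,...r,c,d] (an illustration only, not text from any draft or from Dave's actual desugaring):

function middleRest(rhs) {
  var len = rhs.length;
  var a = rhs[0], b = rhs[1];          // leading names count from the front
  var d = rhs[len - 1];                // trailing names count from the back,
  var c = rhs[len - 2];                // up to index rhs.length - 1
  var r = rhs.slice(2, Math.max(2, len - 2)); // rest takes the middle, possibly []
  return {a: a, b: b, r: r, c: c, d: d};
}

// middleRest([1,2,3,4,5]) gives {a:1, b:2, r:[3], c:4, d:5};
// middleRest([1,2,3]) gives {a:1, b:2, r:[], c:2, d:3} -- on this reading a
// short rhs makes b and c read the same element, which is exactly the kind
// of surprise under discussion.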

I agree with Claus Reinke that naming is tricky: Is destructuring assignment the reverse of invoking a constructor?

No, and there's no antonym there: construct <=> destruct, not destructure.

Is it a destructor, then? But that clashes with C++ terminology.

Of course.

Structuring <=> Destructuring. When you write object literals, they desugared in ES3 to assignments following a construction via new Object or new Array (with all the vulnerabilities that looking up Object and Array in the current scope, and running prototype setters, entail). In ES5 the initialisers use [[DefineOwnProperty]], an important fix. There's no good jargon for the ensemble effect, though: "object / array literal" or "object / array initialiser" (UK spelling) is too much. "Structuring" is what "destructuring" suggests, and it works for me.

# Claus Reinke (13 years ago)

This sort of destructuring is not real pattern matching. ..

I think this is the important point. Many of us who like "destructuring" are still being misled by the name, and by our experience with proper pattern matching constructs.

You just used a different term, "pattern matching", than "destructuring" -- so why are you misled?

These terms are not synonyms.

"pattern matching" usually involves matching patterns against something, where matching may succeed or fail. If matching succeeds, we get to "destructuring", which is looking at a structure and taking it apart.

The ES.next destructuring proposal does neither the matching (patterns are irrefutable, cf. the pattern matching strawman) nor the looking at the structure (for something to match an Array pattern, it can be an Array, it can be Array-like, or it may just have some numeric fields; primitive constant patterns are not permitted, and structural mismatches such as missing object fields just lead to default assignments, ...). It only does the taking-apart bit - so it is a shorthand for property selectors and slices.
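For instance, under the property-selection semantics discussed in this thread (hedged examples of the behaviour just described, not quotations from the draft):

// no match failure: missing positions and fields just yield undefined
let [x, y] = {0: "zero", length: 1};   // x === "zero", y === undefined
let {a, b} = {a: 1};                   // a === 1, b === undefined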

Those of us who are used to pattern matching and its long history in programming languages are tempted to see more in this feature. As are those of us who find symmetry between expressions and patterns appealing from a language design perspective. Because of this, we are tempted to make suggestions that do not fit the scope of the current proposal.

As long as the pattern matching strawman is not on the table, we may have to restrain ourselves, though it is important that the currently accepted destructuring proposal is spec-ed out in a form that remains future-compatible with pattern matching.

  • please rename the current state to "selection shorthand"

No thanks.

Other suggestions? We've already seen destructuring being confused with object deconstructors; we are talking about destructuring as if we had plain data structures (see the data structures strawmen) instead of objects with prototype chains. And we agree (see the pattern matching strawman) that destructuring is not pattern matching.

The fact that we have talked about prototype chains being unobservable to destructuring is a clear hint that it is not looking at the structure, but is using abstract object property selection APIs instead (objects, not records or object literals).

Claus clausreinke.github.com, clausreinke.github.com/js-tools

# Axel Rauschmayer (13 years ago)

Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

That seems related to [a, b, c, d] = [1,2,3] and to [a,b,...r,c,d] = [1,2,3,4]

Hence, I would guess: a === 1, b === 2, r === [], c === 3, d === undefined

Why guess?

I’m not sure I understand.

There is no requirement that ...r consume 1 element,

It consumes 0 elements above.

but there is a requirement in Dave's desugaring that trailing non-rest element patterns consume elements up to the one indexed by rhs.length - 1.

But that doesn’t help if there are more trailing elements in the pattern than in the array to be destructured.

My definition would be (sketched in code after the list):

[a1, ⋯ , an, ...r, b1, ⋯, bm] = [c1, ⋯, cl]

  • n+m > l: r is bound to [], the non-rest pattern variables are bound as if there were no rest element: [a1, ⋯ , an, b1, ⋯, bm] = [c1, ⋯, cl]

  • Otherwise r is bound to an array [r1, ⋯, rk] (with k = l-(n+m)) whose elements are determined via [a1, ⋯ , an, r1, ⋯, rk, b1, ⋯, bm] = [c1, ⋯, cl]
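A minimal sketch of that definition, for a pattern with n leading names, one rest, and m trailing names (bindWithRest is a hypothetical helper, not proposed syntax):

function bindWithRest(n, m, rhs) {
  var l = rhs.length;
  if (n + m > l) {
    // case 1: too few elements -- rest is [], the other positions bind
    // left to right; positions past the end come out undefined as usual
    return {leading: rhs.slice(0, n), rest: [], trailing: rhs.slice(n, n + m)};
  }
  // case 2: rest absorbs the middle l - (n + m) elements
  return {leading: rhs.slice(0, n), rest: rhs.slice(n, l - m), trailing: rhs.slice(l - m)};
}

// bindWithRest(2, 2, [1,2,3]) gives {leading: [1,2], rest: [], trailing: [3]},
// i.e. a === 1, b === 2, r === [], c === 3, d === undefined, as guessed above.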

# Brendan Eich (13 years ago)

On Nov 12, 2011, at 4:25 PM, Axel Rauschmayer wrote:

There is no requirement that ...r consume 1 element,

It consumes 0 elements above.

Sorry, misread your example -- we seem to agree that if there aren't enough elements, r gets a fresh empty array.

# Allen Wirfs-Brock (13 years ago)

On Nov 12, 2011, at 12:02 PM, Brendan Eich wrote:

On Nov 12, 2011, at 2:07 AM, Axel Rauschmayer wrote:

Embedded spreads are already in the ES6 draft for both array literals and argument lists. I'm not at all sure that embedded rests in destructurings are such a good idea.

I think it would be nice to express “last element(s)", e.g. in function parameters (where the callback is usually at the end):

foo(arg1, arg2, ..., argn, callback)

Yes.

Aside from the "what does it mean" issue (more below), there is still the issue that I believe implementations will continue to prefer statically assignable stack offsets for formal parameters. E.g., in pseudo-assembly language (with infinite registers), you would normally access the parameters of function(arg1,arg2,...rest) with something like:

load Rp,(framePointer)+arg1Offset  //load value of arg1
load Rq,(framePointer)+arg2Offset  //load value of arg2
load Rr,(framePointer)-restLocalOffset  //rest is a local var, probably pointing to an array, initialized by the function prolog or maybe some other optimization

Allowing non-trailing rest parameters, as in function(arg1, ...rest, arg2, arg3), forces something more like:

load Rp,(framePointer)+arg1Offset  //load value of arg1
load Rr,(framePointer)-restLocalOffset  //rest is a local var, initialized by the function prolog or maybe some other optimization

// next 3 instructions required to access value of arg2
load Rx,(framePointer)+argCountOffset  //get length of actual arg list
lea Ry,(framePointer,Rx)+arg1Offset  //compute address of last argument
load Rq,(Ry)-arg2ReverseOffset  //load value of arg2

It's doable, but it is an additional implementation complication with a runtime cost.
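The same cost in JavaScript terms (a hedged illustration of the point, not engine code): with a non-trailing rest, the offsets of the trailing formals depend on arguments.length and so must be recomputed at every call.

// what function(arg1, ...rest, arg2, arg3) would in effect require
function f() {
  var len = arguments.length;
  var arg1 = arguments[0];        // fixed offset, cheap
  var arg3 = arguments[len - 1];  // offset depends on this call's length
  var arg2 = arguments[len - 2];  // ditto
  var rest = Array.prototype.slice.call(arguments, 1, Math.max(1, len - 2));
  return [arg1, rest, arg2, arg3];
}

// f(1, 2, 3, 4, 5) yields [1, [2, 3], 4, 5]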

Just to start with, what does this mean:

[a,b,...r,c,d] = [1,2,3]

That seems related to [a, b, c, d] = [1,2,3] and to [a,b,...r,c,d] = [1,2,3,4]

Hence, I would guess: a === 1, b === 2, r === [], c === 3, d === undefined

Why guess? There is no requirement that ...r consume 1 element, but there is a requirement in Dave's desugaring that trailing non-rest element patterns consume elements up to the one indexed by rhs.length - 1.

I originally said "If it isn't obvious what it means, it probably shouldn't be in the language."

Guessing comes into play when there are multiple reasonable interpretations and one has been "arbitrarily" chosen by the language designers. As a new user of the feature, or even just an infrequent user, I can't reliably reason out what it means; I have to learn and remember the language designers' choice.

If rests only come at the end, then it is easy to learn, remember, or even reason out what it means.

If rests come in the middle, I have to know "Dave's Algorithm" to understand what the code means. There are other reasonable algorithms, so if I don't specifically remember Dave's, I have to "guess". I might guess wrong.

It isn't obvious to me that the additional language complexity carries its weight in utility. Trailing rests are easy to understand and undoubtedly the most frequent use case. Leading rests are also fairly easy to understand (but note the formal parameter implementation issue) and are probably the next most common. Interior rests make things complicated and have even fewer uses.
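For contrast, the trailing and leading cases each admit only one sensible reading (hedged sketches of those readings, not draft text):

function splitTrailing(arr) { // [first, ...rest] = arr
  return {first: arr[0], rest: arr.slice(1)};
}

function splitLeading(arr) {  // [...init, last] = arr
  return {init: arr.slice(0, Math.max(0, arr.length - 1)), last: arr[arr.length - 1]};
}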

# Brendan Eich (13 years ago)

On Nov 14, 2011, at 9:49 AM, Allen Wirfs-Brock wrote:

It's doable, but it is an additional implementation complication with a runtime cost.

I felt this pain too, and even for the [a, ...r, b] case it's more pain. But especially for function parameters.

We could leave it out. Symmetry fans (I'm one but there are competing dimensions) will have to wait.

Other implementors (Oliver!) should weigh in.