Set iterators

# Peter Michaux (12 years ago)

In the proposal, iterators for Set are listed as todo. If engine implementers have decided to start moving forward with implementing Sets, then it would be great if they could get iteration going sooner rather than later.

harmony:simple_maps_and_sets

Looking at the Array iterators...

It doesn't seem very difficult to specify "forEach" as a proposal sketch. It just has to do the following, doesn't it?

var s = new Set(); s.add('alpha'); s.add('beta'); s.forEach(function(element){});

"reduce", "every", and "some" seem similarly easy. "reduceRight" is unnecessary as a set has no order.

"filter" and "map" are similarly easy but would return set objects.

Peter

# Adam Shannon (12 years ago)

Why not include iterators like zip, zipWith, foldLeft, and the like as well?

# Brendan Eich (12 years ago)

Adam Shannon wrote:

Why not include iterators like zip, zipWith, foldLeft, and the like as well?

foldLeft is called reduce around these parts ;-).

We do need a good suite of iterators including zip, etc. Let's have a proposal here, we can refine it quickly. Thanks,
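For illustration, here is a hedged sketch of what a generator-based zip/zipWith could look like. The names are placeholders, and it uses the iterator protocol as it was eventually standardized (the protocol itself was still an open design question at the time):

```js
// Sketch only: zip stops when the shortest input is exhausted.
function* zip(...iterables) {
  const iterators = iterables.map(it => it[Symbol.iterator]());
  if (iterators.length === 0) return;          // nothing to zip
  while (true) {
    const results = iterators.map(it => it.next());
    if (results.some(r => r.done)) return;
    yield results.map(r => r.value);
  }
}

// zipWith is then just zip plus a combining function.
function* zipWith(fn, ...iterables) {
  for (const values of zip(...iterables)) {
    yield fn(...values);
  }
}

// [...zip([1, 2, 3], 'ab')]                       -> [[1, 'a'], [2, 'b']]
// [...zipWith((a, b) => a + b, [1, 2], [10, 20])] -> [11, 22]
```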

# Dmitry Soshnikov (12 years ago)

On Feb 12, 2012, at 9:17 PM, Brendan Eich wrote:

Adam Shannon wrote:

Why not include iterators like zip, zipWith, foldLeft, and the like as well?

foldLeft is called reduce around these parts ;-).

We do need a good suite of iterators including zip, etc. Let's have a proposal here, we can refine it quickly.

True; a small note: Erlang has a good standard library for lists, including iterators (www.erlang.org/doc/man/lists.html); we can consider it and borrow some.

Dmitry

# Axel Rauschmayer (12 years ago)

We do need a good suite of iterators including zip, etc. Let's have a proposal here, we can refine it quickly.

True; a small note: Erlang has a good standard library for lists, including iterators (www.erlang.org/doc/man/lists.html); we can consider it and borrow some.

Python has itertools: docs.python.org/py3k/library/itertools.html

This falls into the general category of “rounding out the standard library”. I would love to have most of what Underscore has, too.

# Rick Waldron (12 years ago)

As a starting point, would it make sense to provide an API that matches Array.prototype methods (where applicable)?

# Domenic Denicola (12 years ago)

As a starting point, would it make sense to provide an API that matches Array.prototype methods (where applicable)?

I think this ties in to my earlier desire to make Array.prototype methods work with iterators in general [1]. That is, I think the correct way to do this would be:

  1. To specify that "set is an iterator that, when iterated over, returns its values."
  2. To specify that all iterators get (at least): every, filter, forEach, map, reduce, reduceRight, some

One potential sticking point is that we might want filter and map to return sets themselves.
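To make the sticking point concrete, here is a hedged sketch of the two options for map, again written with the later-standardized protocol purely for illustration: a generic lazy map that works on any iterator but does not itself return a Set, versus a Set-aware map that does.

```js
// Option 1: a generic map over any iterable, returning a lazy iterator.
function* mapIterator(iterable, fn) {
  for (const value of iterable) yield fn(value);
}

// Option 2: a Set-specific map that collects the results into a new Set.
// The generic protocol alone cannot know to do this, which is the sticking
// point above.
function mapSet(set, fn) {
  const result = new Set();
  for (const value of set) result.add(fn(value));
  return result;
}

// [...mapIterator(new Set([1, 2, 3]), x => x * 2)] -> [2, 4, 6] (an Array)
// mapSet(new Set([1, 2, 3]), x => x * 2)           -> Set {2, 4, 6}
```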

# Rick Waldron (12 years ago)

On Mon, Feb 13, 2012 at 10:55 AM, Domenic Denicola <domenic at domenicdenicola.com> wrote:

As a starting point, would it make sense to provide an API that matches Array.prototype methods (where applicable)?

I think this ties in to my earlier desire to make Array.prototype methods work with iterators in general [1]. That is, I think the correct way to do this would be:

  1. To specify that "set is an iterator that, when iterated over, returns its values."
  2. To specify that all iterators get (at least): every, filter, forEach, map, reduce, reduceRight, some

+1 This is very intuitive and the sort of thing that real-world developers will be expecting

One potential sticking point is that we might want filter and map to return sets themselves.

Not sure if this is a "sticking point" or just a good idea. I'm leaning towards the latter :)

# Allen Wirfs-Brock (12 years ago)

On Feb 12, 2012, at 4:52 PM, Peter Michaux wrote:

In the proposal, iterators for Set are listed as todo. If engine implementers have decided to start moving forward with implementing Sets, then it would be great if they could get iteration going sooner rather than later.

harmony:simple_maps_and_sets

Before getting too deep into iteration protocol for Sets (and Maps) there is a more fundamental issue: Will Set define a standard, implementation-independent ordering of elements? If so, what is the basis for the ordering?

Is it iteration order? If so, this will likely add space overhead to the internal representation of Set and Map and/or time overhead to insert/delete operations. Also, for specializations of Set such as Integer Sets, insertion order may not be the most desirable iteration ordering.

# Mark S. Miller (12 years ago)

[+tjclose]

On Mon, Feb 13, 2012 at 5:32 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Before getting too deep into iteration protocol for Sets (and Maps) there is a more fundamental issue: Will Set define a standard, implementation-independent ordering of elements? If so, what is the basis for the ordering?

Yes. Insertion order.

Is it iteration order?

Well yes by definition ;). Did you mean to ask if it is insertion order? Yes.

If so, this will likely add space overhead to the internal representation of Set and Map and/or time overhead to insert/delete operations.

Tyler Close previously posted a deterministic hash table implementation, with no extra space or time overhead. Before I saw it I thought it impossible.

Also, for specializations of Set such as Integer Sets, insertion order may not be the most desirable iteration ordering.

That's a good point. Perhaps we can say that the abstract Map and Set contract merely demands that each concrete kind of Map or Set specify how their iteration order depends on their inputs. The default Maps and Sets can use insertion order (and typically be implemented using Tyler's algorithm.)
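For readers who do not have Tyler's post, here is a simplified sketch of the general shape of an insertion-ordered hash set: values live in a dense array in insertion order, and the hash part only maps keys to indices into that array, so lookup stays O(1) while iteration is just a walk of the array. This is not claimed to be Tyler's actual algorithm, it is written in present-day JavaScript, and it keys elements by String(value) purely to stay short.

```js
const DELETED = Symbol('deleted');      // tombstone left behind by delete()

class OrderedHashSet {
  constructor() {
    this.entries = [];                  // values, in insertion order
    this.index = Object.create(null);   // String(value) -> index into entries
  }
  has(value) {
    return String(value) in this.index;
  }
  add(value) {
    const key = String(value);
    if (!(key in this.index)) {         // re-adding an existing value is a no-op
      this.index[key] = this.entries.length;
      this.entries.push(value);
    }
    return this;
  }
  delete(value) {
    const key = String(value);
    if (!(key in this.index)) return false;
    this.entries[this.index[key]] = DELETED;
    delete this.index[key];
    return true;
  }
  forEach(callback) {
    for (const value of this.entries) {
      if (value !== DELETED) callback(value);
    }
  }
}
```

Deletions leave tombstones that iteration skips (a real implementation would compact them eventually); the only point here is that insertion-order iteration does not require threading a linked list through the hash buckets.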

# Allen Wirfs-Brock (12 years ago)

On Feb 13, 2012, at 6:03 PM, Mark S. Miller wrote:

[+tjclose]

On Mon, Feb 13, 2012 at 5:32 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Before getting too deep into iteration protocol for Sets (and Maps) there is a more fundamental issue: Will Set define a standard, implementation-independent ordering of elements? If so, what is the basis for the ordering?

Yes. Insertion order.

Is it iteration order?

Well yes by definition ;). Did you mean to ask if it is insertion order? Yes.

yes

If so, this will likely add space overhead to the internal representation of Set and Map and/or time overhead to insert/delete operations.

Tyler Close previously posted a deterministic hash table implementation, with no extra space or time overhead. Before I saw it I thought it impossible.

Also, for specializations of Set such as Integer Sets, insertion order may not be the most desirable iteration ordering.

do you have the link to that post?

That's a good point. Perhaps we can say that the abstract Map and Set contract merely demands that each concrete kind of Map or Set specify how their iteration order depends on their inputs. The default Maps and Sets can use insertion order (and typically be implemented using Tyler's algorithm.)

I suspect that requiring a consistent or implementation-independent iteration order for user-written concrete Maps or Sets would be an unenforceable requirement.

# Jason Orendorff (12 years ago)

On Mon, Feb 13, 2012 at 8:03 PM, Mark S. Miller <erights at google.com> wrote:

Tyler Close previously posted a deterministic hash table implementation, with no extra space or time overhead. Before I saw it I thought it impossible.

This email went to a small group rather than to es-discuss. It would be great to open this up and test the performance claims.

# Adam Shannon (12 years ago)

I thought that Set wasn't even going to have insertion order as a possibility. The idea behind any Set (outside of ES, even) is that it is just a collection of elements, unordered.

# Peter Michaux (12 years ago)

I expected a set would have an undefined iteration order to give implementations the opportunity to make optimizations that maintaining order would not allow.

# Allen Wirfs-Brock (12 years ago)

On Feb 13, 2012, at 8:14 PM, Peter Michaux wrote:

I expected a set would have an undefined iteration order to give implementations the opportunity to make optimizations that maintaining order would not allow.

I would normally agree with you. ECMAScript for-in property enumeration order is defined in exactly that manner. However, the experience with JavaScript in browsers is that there is a significant amount of code that has been developed that depends upon a particular enumeration order that is widely used in browsers. People write code with dependencies (often unintentional) upon the enumeration order used by their favorite browser. If their code breaks when it is run on other browsers, they complain. Over time, this is a force that pushes browsers with less market share to match the implementation decisions of browsers with more market share. This happened with for-in enumeration and I expect it will happen with Set/Map iteration order if we leave it unspecified.

# Michael A. Smith (12 years ago)

On Mon, Feb 13, 2012 at 11:59 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

I would normally agree with you. ECMAScript for-in property enumeration order is defined in exactly that manner. However, the experience with JavaScript in browsers is that there is a significant amount of code that has been developed that depends upon a particular enumeration order that is widely used in browsers. People write code with dependencies (often unintentional) upon the enumeration order used by their favorite browser. If their code breaks when it is run on other browsers, they complain. Over time, this is a force that pushes browsers with less market share to match the implementation decisions of browsers with more market share. This happened with for-in enumeration and I expect it will happen with Set/Map iteration order if we leave it unspecified.

Browsers are not the only target here. Are you saying you expect this behavior from non-browser implementations as well? Even if you do, I am not very convinced this is a solid reason to specify ordering at the implementation level for a data type that does not fundamentally imply an ordering.

But hypothetically let's say we do want to go with insertion-order. In that case, what happens to the order when you attempt to insert an element that already exists in the set?

OrderedSet('a', 'b', 'c').insert('b')

Is that OrderedSet('a', 'b', 'c') or is it OrderedSet('a', 'c', 'b')?
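For what it's worth, the two candidate answers correspond to two equally simple shapes of add on an insertion-ordered set; the append helper below is hypothetical, standing in for whatever internal operation places an element at the end.

```js
// "First insertion wins" -> OrderedSet('a', 'b', 'c'):
// adding an element that is already present is a no-op, so it keeps its place.
function addKeepPosition(set, value) {
  if (!set.has(value)) set.append(value);   // hypothetical internal append
}

// "Last insertion wins" -> OrderedSet('a', 'c', 'b'):
// remove, then re-append, so the element moves to the end.
function addMoveToEnd(set, value) {
  set.delete(value);
  set.append(value);                        // same hypothetical internal append
}
```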

-Michael A. Smith

# Jason Orendorff (12 years ago)

On Mon, Feb 13, 2012 at 10:59 PM, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

I would normally agree with you. ECMAScript for-in property enumeration order is defined in exactly that manner. However, the experience with JavaScript in browsers is that there is a significant amount of code that has been developed that depends upon a particular enumeration order that is widely used in browsers. People write code with dependencies (often unintentional) upon the enumeration order used by their favorite browser. If their code breaks when it is run on other browsers, they complain. Over time, this is a force that pushes browsers with less market share to match the implementation decisions of browsers with more market share. This happened with for-in enumeration and I expect it will happen with Set/Map iteration order if we leave it unspecified.

That is too pessimistic. With for-in enumeration, the initial implementations chose insertion order, which was (a) not arbitrary, (b) easy for users to observe and understand, and (c) consistent across browsers.

Unless TC39 specifies otherwise, the enumeration order of Map and Set will be arbitrary, it will certainly be inconsistent across browsers, and it will most likely even include a random component per Map/Set object.

It is very hard to see how any code could depend on a particular implementation's enumeration order if it is randomized.

# Erik Arvidsson (12 years ago)

On Mon, Feb 13, 2012 at 21:25, Jason Orendorff <jason.orendorff at gmail.com> wrote:

Unless TC39 specifies otherwise, the enumeration order of Map and Set will be arbitrary, it will certainly be inconsistent across browsers, and it will most likely even include a random component per Map/Set object.

It is very hard to see how any code could depend on a particular implementation's enumeration order if it is randomized.

I think we should be careful not to specify the iteration order and we should make sure that the first two implementations intentionally do not agree on the ordering.

This is our one-time chance to get this right and we don't want to paint ourselves into another corner with Map/Set iteration order.

# Mark S. Miller (12 years ago)

[+tjclose]

There are many benefits to determinism. E started with non-deterministic iteration order, which opens a covert channel hazard. I initially changed to deterministic order merely to plug this leak. Having done so, I found it had many software engineering benefits. For example, it becomes much easier to write regression tests and to reproduce bugs by re-execution. In my implementation, it also had a minor additional space and time cost. Tyler's Waterken tables show that even the minor runtime costs I was paying were unnecessary.

Let's not introduce yet another source of non-determinism for the sake of an unmeasured efficiency. Let's measure, and if the cost turns out to be high after all, then let's reconsider determinism.

Premature optimization is....

# Andrea Giammarchi (12 years ago)

Map and Set are not accessed by index, yet the iterator sends the index of the key/value as the last argument ... I would say either this index is not sent at all or, since it is present, it should be specified somehow in the spec.

talking about this

Map#iterate(callback:Function, context:void*):void ==>

callback.call(context, key, value, index)

br

# David Bruant (12 years ago)

On 14/02/2012 07:22, Erik Arvidsson wrote:

On Mon, Feb 13, 2012 at 21:25, Jason Orendorff <jason.orendorff at gmail.com> wrote:

Unless TC39 specifies otherwise, the enumeration order of Map and Set will be arbitrary, it will certainly be inconsistent across browsers, and it will most likely even include a random component per Map/Set object.

It is very hard to see how any code could depend on a particular implementation's enumeration order if it is randomized.

I think we should be careful not to specify the iteration order and we should make sure that the first two implementations intentionally do not agree on the ordering.

This is our one-time chance to get this right and we don't want to paint ourselves into another corner with Map/Set iteration order.

It cannot be a "one time chance". This property (implementations not agreeing) would need to be followed, because if at some point, some implementations do agree, we'll be back to where we started and people will start relying on enumeration order.

Moreover, maybe the first two implementations will disagree, but maybe some implementations will agree (either in all cases or in some observable cases that people will use in their code).

For instance, what if Firefox and Chrome disagree, but iPhone Safari and Android WebKit agree? Also, some products (Node.js (V8), MongoDB (SpiderMonkey), etc.) rely on only one JS engine, so JS code written for these could rely on the particular order of the given implementation. If that is the case, it will force these implementations to keep their order for backward-compatibility's sake. Worst case, two incompatible implementations are each forced to keep their different orders because products built on top of them rely on those particular orders, making it impossible to later specify a single order.

Determinism makes JavaScript code more interoperable.

# Andreas Rossberg (12 years ago)

On 14 February 2012 09:47, David Bruant <bruant.d at gmail.com> wrote:

For instance, what if Firefox and Chrome disagree, but iPhone Safari and Android WebKit agree? Also, some products (Node.js (V8), MongoDB (SpiderMonkey), etc.) rely on only one JS engine, so JS code written for these could rely on the particular order of the given implementation. If that is the case, it will force these implementations to keep their order for backward-compatibility's sake. Worst case, two incompatible implementations are each forced to keep their different orders because products built on top of them rely on those particular orders, making it impossible to later specify a single order.

To be sure, this is assuming that iteration order is fixed for a given implementation. If order is not specified, then I don't see why that should be required either. I.e., a completely randomized order (per iteration) should be valid, too.

And I see potential reasons why order might differ for separate iterations over the same collection.

Determinism makes JavaScript code more interoperable.

I tend to agree. Underspecification in language definitions is a great source for sleeper bugs.

# David Bruant (12 years ago)

On 14/02/2012 11:23, Andreas Rossberg wrote:

On 14 February 2012 09:47, David Bruant <bruant.d at gmail.com> wrote:

For instance, what if Firefox and Chrome disagree, but iPhone Safari and Android WebKit agree? Also, some products (Node.js (V8), MongoDB (SpiderMonkey), etc.) rely on only one JS engine, so JS code written for these could rely on the particular order of the given implementation. If that is the case, it will force these implementations to keep their order for backward-compatibility's sake. Worst case, two incompatible implementations are each forced to keep their different orders because products built on top of them rely on those particular orders, making it impossible to later specify a single order.

To be sure, this is assuming that iteration order is fixed for a given implementation. If order is not specified, then I don't see why that should be required either.

It is not required, but it's what experience tells us from the for-in loop. Spec said it was impl-specific, but implementations mostly implemented iteration order as insertion order.

Regardless of requirement, if an implementation gets to a point where, in the cases people observe, the order looks deterministic, then they may assume it is, start relying on it, and force the implementation to keep this order.

Requiring a fixed iteration order, the same for all implementations, saves us from these issues.

I.e., a completely randomized order (per iteration) should be valid, too.

And I see potential reasons why order might differ for separate iterations over the same collection.

I'm interested in hearing more :-)

# Andreas Rossberg (12 years ago)

On 14 February 2012 12:02, David Bruant <bruant.d at gmail.com> wrote:

On 14/02/2012 11:23, Andreas Rossberg wrote:

To be sure, this is assuming that iteration order is fixed for a given implementation. If order is not specified, then I don't see why that should be required either.

It is not required, but it's what experience tells us from the for-in loop.

Doesn't experience rather tell us that people expect a specific enumeration order, not just some fixed one?

And I see potential reasons why order might differ for separate iterations over the same collection.

I'm interested in hearing more :-)

Dynamic changes of representation, for example. V8 does things like that all the time. And it currently goes to some length to make for-in deterministic.

(But just to be clear, I'm still in favour of fully specified behaviour.)

# Allen Wirfs-Brock (12 years ago)

On Feb 14, 2012, at 3:45 AM, Andreas Rossberg wrote:

On 14 February 2012 12:02, David Bruant <bruant.d at gmail.com> wrote:

On 14/02/2012 11:23, Andreas Rossberg wrote:

And I see potential reasons why order might differ for separate iterations over the same collection.

I'm interested in hearing more :-)

Dynamic changes of representation, for example. V8 does things like that all the time. And it currently goes to some length to make for-in deterministic.

Good hash table designs typically rehash (reorganize) themselves when they reach a certain percentage of their total capacity or are experiencing too many hash collisions. One of the simplest iteration strategies for a hash table is often physical placement order. Rehashing will typically change the physical placement of entries and hence that ordering.
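A toy example of that effect, with integer elements and linear probing; this is not meant to model any real engine, only to show that slot-order iteration changes across a rehash even though the set's contents do not.

```js
// Toy open-addressing set: iteration is "physical placement order", i.e. a
// walk of the slots, so a rehash reorders iteration without any add/delete.
class ToySlotSet {
  constructor(capacity) {
    this.slots = new Array(capacity).fill(undefined);
  }
  add(n) {
    let i = n % this.slots.length;
    while (this.slots[i] !== undefined && this.slots[i] !== n) {
      i = (i + 1) % this.slots.length;      // linear probing on collision
    }
    this.slots[i] = n;
  }
  values() {                                 // physical placement order
    return this.slots.filter(v => v !== undefined);
  }
  rehash(newCapacity) {
    const old = this.values();
    this.slots = new Array(newCapacity).fill(undefined);
    old.forEach(n => this.add(n));
  }
}

const t = new ToySlotSet(4);
[3, 6, 10].forEach(n => t.add(n));
t.values();   // [10, 6, 3]: 10 collided with 6 and wrapped around to slot 0

t.rehash(8);  // same elements, larger table
t.values();   // [10, 3, 6]: physical placement, and thus iteration, changed
```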

# Mark S. Miller (12 years ago)

I just happened to see <lists.w3.org/Archives/Public/public-script-coord/2012JanMar/0157.html>, which shows that other web standards efforts continue to struggle to identify and quash other sources of non-determinism in web standards. If unspecified iteration order is so good, wouldn't the same reasoning apply to bugs like that?

# Andreas Rossberg (12 years ago)

On 14 February 2012 18:15, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Feb 14, 2012, at 3:45 AM, Andreas Rossberg wrote:

Dynamic changes of representation, for example. V8 does things like that all the time. And it currently goes to some length to make for-in deterministic.

Good hash table designs typically rehash (reorganize) themselves when they reach a certain percentage of their total capacity or are experiencing too many hash collisions. One of the simplest  iteration strategies for a hash table is often physical placement order. Rehashing will typically change the physical placement of entries and hence that ordering.

Indeed. But I had in mind more radical changes like going from e.g. a dense vector representation to a hashtable or a binary search tree.

# Allen Wirfs-Brock (12 years ago)

On Feb 14, 2012, at 9:35 AM, Andreas Rossberg wrote:

On 14 February 2012 18:15, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Feb 14, 2012, at 3:45 AM, Andreas Rossberg wrote:

Dynamic changes of representation, for example. V8 does things like that all the time. And it currently goes to some length to make for-in deterministic.

Good hash table designs typically rehash (reorganize) themselves when they reach a certain percentage of their total capacity or are experiencing too many hash collisions. One of the simplest iteration strategies for a hash table is often physical placement order. Rehashing will typically change the physical placement of entries and hence that ordering.

Indeed. But I had in mind more radical changes like going from e.g. a dense vector representation to a hashtable or a binary search tree.

Yes, all quite reasonable internal transformations for any identity-keyed data structure.

# Jason Orendorff (12 years ago)

On Tue, Feb 14, 2012 at 5:02 AM, David Bruant <bruant.d at gmail.com> wrote:

To be sure, this is assuming that iteration order is fixed for a given implementation. If order is not specified, then I don't see why that should be required either.

It is not required, but it's what experience tells us from the for-in loop.

All right, but let's not make the mistake of only learning from our own experience.

Many other language communities have very specific relevant experience. For example, there is a single dominant Java implementation and a single dominant Python implementation. Have the other implementations been forced to duplicate the dominant implementation's hash table iteration order, because existing code depends on it? Have the dominant implementations been forced to back out memory management changes or hash table optimizations that would affect iteration order?

I don't think that has ever happened. Python hash codes differ from version to version and from OS to OS. Jython has a completely different hashing function from CPython, even for strings. Keep in mind that hash tables are one of two core data structures in Python, so if code could sanely depend on iteration order, it would.

Of course code will depend on unspecified behavior when that unspecified behavior is actually intelligible and useful. Experience suggests hash table iteration order is neither.

Mark Miller is holding up the argument for determinism really well. I'm not sure anyone disagrees with his points. We should look into deterministic data structures and measure the performance.

# David Bruant (12 years ago)

On 14/02/2012 07:31, Mark S. Miller wrote:

[+tjclose]

There are many benefits to determinism. E started with non-deterministic iteration order, which opens a covert channel hazard. I initially changed to deterministic order merely to plug this leak. Having done so, I found it had many software engineering benefits. For example, it becomes much easier to write regression tests and to reproduce bugs by re-execution. In my implementation, it also had a minor additional space and time cost. Tyler's Waterken tables show that even the minor runtime costs I was paying were unnecessary.

Do you have a link to this implementation? If it's not obvious from the source code, can you give some insight on why Tyler's Waterken tables show that the minor costs were unnecessary?

Thanks,

# David Bruant (12 years ago)

On 14/02/2012 19:28, Jason Orendorff wrote:

On Tue, Feb 14, 2012 at 5:02 AM, David Bruant <bruant.d at gmail.com> wrote:

To be sure, this is assuming that iteration order is fixed for a given implementation. If order is not specified, then I don't see why that should be required either.

It is not required, but it's what experience tells us from the for-in loop.

All right, but let's not make the mistake of only learning from our own experience.

Many other language communities have very specific relevant experience. For example, there is a single dominant Java implementation and a single dominant Python implementation. Have the other implementations been forced to duplicate the dominant implementation's hash table iteration order, because existing code depends on it? Have the dominant implementations been forced to back out memory management changes or hash table optimizations that would affect iteration order?

I don't think that has ever happened. Python hash codes differ from version to version and from OS to OS. Jython has a completely different hashing function from CPython, even for strings. Keep in mind that hash tables are one of two core data structures in Python, so if code could sanely depend on iteration order, it would.

Of course code will depend on unspecified behavior when that unspecified behavior is actually intelligible and useful. Experience suggests hash table iteration order is neither.

Interesting. I have to admit that I'm almost entirely ignorant of Python, but I have some basic knowledge of Java, or at least know some heavy Java users, so I'll ask them if they know something on that topic, and specifically whether there was an impact on how they wrote code and transitioned from version to version.

It seems to me that web browser JavaScript has a particularity in deployment that no other language has. When a JS engine feature is deployed, people use it. Then, the engine may want to make some changes. If it does, the new engine must be able to run code written against the changed engine as well as old code. If it fails on the latter, that's what we call "breaking the web".

It seems to me that no other programming language has such a constraint. Maybe some things were made non-backward compatible in Java 7, but people have the choice to say "all right, I'll stay on Java 6 until I'm ready to move to Java 7". They have the time to study the changes and add new unit tests in their own code if they think things can break between 6 and 7. They can gradually change their code in some cases. This is not a choice web authors have. The consequence we've seen so far has been web authors relying on underspecified behaviors and the need to later specify a de facto behavior.

I understand the point about hashcodes, but what guarantees that all implementations will choose hashcode-based iteration forever? Maybe, one day, V8 will decide that it's better to iterate keys in reverse insertion order, because they'll have found some awesome algo/data structure. Maybe Node.js users will start relying on this order, forcing V8 to be stuck. Then, maybe some "best viewed on Chrome" websites (we all know they exist) will rely on this iteration order (because they won't bother testing on other browsers), forcing de facto standardization (to prevent breaking the web).

Maybe it won't happen, maybe it will. Is the risk worth taking?

If, in the end, there is a de facto standardization of iteration order, maybe one engine will have the good performance benefit that is praised here, but we'll have to see what the performance is in engines for which this order wasn't the first choice.

Once again, is the risk worth taking?

Mark Miller is holding up the argument for determinism really well.

And of course, determinism from the beginning would prevent this risk.

I'm not sure anyone disagrees with his points. We should look into deterministic data structures and measure the performance.

Indeed. I'm looking forward to having a look at Tyler Close's Waterken tables.

# Brendan Eich (12 years ago)

Jason Orendorff wrote:

I don't think that has ever happened. Python hash codes differ from version to version and from OS to OS. Jython has a completely different hashing function from CPython, even for strings. Keep in mind that hash tables are one of two core data structures in Python, so if code could sanely depend on iteration order, it would.

Is this comparable with JS? Interop on the web is a harsh mistress. The CPython vs. IronPython vs. PyPy etc. situation is more of a "porting model" with one-way forks.

Mark Miller is holding up the argument for determinism really well. I'm not sure anyone disagrees with his points. We should look into deterministic data structures and measure the performance.

Agreed.

# John Tamplin (12 years ago)

On Tue, Feb 14, 2012 at 10:13 PM, Brendan Eich <brendan at mozilla.org> wrote:

Is this comparable with JS? Interop on the web is a harsh mistress. The CPython vs. IronPython vs. PyPy etc. situation is more of a "porting model" with one-way forks.

Java seems comparable, in that compiled code is expected to run on different JVMs. I'm not aware of any Java code that relies on a particular iteration order where it is unspecified in the API.

# Axel Rauschmayer (12 years ago)

FWIW: I like that you can expressly opt in to ordered iteration in Java by using LinkedHashSet (a subclass of HashSet that implements the Set interface): docs.oracle.com/javase/6/docs/api/java/util/LinkedHashSet.html

# Gavin Barraclough (12 years ago)

Perhaps I am overly fatalistic here, but if we don't specify an iteration order I think the web will just go and specify it for us, as it has for object property iteration order. We can bet against history repeating itself if we wish.

On Feb 14, 2012, at 7:13 PM, Brendan Eich wrote:

Mark Miller is holding up the argument for determinism really well. I'm not sure anyone disagrees with his points. We should look into deterministic data structures and measure the performance.

Agreed.

*nod.

On Feb 13, 2012, at 6:03 PM, Mark S. Miller wrote:

Before getting too deep into iteration protocol for Sets (and Maps) there is a more fundamental issue: Will Set define a standard, implementation-independent ordering of elements? If so, what is the basis for the ordering?

Yes. Insertion order.

Mark,

Do you think we want strict insertion order, including for numeric index properties? I guess I'm thinking the obvious here (strawman:enumeration), but it would seem simplest for users if there were a single story for iteration order.
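For readers not following the strawman link, the contrast is roughly this: most engines of the era already enumerated integer-like keys of plain objects in ascending numeric order ahead of the remaining keys, rather than in strict insertion order. A quick, hedged illustration of the two candidate orderings for the same object:

```js
// Illustration only: the same object under the two candidate orderings.
const obj = {};
obj.b = 1;
obj[2] = 2;
obj.a = 3;
obj[1] = 4;

// Strict insertion order:                  'b', '2', 'a', '1'
// Ascending indices first, then insertion: '1', '2', 'b', 'a'
//
// The second is what most engines already did for for-in over plain objects,
// and presumably part of the "single story" being alluded to above.
```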

# Mark S. Miller (12 years ago)

On Tue, Feb 14, 2012 at 10:05 PM, Gavin Barraclough <barraclough at apple.com> wrote: [...]

Mark,

Do you think we want strict insertion order, including for numeric index properties? I guess I'm thinking the obvious here (strawman:enumeration), but it would seem simplest for users if there were a single story for iteration order.

Great question. I hadn't thought about that in this context, but I do find it attractive. Ironically, this might be less efficient than pure insertion order. But worrying about that too early without measurement or evidence would again be premature. It also accommodates Allen's specialized collection as simply a degenerate case. Cool.

# Allen Wirfs-Brock (12 years ago)

On Feb 14, 2012, at 10:41 PM, Mark S. Miller wrote:

On Tue, Feb 14, 2012 at 10:05 PM, Gavin Barraclough <barraclough at apple.com> wrote: [...]

Mark,

Do you think we want strict insertion order, including for numeric index properties? I guess I'm thinking the obvious here (strawman:enumeration), but it would seem simplest for users if there were a single story for iteration order.

Great question. I hadn't thought about that in this context, but I do find it attractive. Ironically, this might be less efficient than pure insertion order. But worrying about that too early without measurement or evidence would again be premature. It also accommodates Allen's specialized collection as simply a degenerate case. Cool.

I think we need to be clear what we are talking about here. The title of this thread is "Set iterators" and the ordering discussion started in the context of iterator order for the default Set iterator. The issue of iterating "numeric indexed properties" doesn't arise for the default Set iterator.

The numeric-indexed-properties question only arises when we are talking about the default iterator for plain vanilla objects, which iterates over properties.

In general, objects can provide their own default iterator, and defining the scope and order of iteration should be part of defining any object-based abstraction. Iteration needs to make sense in terms of the abstraction. The specific choices about iteration made for Sets, Maps, or plain vanilla objects shouldn't bias the iteration design of other abstractions.