Nailing object property order
On Wed, Apr 15, 2015 at 6:39 PM, <a.d.bergi at web.de> wrote:
Hello! Why does ES6 specify the order of keys in objects, maps and sets? Specifically section 9.1.12 [[OwnPropertyKeys]] says the result list must be "integer indices in ascending numeric, then strings in property creation order, then symbols in property creation order". Similarly, 23.1.3.5 Map.prototype.forEach and 23.2.3.6 Set.prototype.forEach use the "original insertion order" of keys for their callbacks, and also their respective @@iterators use the ordered "entries" lists.
What was the motivation to pin these down in ES6?
Because, for objects at least, all implementations used approximately the same order (matching the current spec), and lots of code was inadvertently written that depended on that ordering, and would break if you enumerated it in a different order. Since browsers have to implement this particular ordering to be web-compatible, it was specced as a requirement.
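For illustration (my own sketch, not part of the original message), the [[OwnPropertyKeys]] ordering can be observed directly:

```javascript
// Integer-like keys come first in ascending numeric order, then
// string keys in creation order (symbols would follow last).
const obj = {};
obj.b = 1;
obj[2] = 2;
obj.a = 3;
obj[0] = 4;

console.log(Object.getOwnPropertyNames(obj));
// → ["0", "2", "b", "a"]
```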
There was some discussion about breaking from this in Maps/Sets, but doing so would require us to specify an order that is impossible for code to depend on; in other words, we'd have to mandate that the ordering be random, not just unspecified. This was deemed too much effort, and creation order is reasonably valuable (see OrderedDict in Python, for example), so it was decided to have Maps and Sets match Objects.
For what it's worth, forcing an enumeration order does make polyfilling harder, assuming there's an engine out there that doesn't already use that ordering.
Tab Atkins Jr. schrieb:
On Wed, Apr 15, 2015 at 6:39 PM, <a.d.bergi at web.de> wrote:
What was the motivation to pin these down in ES6?
Because, for objects at least, all implementations used approximately the same order (matching the current spec), and lots of code was inadvertently written that depended on that ordering, and would break if you enumerated it in a different order. Since browsers have to implement this particular ordering to be web-compatible, it was specced as a requirement.
I see, that makes some sense at least. But why was the default object
[[enumerate]] algorithm not specced to match the [[OwnPropertyKeys]]
order then?
Most, if not all, of the code I've seen that unfortunately depends on this insertion-order ordering uses simple for-in loops (sometimes accompanied by an obj.hasOwnProperty check, very rarely an Object.prototype.hasOwnProperty.call check). None iterated over the Object.keys or even the Object.getOwnPropertyNames array.
Shouldn't we add a guarantee to [[enumerate]] that the subset of
enumerated own properties comes in insertion order as well? That would
still leave open to engines how they deal with inherited properties.
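To illustrate the distinction being discussed (my own example): for-in also walks inherited enumerable properties, while Object.keys lists only own ones, and the question is whether the own-property subset seen by for-in must come in insertion order:

```javascript
// for-in visits inherited enumerable properties too; Object.keys
// yields only own properties. In current engines the own keys come
// first, in insertion order, followed by inherited ones.
const proto = { inherited: 1 };
const obj = Object.create(proto);
obj.own1 = 2;
obj.own2 = 3;

const forInKeys = [];
for (const k in obj) forInKeys.push(k);

console.log(forInKeys);        // → ["own1", "own2", "inherited"]
console.log(Object.keys(obj)); // → ["own1", "own2"]
```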
Similarly, should we remove step 6 ("Order the elements of names so they are in the same relative order as would be produced by the Iterator that would be returned if the [[Enumerate]] internal method was invoked on O.") from EnumerableOwnNames (Object.keys)?
Bergi
I don't have pointers to other es-discuss threads where this was mentioned, but I do have a reason of my own for wanting this.
I started and help maintain Ramda [1], a functional programming
library for ES. It is a collection of utility functions similar in
scope to Underscore or lodash, although with a different underlying
philosophy and a somewhat different API design. One of the features
most requested is one that we've had requested over and over [2]: a foldObj function that works like a normal fold/reduce on a list, but works instead on the properties of an object.
There are a few tricky questions about such a function, including what to do with properties of the prototypes, but the only sticking point is one of iteration order. Because this is underspecified, and because we want our library to be dependent on specified behavior and not simply de facto implementation decisions, we have resisted this popular feature request. At the moment there is no way to know if, say, reduce(concat, '', keys({start: 0, end: 10})) will yield "startend" or "endstart". Granted, in most (all?) modern implementations it will be the former, but we can't count on it.
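For what it's worth, a minimal foldObj could be sketched as below (the name and signature are only illustrative, not Ramda's actual API); its result depends precisely on the key ordering in question:

```javascript
// Hypothetical foldObj: reduce over an object's own enumerable
// properties, visiting keys in the order Object.keys reports them.
// Whether that order is dependable is exactly the thread's question.
function foldObj(fn, acc, obj) {
  return Object.keys(obj).reduce(
    (a, key) => fn(a, obj[key], key),
    acc
  );
}

const total = foldObj((a, v) => a + v, 0, { start: 0, end: 10 });
console.log(total); // → 10
```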
This will not address a related issue I often discuss alongside this, which is that the lodash-style implementation has this somewhat surprising behavior:
var diff = (a, b) => b - a,
    obj1 = {a: 1, b: 2, c: 3},
    obj2 = {b: 2, a: 1, c: 3};

_.isEqual(obj1, obj2);  //=> true
_.reduce(obj1, diff);   //=> 2
_.reduce(obj2, diff);   //=> 4
But if we had a specified key-iteration order, I would be more than willing to live with and document such an oddity, if I could point to the specification to show exactly how the objects differ. Heck, if necessary, I could even write an isStrictlyEqual that goes beyond property equality and checks key order.
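Such a check might be sketched as follows (the name isStrictlyEqual and its shallow semantics are only illustrative):

```javascript
// Hypothetical isStrictlyEqual: same own enumerable keys, in the
// same order, with strictly equal values (shallow comparison only).
function isStrictlyEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length &&
         ka.every((k, i) => k === kb[i] && a[k] === b[k]);
}

console.log(isStrictlyEqual({a: 1, b: 2}, {a: 1, b: 2})); // → true
console.log(isStrictlyEqual({a: 1, b: 2}, {b: 2, a: 1})); // → false
```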
So, I wasn't part of the decision-making, but I do endorse this change.
Do you see good reasons not to do it, or are you just wondering if there actually were pros to this?
-- Scott
[1] ramda/ramda [2] ramda/ramda#257, ramda/ramda#364, ramda/ramda#546, ramda/ramda#625, ramda/ramda#656, and certainly others...
Scott Sauyet schrieb:
Do you see good reasons not to do it, or are you just wondering if there actually were pros to this?
Oh, I can see the use cases (you've given a good example, thanks).
Everyone needs ordered dictionaries/maps once in a while. Maybe even sortable ones, including insertAt or insertOrderedBy(comp) methods? I would have loved to get another new data structure (OrderedMap? SortableMap?) for this purpose. What I am opposing is that objects - originally meant as unordered records - are now "officially" abused for this. I can see the argument for speccing backwards-compatibility, but now the people who were originally writing bad (implementation-dependent) code are supported by the spec and can claim that the end users had to update their browsers to ES6 compatibility, instead of needing to fix their bugs.
I could even write an isStrictlyEqual that goes beyond property equality and checks key order.

That's exactly what I am fearing. If such usage gets out of hand, are we going to get Object.prototype.reorderKeys at some point? (Oh, wait, Object.reorderKeys of course.)
I think there should be a clear distinction between ordered and unordered dictionaries somewhere, with tooling supporting both versions, instead of stuffing everything into objects.
How does an algorithm know whether to use ==, ===, Object.is, Object.isSameOrdered, Object.equals, …?
Bergi
(sorry if this came off as a rant. I know it cannot/will not be changed any more in ES6. I'm just missing a clear strategy statement)
I'm very much opposed to locking this down for general objects, because it constrains the implementation choices for generic objects. What if an engine's backing implementation were, say, some variation of a trie? That cannot really be done today without adding extraneous data to the structure, because lookup in a trie happens character by character, not on the whole string at once, so properties that share a common prefix would always end up adjacent. Even if the keys weren't inserted into the trie in bit-pattern order, as most implementations do, they would still be grouped by common prefix.
Developer productivity > hypothetical minor performance gains.
+1 to all steps to make the specified behavior more deterministic, including this one.
Also, it's too late. Engines are converging, inter-operation pressure points in one direction only: greater convergence and standardization.
It's true engines did not at first converge on the ancient insertion order, and that caused some interop stress, but we are over that hump now. See code.google.com/p/v8/issues/detail?id=164&can=1&q=enumeration&colspec=ID Type Status Priority Owner Summary HW OS Area Stars (a long, and long-resolved, V8 issue).
Bergi's frustration is understandable. Leaving things unspecified for too long was a failure on our part in tending the spec, or a trade-off (we had other things to do ;-). All water under the bridge, but we're not stepping back to unspecified behavior. Because engines aren't, and because developers do not want it.
And agree with Mark: POITROAE (premature optimization is the root of all evil).
fwiw, tests for enumeration order now in compat table — kangax/compat-table/commit/a267f2233ce3b25dfbee876f4a4786ad85b31049
Results — kangax.github.io/compat-table/es6/#own_property_order
Thanks Juriy,
for writing a test for this. The problem in SpiderMonkey/Firefox is the line
Object.defineProperty(obj, '4', { value: true, enumerable: true });
which defines a non-writable/non-configurable element. We don't store those with "normal" elements and thus they fall into the insertion order case. We can probably fix this by actually reordering them when somebody calls Object.keys or such.
Btw, I think we have another bug: the spec defines how to treat any "integer index", but we only apply this order to non-sparse "array index" elements, i.e. those below 2^32-1.
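The behaviour under discussion can be reproduced with a snippet along these lines (my own example, modelled on the compat-table test):

```javascript
// Per [[OwnPropertyKeys]], '4' must sort among the integer indices
// in ascending numeric order, even though it was added last and via
// defineProperty; the remaining string keys keep creation order.
const obj = { 2: true, 0: true, 1: true, ' ': true, 9: true, D: true };
Object.defineProperty(obj, '4', { value: true, enumerable: true });

console.log(Object.keys(obj));
// → ["0", "1", "2", "4", "9", " ", "D"]
```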
I was the one who authored that commit, but thanks.
Incidentally, I left off checks for the following behaviours, which also use the OwnPropertyKeys ordering, on the basis that they're generally too obscure to manifest in usual programs (but then, what do I know?):
- Object.freeze/Object.seal/Object.isFrozen/Object.isSealed (when given a proxy, springs its "defineProperty" trap for each element returned by OwnPropertyKeys (unless "ownKeys" is also trapped and the handler provides its own order))
- Object.defineProperties (when given a proxy as arg 1, springs its "getOwnPropertyDescriptor" trap for each element returned by OwnPropertyKeys (unless "ownKeys" is also trapped and the handler provides its own order))
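A small illustration of the first point (my own sketch, not from the original message): freezing a proxy springs its defineProperty trap once per key, in [[OwnPropertyKeys]] order:

```javascript
// Object.freeze calls [[DefineOwnProperty]] for each key returned by
// [[OwnPropertyKeys]], so the proxy's "defineProperty" trap fires in
// that order: integer indices ascending, then strings in creation order.
const seen = [];
const target = { b: 1, 2: 2, a: 3 };
const p = new Proxy(target, {
  defineProperty(t, key, desc) {
    seen.push(key);
    return Reflect.defineProperty(t, key, desc);
  }
});

Object.freeze(p);
console.log(seen); // → ["2", "b", "a"]
```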
Presumably, Object.getOwnPropertyDescriptors in ES7 will also fire "defineProperty" in OwnPropertyKeys order. (It'd be kind of odd if it didn't, especially since such internal details are now exposed to user code.)
(I meant "getOwnPropertyDescriptor" instead of "defineProperty" in that last paragraph.)
Hello! Why does ES6 specify the order of keys in objects, maps and sets? Specifically section 9.1.12 [[OwnPropertyKeys]] says the result list must be "integer indices in ascending numeric, then strings in property creation order, then symbols in property creation order". Similarly, 23.1.3.5 Map.prototype.forEach and 23.2.3.6 Set.prototype.forEach use the "original insertion order" of keys for their callbacks, and also their respective @@iterators use the ordered "entries" lists.
What was the motivation to pin these down in ES6?
In ES5, objects were intrinsically unordered. ES3 was explicit that an "object is an unordered collection of properties", and the same is still true in JSON, for example, where objects are defined as an "unordered set of name/value pairs". ES5 only specified that for-in and Object.keys should use the same order ("if an implementation specifies one" at all). ES6 didn't even tighten this; it only describes it with a new [[Enumerate]] mechanism.
But would it be reasonable to expect that every implementation will use the same order in for-in loops and Object.keys as in Object.getOwnPropertyNames/Symbols property listings? I can't imagine how a different ordering would be helpful for an implementation.
To me, a fixed order sounds like an arbitrary restriction. There would always be a little overhead in remembering the order, and it would prevent optimisations that could treat {x:1, y:2} and {y:2, x:1} as having the same structure. And while ordered maps/sets are a good thing, they are not really sortable by one's own criteria. If I wanted to insert a value at a certain position, I'd need to first clear the map/set and then re-append all entries in the desired order. There will surely be people who want to use maps/sets like that, and I wonder whether it was deliberately made that complicated/inefficient to support this use case.
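The workaround described above can be sketched like this (mapInsertAt is a hypothetical helper, not a built-in):

```javascript
// Inserting an entry at a given position in a Map requires rebuilding
// it, because a Map's iteration order is fixed to insertion order.
function mapInsertAt(map, index, key, value) {
  const entries = [...map.entries()];
  entries.splice(index, 0, [key, value]);
  return new Map(entries);
}

let m = new Map([['a', 1], ['c', 3]]);
m = mapInsertAt(m, 1, 'b', 2);
console.log([...m.keys()]); // → ["a", "b", "c"]
```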
Does anyone share my concerns? The only thing I've found online was "deterministic enumeration" esdiscuss/2013-April/030204, pointers to other discussions are welcome.
Bergi