Hello,
I've been thinking about our IEEE-inherited positive and negative zero, the proposed ES6 Object.is, and the new collection semantics. I haven't followed the earlier discussions, so I'm basing my understanding purely on the Harmony wiki and on toying with the preliminary support in V8. Bear with me if I'm misunderstanding badly or raising a topic that has already been discussed at length.

From what I can tell, it has been decided that there needs to be a convenient way to distinguish between +0 and -0, and also that this new form of sameness (Object.is) should be used in the new collections, such as Map and Set. I don't mind having a convenient way of testing which zero I have, but I'm a bit concerned that the added expressiveness of being able to store the two different zeros in the same Set/Map (which seems like a rare use case) is hugely dwarfed by the issues it may cause in the common case.
var s = new Set();
s.add(-0);
s.add(0);
The set now contains two items. That may seem reasonable from the code above, since after all I explicitly added those two items. But consider the cases where the (integer) keys are calculated at runtime. With integers implemented as floating point, it's quite easy to end up with a negative zero in ways that wouldn't happen in a language with true integers. -100 * 0 is one example: in an IEEE 754 world (so in JS), that's -0. I'm guessing most JS programmers don't know that, for good reasons.
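A few more expressions that quietly produce -0 under IEEE 754 arithmetic (a small sketch; the 1/x trick is the classic pre-Object.is way to tell the zeros apart):

```javascript
var a = -100 * 0;         // -0: negative finite times zero
var b = -0 + -0;          // -0: sum of two negative zeros
var c = Math.round(-0.4); // -0: rounding a small negative number

console.log(Object.is(a, -0)); // true
console.log(1 / a);            // -Infinity (1 / +0 would be +Infinity)
```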
To further add to the confusion, consider that for i = 0 and j = -0: i >= j && i <= j && i === j is true, yet s.has(i) !== s.has(j). WAT?
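To spell that out: every comparison operator treats the two zeros as equal; only Object.is (the SameValue relation the proposed collections would key on) tells them apart. A minimal sketch:

```javascript
var i = 0, j = -0;

console.log(i >= j);          // true
console.log(i <= j);          // true
console.log(i === j);         // true: === treats the zeros as equal
console.log(Object.is(i, j)); // false: SameValue distinguishes them,
                              // which is why an Object.is-keyed Set
                              // would report s.has(i) !== s.has(j)
```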
There are plenty of ways to end up with negative zeroes unexpectedly. Here's an abs that I guess most programmers would think of as correct (assuming integer usage) but it isn't because unlike Math.abs we're not using fabs() and we didn't add the extra code to handle negative zero. That has pretty dramatic consequences considering Object.is and new data structures.
// abs(-0) is broken and we used to get away with it
function abs(x) { return x < 0 ? -x : x; }
var m = new Map();
[-1, -0, 0, 1].forEach(function(v) { m.set(abs(v), v); });
// Oops. m has three, not two, items
So how do other languages handle this? In Python, Ruby, and Lua, a dict/hash/table doesn't distinguish between +0 and -0 keys. The same goes for a std::map<double, std::string> in C++. A Java HashMap<Double, String> does distinguish them, because of Double.equals semantics, but even so a Java user with integer keys would rather use a HashMap<Integer, String>, in which there is only one zero. JS users don't have that choice; they will use Maps/Sets with integer keys far more often than with floating-point keys, yet I feel we're optimizing for a corner case of the latter.
I'm thinking that this is a pretty high tax to put on JS users. Is it worth it?
/Olov