Es-discuss - several decimal discussions
On Sat, Aug 23, 2008 at 1:49 AM, Mike Cowlishaw <MFC at uk.ibm.com> wrote:
Finally, I'd like to take a poll: Other than people working on decimal at IBM and people on the EcmaScript committee, is there anyone on this list who thinks that decimal adds significant value to EcmaScript? If so, please speak up. Thanks.
Decimal arithmetic is sufficiently important that it is already available in all the 'Really Important' languages except ES (including C, Java, Python, C#, COBOL, and many more). EcmaScript is the 'odd one out' here, and not having decimal support makes it terribly difficult to move commercial calculations to the browser for 'cloud computing' and the like.
Decimals in Java are implemented at the library level, as java.math.BigDecimal. There is no expectation that intrinsic math operators work on them. Is this approach valid for ES; if not, then why not?
Implementing decimals at the library level has the advantage that they can be deployed today, as functional (if slower) ES code, and optimized later on by a native implementation with no loss of compatibility. After all, it will be several years before the next ES version becomes reliably available on consumers' browsers. Does this manner of easing migration inform the approach being taken?
Conversely, if one is to add support for the intrinsic math operators on decimals, does the required work generalize easily to arithmetic on complex numbers and matrices? Will the addition of complex numbers and matrices require more difficult work about how they interoperate with existing number representations (including, at that point, decimal numbers)? How, if at all, does this inform the present discussion?
Ihab
On Sat, Aug 23, 2008 at 8:44 AM, <ihab.awad at gmail.com> wrote:
Decimals in Java are implemented at the library level, as java.math.BigDecimal. There is no expectation that intrinsic math operators work on them. Is this approach valid for ES; if not, then why not?
I should add that one approach for doing math formulae on Decimals that look like intrinsic arithmetic operations is to build, at the library level, a compiler for functions on decimals. So I could imagine something like --
var valueAddedTaxCalc = /* ... */; var shippingCostsCalc = /* ... */;
var totalCost = Decimal.compile( { vat: valueAddedTaxCalc, shipping: shippingCostsCalc }, "function (itemCost, zipCode) { return itemCost + vat(itemCost) + shipping(zipCode); }");
var basePrice = /* some Decimal number */; var total = totalCost(basePrice);
My hypothesis is that decimal calculations are sufficiently localized that this approach would work; and also, by enforcing strict isolation between Decimal objects and other number forms, (i.e., not providing implicit conversions), the necessary hygiene for (say) Sarbanes-Oxley compliance would be more verifiable by looking at the code.
Ihab
On Sat, Aug 23, 2008 at 11:44 AM, <ihab.awad at gmail.com> wrote:
On Sat, Aug 23, 2008 at 1:49 AM, Mike Cowlishaw <MFC at uk.ibm.com> wrote:
Finally, I'd like to take a poll: Other than people working on decimal at IBM and people on the EcmaScript committee, is there anyone on this list who thinks that decimal adds significant value to EcmaScript? If so, please speak up. Thanks.
Decimal arithmetic is sufficiently important that it is already available in all the 'Really Important' languages except ES (including C, Java, Python, C#, COBOL, and many more). EcmaScript is the 'odd one out' here, and not having decimal support makes it terribly difficult to move commercial calculations to the browser for 'cloud computing' and the like.
Decimals in Java are implemented at the library level, as java.math.BigDecimal. There is no expectation that intrinsic math operators work on them. Is this approach valid for ES; if not, then why not?
Decimal implemented as a library would be sufficient for a 3.1 release. The problem is that an interoperable definition of what the infix operators do is required for completeness. Taking no other action, the default behavior of the "+" operator given a Number and a library-provided Decimal would be to convert both to string representations and concatenate the results.
This was discussed at the last ECMA TC39 meeting in Oslo, and was found to be unusable and would create a backwards compatibility issue for Harmony. An assertion was made (reportedly by Waldemar and Brendan -- I wasn't present) that spec'ing the operators would not be all that difficult.
To be sure, I then proceeded to implement such functionality using the then current SpiderMonkey (i.e. pre TraceMonkey, meaning I'm facing a significant merge -- the joys of DVCS), and found that it wasn't all that difficult. Based on the results, I've updated the 3.1 spec, and again found it wasn't all that difficult -- precisely as Waldemar and Brendan thought.
At this point, I don't think the issue is infix operators. A few don't seem to like the IEEE standard (Waldemar in particular tends to use rather "colorful" language when referring to that spec), some have expressed vague "size" concerns, though at this point it seems to me that we should be able to express such concerns in measurable terms, and finally there are some usability concerns relating to mixed mode operations that we need to work through. More about this in a separate email.
Implementing decimals at the library level has the advantage that they can be deployed today, as functional (if slower) ES code, and optimized later on by a native implementation with no loss of compatibility. After all, it will be several years before the next ES version becomes reliably available on consumers' browsers. Does this manner of easing migration inform the approach being taken?
Conversely, if one is to add support for the intrinsic math operators on decimals, does the required work generalize easily to arithmetic on complex numbers and matrices? Will the addition of complex numbers and matrices require more difficult work about how they interoperate with existing number representations (including, at that point, decimal numbers)? How, if at all, does this inform the present discussion?
Judging by other programming languages, the next form for Number that is likely to be required is arbitrary precision integers. While we can never be sure until we do the work, I do believe that Decimal will pave a path for such additions, if there is a desire by the committee to address such requirements.
Ihab
-- Ihab A.B. Awad, Palo Alto, CA
- Sam Ruby
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release.
Would it not be sufficient forever? It seems like that's the strategy that Java, Python and C are taking, as far as the Really Important Languages go. I'd be more comfortable getting experience based on an available library implementation before standardizing it, let alone standardizing a new native numeric type, but I'm not likely to move the committee needle.
Are people with critical decimal correctness needs using things like code.google.com/p/gwt-math today? Other ports of BigDecimal? Other libraries that may be more "JS-like" than a straight port of Java?
To be sure, I then proceeded to implement such functionality using the then current SpiderMonkey (i.e. pre TraceMonkey, meaning I'm facing a significant merge -- the joys of DVCS),
I don't think you'll find it a hard merge; the tracer is (by design) pretty well isolated from the rest of the engine.
Mike
On Sat, Aug 23, 2008 at 4:49 AM, Mike Cowlishaw <MFC at uk.ibm.com> wrote:
So I think this comes down to 'what to do for Decimal.parse'? There are two obvious choices:
Sam's proposal for Decimal.parse is a perfectly good one: if the source is a binary floating point Number then use the existing binary64 -> string conversion in ES to get a (decimal) string, and then convert from that to decimal128 (which should always be exact).
This has the advantage that it is simple to define (given that binary64 -> string already has a definition) and to implement, and in the ES3.1 timeframe that may be important.
It has the disadvantage that different implementations can give different results for the conversion. This is 'ugly', but may not be important given that one is starting from a binary64.
The alternative is to do a correctly-rounded conversion. This is equally simple to define but a little harder to implement (in that it does not already exist in ES implementations). However, Java already has this in the BigDecimal class, and there are also now hardware implementations, so it's not a showstopper. Also, this could also provide or use a correctly-rounded (or exact) and sign-preserving binary64 -> string conversion.
The advantage of the second approach is that ES would have 754-compliant binary -> decimal conversions from day 1, and at the same time could provide the enhanced conversion for binary -> string.
The first approach would need the addition of a different method later (Decimal.correctparse?) to comply. But the first approach may be more in the spirit of ES.
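To make the two choices concrete, here is a minimal sketch of the first approach; Decimal.fromString is a hypothetical name for the exact string -> decimal128 step:
Decimal.parse = function (n) {
  // Reuse the implementation's existing binary64 -> string conversion;
  // different implementations may therefore give different results here.
  var s = String(n);
  // string -> decimal128 is always exact for strings of this form.
  return Decimal.fromString(s);
};
// The second approach would instead round the binary64 value directly to
// decimal128, giving the same correctly-rounded result in every implementation.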
There clearly is a requirement for a string to decimal128 conversion, if for no other reason than it would be common to want to create a decimal value based on a form field in a DOM. It is not uncommon for functions in ES which expect a string as input to perform a ToString conversion before proceeding.
A "correctly-rounded" conversion does seem handy to have.
Both could be provided. I'd suggest Decimal.parse and Number.toDecimal as names for these two, but I'm not hung up on the names.
Sleeping on it overnight: having access to such a correctly-rounded conversion can solve a number of usability concerns with the current approach I've been pursuing with respect to mixed mode decimals. It factors in Mike's use of the word "contagious", and quite possibly could eliminate the need for Doug's suggestion for "use decimal".
If decimal128 were "contagious" instead of binary64, and all implicit conversions were correctly and precisely rounded, then no failures or data loss would occur; instead, some excruciatingly precise answers may result. The fact that decimal operations will tend to retain such precision will contribute to the "contagiousness" of the result. The programmer could then choose where and when to apply the appropriate rounding -- or, instead, be motivated to track down an original constant which is in dire need of an 'm' suffix.
Such an approach has the nice characteristic that integers whose absolute value is less than 2**53 would convert exactly.
It is this last observation that would seem to me to dramatically reduce the need for Doug's "use decimal" suggestion.
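To illustrate the contagion, a sketch (the 'm' decimal literal suffix is assumed, and the digits shown are the exact binary64 value correctly rounded to decimal128's 34 significant digits):
// Integers whose absolute value is below 2**53 convert exactly:
//   2 + 0.1m    -> 2.1m
// A binary64 constant converts to its exact decimal value, which is where
// the "excruciatingly precise" answers come from:
//   0.1 + 0.1m  -> 0.2000000000000000055511151231257827m
// Rounding the result, or adding the 'm' suffix to the original 0.1
// constant, restores the expected 0.2m.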
- Sam Ruby
On Sat, Aug 23, 2008 at 1:12 PM, Mike Shaver <mike.shaver at gmail.com> wrote:
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release.
Would it not be sufficient forever? It seems like that's the strategy that Java, Python and C are taking, as far as the Really Important Languages go. I'd be more comfortable getting experience based on an available library implementation before standardizing it, let alone standardizing a new native numeric type, but I'm not likely to move the committee needle.
I don't believe we know the answer to that question. In any case, we need to decide what the following will produce:
Decimal.parse('2') + 3
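(For reference, the candidate outcomes discussed in this thread, sketched:)
// "23"      -- library-only default: ToPrimitive yields a string, so "+" concatenates
// 5m        -- if infix operators are specified and the Number converts to decimal
// TypeError -- if Decimal.prototype.valueOf is specified to throw (see below)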
Furthermore, it seems inevitable to me that this topic will come up at the next TC 39 meeting in Redmond. A minimal proposal, much along the lines of the one you described above, was available for review in time for the Oslo meeting, and resulted in a number of usability concerns being expressed, such as the one I described above.
Given this, the way I would like to proceed is towards a full and complete proposal to be ready in time for people to review for the Redmond meeting. It may very well need to be scaled back, but I would much rather be in a position where the edit requests that came out of that meeting were in the form of "prune this" rather than once again be presented with "investigate that and report back".
Are people with critical decimal correctness needs using things like code.google.com/p/gwt-math today? Other ports of BigDecimal? Other libraries that may be more "JS-like" than a straight port of Java?
To be sure, I then proceeded to implement such functionality using the then current SpiderMonkey (i.e. pre TraceMonkey, meaning I'm facing a significant merge -- the joys of DVCS),
I don't think you'll find it a hard merge; the tracer is (by design) pretty well isolated from the rest of the engine.
Cool. I'm mostly concerned with one source file (jsinterp.cpp), but I'm confident that I will be able to manage.
Mike
- Sam Ruby
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release. The problem is that an interoperable definition of what the infix operators do is required for completeness. Taking no other action, the default behavior of the "+" operator given a Number and a library-provided Decimal would be to convert both to string representations and concatenate the results.
In other words, the same as the "+" operator given a Number and a library provided Employee, Document, PopupWidget, ..., or any other user defined type.
This was discussed at the last ECMA TC39 meeting in Oslo, and was found to be unusable and would create a backwards compatibility issue for Harmony.
With respect, I wonder what these would be. It seems that the fail-fast behavior in this case is useful for (as I pointed out) stuff like helping establish Sarbanes-Oxley compliance. I understand that such requirements are why we need decimal arithmetic in the first place.
... spec'ing the operators would not be all that difficult. ... there are some usability concerns relating to mixed mode operations that we need to work through.
It is precisely these concerns that cause me to wonder if there's a foundational issue here. Are decimals indeed a different enough beast, and the pitfalls subtle enough (even if they can be specified completely), that mixed mode should simply not be supported (i.e., made to fail-fast in some backwards compatible way)?
And if mixed mode is not supported, does that mean the library approach is adequate?
Ihab
ihab.awad at gmail.com wrote:
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release. The problem is that an interoperable definition of what the infix operators do is required for completeness. Taking no other action, the default behavior of the "+" operator given a Number and a library-provided Decimal would be to convert both to string representations and concatenate the results.
In other words, the same as the "+" operator given a Number and a library provided Employee, Document, PopupWidget, ..., or any other user defined type.
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
This was discussed at the last ECMA TC39 meeting in Oslo, and was found to be unusable and would create a backwards compatibility issue for Harmony.
With respect, I wonder what these would be. It seems that the fail-fast behavior in this case is useful for (as I pointed out) stuff like helping establish Sarbanes-Oxley compliance. I understand that such requirements are why we need decimal arithmetic in the first place.
At most, Sarbanes-Oxley might be an example of one source for requirements.
... spec'ing the operators would not be all that difficult. ... there are some usability concerns relating to mixed mode operations that we need to work through.
It is precisely these concerns that cause me to wonder if there's a foundational issue here. Are decimals indeed a different enough beast, and the pitfalls subtle enough (even if they can be specified completely), that mixed mode should simply not be supported (i.e., made to fail-fast in some backwards compatible way)?
I don't believe the concerns are that decimals are a "same enough" or a "different enough beast", but rather that "fail fast" is not the type of behavior one would expect from ES for numeric operations. And to be clear, this is less of a concern with arithmetic operators than with relational operators. It would not be wise for equality operators, in particular, to start throwing exceptions.
And if mixed mode is not supported, does that mean the library approach is adequate?
Ihab
- Sam Ruby
Sam Ruby wrote:
ihab.awad at gmail.com wrote:
In other words, the same as the "+" operator given a Number and a library provided Employee, Document, PopupWidget, ..., or any other user defined type.
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
Would the following trick make mixed mode unnecessary?
Say your program uses Decimal, but imports several old libraries that know nothing about Decimal. Say you tell the compiler to compile these old libraries in an all-Decimal mode. You just force them into Decimal, ignoring the fact that they were written for binary.
If /everything/ becomes Decimal, the whole language, is it likely that the libraries will have a problem with this? Isn't it extremely likely that they will just work perfectly?
I've mentioned this before, but I'm not sure if I was clear and understandable. Forgive me if I'm repeating something that isn't useful. As a naive onlooker I think this /sounds/ simple.
On Aug 23, 2008, at 11:30 AM, Sam Ruby wrote:
ihab.awad at gmail.com wrote:
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release. The problem is that an interoperable definition of what the infix operators do is required for completeness. Taking no other action, the default behavior of the "+" operator given a Number and a library-provided Decimal would be to convert both to string representations and concatenate the results.
In other words, the same as the "+" operator given a Number and a library provided Employee, Document, PopupWidget, ..., or any other user defined type.
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
One possible way to achieve "fail fast" in a library only solution would be to specify that Decimal.prototype.valueOf always throws an exception. This would make an attempt to add or multiply using the operator throw, but would still allow decimal numbers to be printed, and would allow for explicit conversion methods.
- Maciej
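A minimal sketch of this valueOf-throws idea, assuming a library-level Decimal whose instances are ordinary objects:
Decimal.prototype.valueOf = function () {
  // Fail fast: any implicit conversion toward a primitive number throws.
  throw new TypeError("implicit conversion of Decimal");
};
// Decimal.parse("2") + 3    -> throws ([[DefaultValue]] with no hint tries valueOf first)
// "total: " + d             -> also throws, since "+" passes no hint
// "total: " + d.toString()  -> works; printing requires an explicit call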
On Sat, Aug 23, 2008 at 11:30 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
Point well taken. Does Maciej's followup regarding valueOf throwing solve the problem?
At most, Sarbanes-Oxley might be an example of one source for requirements.
It's even weaker than that -- my suspicion (admittedly in ignorance) is that SOX does not inform us directly regarding interoperation of numerics in ES or elsewhere. :) Yet as a programmer, I would feel much better if I know that my carefully controlled arithmetic automatically checks for the silly mistakes I could make (especially under ES's dynamic typing). Removing problematic implicit conversions and comparisons with native numerics could be one way to achieve that.
I don't believe the concerns are that decimals are a "same enough" or a "different enough beast", but rather that "fail fast" is not the type of behavior one would expect from ES for numeric operations.
Were ES designed, back in 1995, to be the language it has become, and were there time back then to have thought about how the builtin arithmetic and comparison generalizes to mixtures of decimals, arbitrary precision integers, complex numbers, matrices and polynomials, I would agree with you unequivocally. As matters stand, I'm not so sure.
It would not be wise for equality operators, in particular, to start throwing exceptions.
Could these operators simply be consistent with "===" for any user defined library type, and something else (a ".equals()" operator, say) be used to define value equality?
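(Sketched, with .equals() as a hypothetical library method:)
// Operators keep reference semantics, as for any other object:
//   a == b, a === b   -- true only if a and b are the same Decimal object
// Value equality is explicit:
//   a.equals(b)       -- hypothetical method defined by the Decimal library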
Once again, Java faced this same problem in adding BigDecimal -- and Java is strongly typed and so arguably had more leeway to add rules that make it clearer to programmers what was going on. Yet their solution was to simply create classes at the library level.
I think the Java analogue is instructive. In Java, primitives and objects are distinct. BigDecimal was apparently a clumsy enough entity that it was found to need implementation as a constructed object. In ES, everything is an object yet there are primitive types; this means that, when faced with a newly required constructed object type, we are tempted to slip it in as a new primitive. In the absence of extensive planning done back in the mid 1990s, I submit that the temptation is to be resisted.
Ihab
On Aug 23, 2008, at 8:44 AM, ihab.awad at gmail.com wrote:
On Sat, Aug 23, 2008 at 1:49 AM, Mike Cowlishaw <MFC at uk.ibm.com> wrote:
Finally, I'd like to take a poll: Other than people working on decimal at IBM and people on the EcmaScript committee, is there anyone on this list who thinks that decimal adds significant value to EcmaScript? If so, please speak up. Thanks.
Decimal arithmetic is sufficiently important that it is already available in all the 'Really Important' languages except ES (including C, Java, Python, C#, COBOL, and many more). EcmaScript is the 'odd one out' here, and not having decimal support makes it terribly difficult to move commercial calculations to the browser for 'cloud computing' and the like.
Decimals in Java are implemented at the library level, as java.math.BigDecimal. There is no expectation that intrinsic math operators work on them. Is this approach valid for ES; if not, then why not?
See jroller.com/cpurdy/entry/the_seven_habits_of_highly1 (especially the first comment, by Bob McWhirter); also see bugzilla.mozilla.org/show_bug.cgi?id=5856.
All linked from the still-useful
Implementing decimals at the library level has the advantage that they can be deployed today, as functional (if slower) ES code, and optimized later on by a native implementation with no loss of compatibility.
gwt-math.googlecode.com/svn/trunk/gwt-math/src/main/resources/com/googlecode/gwt/math/public/js/bigdecimal.js
;-)
After all, it will be several years before the next ES version becomes reliably available on consumers' browsers.
Several years or less than a year from final spec, depending on the
vendor's commitments to standards.
Does this manner of easing migration inform the approach being taken?
Not really. If you want GWT, use it. If you want a library, they are
out there. The problem is making a new type in ES3.1's spec work,
both now and in the future when operators are added (so we can avoid
the laughable but totally predictable outcome Bob McWhirter cited in
Purdy's blog).
Conversely, if one is to add support for the intrinsic math operators on decimals, does the required work generalize easily to arithmetic on complex numbers and matrices?
We might hope to know by looking at other languages, but really,
doing the work in the context of backward JS compatibility is
required. So again, doing what Sam is doing in developing patches
against open source JS implementations seems at least as valuable as
trying to spec Decimal quickly, making it "law", and then hoping it
works out in the future and somehow leads to a taller numeric tower.
Will the addition of complex numbers and matrices require more difficult work about how they interoperate with existing number representations (including, at that point, decimal numbers)? How, if at all, does this inform the present discussion?
You might find
interesting. Obviously not a live proposal, but if we ever want
operators, I'm still in favor of multimethods instead of hardcoding.
The current untestable spec can't stand much more complexity in its
runtime semantics. If we ever get complex numbers, quaternions, etc.,
they should come by users defining operator multimethods after the
committee has retired in victory to the same Valhalla where Dylan and
Cecil are spoken :-/.
On Aug 23, 2008, at 10:47 AM, Sam Ruby wrote:
On Sat, Aug 23, 2008 at 1:12 PM, Mike Shaver <mike.shaver at gmail.com> wrote:
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release.
Would it not be sufficient forever? It seems like that's the strategy that Java, Python and C are taking, as far as the Really Important Languages go. I'd be more comfortable getting experience based on an available library implementation before standardizing it, let alone standardizing a new native numeric type, but I'm not likely to move the committee needle.
I don't believe we know the answer to that question. In any case, we need to decide what the following will produce:
Decimal.parse('2') + 3
Furthermore, it seems inevitable to me that this topic will come up at the next TC 39 meeting in Redmond. A minimal proposal, much along the lines of the one you described above, was available for review in time for the Oslo meeting, and resulted in a number of usability concerns being expressed, such as the one I described above.
Especially since there was no effort (correct me if I'm wrong) to
throw from ToPrimitive on the Decimal type. Either Decimal (an object
type) converts to number, which loses precision and expressiveness,
or it converts to string, which means + is forever after string
concatenation given Decimal operands. That seems future-hostile, some
of us said at the Oslo TC39 meeting.
IOW if 3.1 were only to have a new Decimal standard library object
type (like Date), it would have to be unusable except by method
calls. It couldn't convert implicitly, even to string. IIRC I called
that "unusably crufty" in Oslo; I hope I was quoted accurately :-P.
Given this, the way I would like to proceed is towards a full and complete proposal to be ready in time for people to review for the Redmond meeting. It may very well need to be scaled back, but I would much rather be in a position where the edit requests that came out of that meeting were in the form of "prune this" rather than once again be presented with "investigate that and report back".
How would a response of "take the time you need, and we'll track it
for Harmony; 3.1 is otherwise all but done" strike you?
On Aug 23, 2008, at 1:23 PM, Maciej Stachowiak wrote:
One possible way to achieve "fail fast" in a library only solution would be to specify that Decimal.prototype.valueOf always throws an exception. This would make an attempt to add or multiply using the operator throw, but would still allow decimal numbers to be printed, and would allow for explicit conversion methods.
We talked about that, and it seemed necessary to future-proof, but it
was not championed by anyone, and Adam Peller (of IBM, filling in for
Sam) was not able to say "yea" or "nay".
Throwing from valueOf (or a decimal [[DefaultValue]] override,
equivalently) still looks unusably crufty: relationals and equality
operators throw given a Decimal operand. Evaluation rules that pass a
"number" hint could be made to work, but no-hint cases like + and the
comparisons suffer.
I'd rather see operators specified, but without any rushing for 3.1,
or slipping 3.1 just for Decimal-with-operators.
On Sat, Aug 23, 2008 at 10:48 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 23, 2008, at 10:47 AM, Sam Ruby wrote:
On Sat, Aug 23, 2008 at 1:12 PM, Mike Shaver <mike.shaver at gmail.com> wrote:
On Sat, Aug 23, 2008 at 10:04 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Decimal implemented as a library would be sufficient for a 3.1 release.
Would it not be sufficient forever? It seems like that's the strategy that Java, Python and C are taking, as far as the Really Important Languages go. I'd be more comfortable getting experience based on an available library implementation before standardizing it, let alone standardizing a new native numeric type, but I'm not likely to move the committee needle.
I don't believe we know the answer to that question. In any case, we need to decide what the following will produce:
Decimal.parse('2') + 3
Furthermore, it seems inevitable to me that this topic will come up at the next TC 39 meeting in Redmond. A minimal proposal, much along the lines of the one you described above, was available for review in time for the Oslo meeting, and resulted in a number of usability concerns being expressed, such as the one I described above.
Especially since there was no effort (correct me if I'm wrong) to throw from ToPrimitive on the Decimal type. Either Decimal (an object type) converts to number, which loses precision and expressiveness, or it converts to string, which means + is forever after string concatenation given Decimal operands. That seems future-hostile, some of us said at the Oslo TC39 meeting.
IOW if 3.1 were only to have a new Decimal standard library object type (like Date), it would have to be unusable except by method calls. It couldn't convert implicitly, even to string. IIRC I called that "unusably crufty" in Oslo; I hope I was quoted accurately :-P.
The remainder of the quote that was relayed to me was that spec'ing the operators would not be hard. I took it upon myself to refamiliarize myself with SpiderMonkey and implement these operators, and despite it being a decade or so since I've done any serious C/C++ coding, I was able to complete that task in a long weekend.
I've since started updating the spec, and the changes are very straightforward and contained. You can see for yourself, most of the changes are already in there.
Given this, the way I would like to proceed is towards a full and complete proposal to be ready in time for people to review for the Redmond meeting. It may very well need to be scaled back, but I would much rather be in a position where the edit requests that came out of that meeting were in the form of "prune this" rather than once again be presented with "investigate that and report back".
How would a response of "take the time you need, and we'll track it for Harmony; 3.1 is otherwise all but done" strike you?
Frankly, it sounds like moving the goal posts.
When is spec freeze? When do we need to have implementations complete? Are we planning to have these implementations usability and field tested? Will adequate time be allowed to factor in feedback that is received from these tests?
I'm writing the spec. I've written one implementation -- or more precisely a binding to an implementation that has been well tested and made available under a very liberal license. I've run the performance tests you asked me to run. I'm ready and willing to convert the existing decNumber test suite over to whatever format is desired; I'm also ready and willing to write additional tests to cover the ES specific binding to these functions.
The meanings of operators when all of the operands are decimal are well specified by IEEE 754R. The meaning of equality in ES3 is a bit arcane, and we've worked through that. My SpiderMonkey implementation provides access to decNumber's compareTotal method, and Object.eq (something Doug and Mark feel is necessary for other reasons) can be defined in such a way as to make use of that function.
What's left is figuring out the syntax for constructing decimals from strings (Decimal.parse sounds fine to me, how about you?), and from binary64 floating point numbers (Number.toDecimal sounds fine to me, how about you?). And to decide whether binary64 numbers get converted to decimal128 when they are mixed, or if it is the other way around. I was originally pursuing the latter, but based on feedback, I'm seriously considering the former -- with the proviso that the conversions be exact and correctly rounded, to the limits of the precision supported by decimal128. This approach not only has the advantage of being exact and correct, it also obviates the need for "use decimal".
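(The surface Sam is converging on, sketched; all names are still tentative:)
// Decimal.parse("3.14")     -- string -> decimal128, always exact
// Number.toDecimal(0.1)     -- binary64 -> decimal128, correctly rounded
// 1 + Decimal.parse("0.10") -- mixed mode: under the "former" option the
//                              Number converts to decimal128, yielding 1.10m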
- Sam Ruby
On Aug 23, 2008, at 5:45 PM, ihab.awad at gmail.com wrote:
On Sat, Aug 23, 2008 at 11:30 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
Point well taken. Does Maciej's followup regarding valueOf throwing solve the problem?
It forces would-be Decimal users to call Decimal.add(Decimal.parse(s) (let's say s may or may not be 2 or "2"). Is that a solution for ES3.1? It's user-hostile instead of future-hostile.
Can we not do better, especially by not overconstraining the problem by subjecting its solution to the ES3.1 goals and schedule?
Were ES designed, back in 1995, to be the language it has become, and were there time back then to have thought about how the builtin arithmetic and comparison generalizes to mixtures of decimals, arbitrary precision integers, complex numbers, matrices and polynomials, I would agree with you unequivocally. As matters stand, I'm not so sure.
I've been through as many what-if's and might-have-been's as anyone.
JS clearly had too much implicit magic -- Perl-envy, if you will --
to be as neatly extensible as we want, in hindsight.
EIBTI, the Pythonistas say. But this need not mean explicit method
calls instead of operators for any numeric extension, since that's
pretty much unusable (except under SOX mandate :-/).
We've often cited EIBTI in ES4 working group meetings. In general I
daresay the TC39 committee is more in favor of avoiding implicit
magic, especially conversions, now than ever (E4X, ECMA-357, is full
of it, and it's a mess). But this doesn't mean that throwing when a
Decimal is used in any not-guaranteed-to-be-numeric operand context
is good.
It would not be wise for equality operators, in particular, to start throwing exceptions.
Could these operators simply be consistent with "===" for any user defined library type, and something else (a ".equals()" operator, say) be used to define value equality?
That's incompatible, and the double-dispatched (for dyadic operators)
method mapping is something we'd like to avoid in the future
(wherefore multimethods).
Once again, Java faced this same problem in adding BigDecimal -- and Java is strongly typed and so arguably had more leeway to add rules that make it clearer to programmers what was going on. Yet their solution was to simply create classes at the library level.
BigMistake, as Cameron blogged. Java failed, C# ruled here.
I think the Java analogue is instructive. In Java, primitives and objects are distinct.
That's a botch, one I copied in a hurry in 1995 and regret. I should
have stuck to the Self and Smalltalk sources of inspiration, but JS
had to "look like Java" and it was easier both to implement, and to
think about JS/Java bridging (LiveConnect, so-called), if I
distinguished primitive string, boolean, and number from their object
wrapper types.
BigDecimal was apparently a clumsy enough entity that it was found to need implementation as a constructed object.
This "apparently" backward-reasoning phenomenology is flawed.
Implementors can work harder so programmers don't have to. There's no
technical reason that decimal has to be a non-primitive in any
language that distinguishes primitive types from first-class object
types, any more than there is for strings to be non-primitive.
In ES, everything is an object yet there are primitive types; this means that, when faced with a newly required constructed object type, we are tempted to slip it in as a new primitive. In the absence of extensive planning done back in the mid 1990s, I submit that the temptation is to be resisted.
What extensive planning do you mean? Oh, the lack of it when I
created JS.
Sorry, this is bogus arguing too :-|. Five-year plans are no
guarantee of success. Utopia is not an option. Lots of underused
languages have over-planned designs and nevertheless have big
problems (other than lack of users).
Anyway, languages that do succeed in gaining users inevitably grow.
Growing a language is hard but possible. Python grew, including in
its numeric model and operators. In spite of harsher backward
compatibility constraints, so too can JS.
IEEE double is not the only conceivable number type, and indeed 32-
bit int and unsigned int show through in places in JS (bitwise
operators, array length). I'm not in favor of more machine types. But
I think making more BigMistakes with methods only, no operators, is
guaranteed to keep IEEE double the only number type that's actually
used by most real-world JS programmers.
On Aug 23, 2008, at 8:23 PM, Sam Ruby wrote:
The remainder of the quote that was relayed to me was that spec'ing the operators would not be hard. I took it upon myself to refamiliarize myself with SpiderMonkey and implement these operators, and despite it being a decade or so since I've done any serious C/C++ coding, I was able to complete that task in a long weekend.
You know that I'm glad you're doing this. Doesn't mean it's going to
make 3.1, or that it must.
I've since started updating the spec, and the changes are very straightforward and contained. You can see for yourself, most of the changes are already in there.
I will look when time permits, before the next TC39 meeting.
Given this, the way I would like to proceed is towards a full and complete proposal to be ready in time for people to review for the Redmond meeting. It may very well need to be scaled back, but I
would much rather be in a position where the edit requests that came
out of that meeting were in the form of "prune this" rather than once again be presented with "investigate that and report back".How would a response of "take the time you need, and we'll track it for Harmony; 3.1 is otherwise all but done" strike you?
Frankly, it sounds like moving the goal posts.
You think so? I respectfully disagree, and I've been in all the face-
to-face meetings. ES3.1 started in earnest this past January, with a
limited scope. Decimal (already in ES4 as proposed, with operators as
well as literal syntax) was not being pushed into ES3.1 until spring-
time, at which time 3.1 was supposed to be technically "done" in
July, finalized including editorial fussing and typography in
September, etc., for standardization at the December GA meeting.
Seems to me the push for Decimal moved some posts. Why is Decimal-or-
a-No-vote being hung like a sword over the committee's head? What
goes wrong if Decimal is not in 3.1, but is (especially with user-
feedback-based improvements) in the successor Harmony edition within
1-2 years?
When is spec freeze? When do we need to have implementations complete? Are we planning to have these implementations usability and field tested? Will adequate time be allowed to factor in feedback that is received from these tests?
Let's get the implementations done and we'll see. There's no
deterministic top-down command-and-control schedule here. For the
open source implementations, the main thing is to get into nightly
builds, get docs and blogs educating users, gather bug reports.
I'm writing the spec. I've written one implementation -- or more precisely a binding to an implementation that has been well tested and made available under a very liberal license. I've run the performance tests you asked me to run. I'm ready and willing to convert the existing decNumber test suite over to whatever format is desired; I'm also ready and willing to write additional tests to cover the ES specific binding to these functions.
We should talk about landing your patch on Mozilla elsewhere. IIRC
Rob Sayre asked for a few things in the bug and I haven't seen your
response (I may have missed it; been busy lately).
The meanings of operators when all of the operands are decimal are well specified by IEEE 754R. The meaning of equality in ES3 is a bit arcane, and we've worked through that. My SpiderMonkey implementation provides access to decNumber's compareTotal method, and Object.eq (something Doug and Mark feel is necessary for other reasons) can be defined in such a way as to make use of that function.
What's the Object.eq issue? I must have missed an 8am 3.1 call. :-/
What's left is figuring out the syntax for constructing decimals from strings (Decimal.parse sounds fine to me, how about you?), and from binary64 floating point numbers (Number.toDecimal sounds fine to me, how about you?).
I like those names, FWIW.
And to decide whether binary64 numbers get converted to decimal128 when they are mixed, or if it is the other way around.
That is important to get right. Igor Bukanov and I both sounded off
on this list as against any mixing of decimal128 and binary64 that
degraded toward binary64. So I'm pleased as punch with your new
thinking.
I was originally pursuing the latter, but based on feedback, I'm seriously considering the former -- with the proviso that the conversions be exact and correctly rounded, to the limits of the precision supported by decimal128. This approach not only has the advantage of being exact and correct, it also obviates the need for "use decimal".
Maybe, except writing 123m all over is enough of a drag that most
people won't. There still could be a case for 'use decimal'.
How's the new approach (contagious decimal128) working out in the code?
On Aug 23, 2008, at 11:14 PM, Brendan Eich wrote:
On Aug 23, 2008, at 5:45 PM, ihab.awad at gmail.com wrote:
Point well taken. Does Maciej's followup regarding valueOf throwing solve the problem?
It forces would-be Decimal users to call Decimal.add(Decimal.parse(s)
Er, Decimal.add(Decimal.parse(s), 3) of course -- BigMistake bites
again!
The + in my mistyped version above had better throw, if we must take
this low-usability road just to jam Decimal into ES3.1 in a hurry.
Better to do operators, and if it doesn't fit in the new schedule
(finish by next spring's Ecma GA meeting), move it to Harmony.
On Aug 23, 2008, at 11:14 PM, Brendan Eich wrote:
On Aug 23, 2008, at 5:45 PM, ihab.awad at gmail.com wrote:
On Sat, Aug 23, 2008 at 11:30 AM, Sam Ruby <rubys at intertwingly.net> wrote:
Having Decimal.parse(2) + 3 produce "23" is not what I would call "fail fast", as that is a term I would typically use for throwing an exception or the like.
Point well taken. Does Maciej's followup regarding valueOf throwing solve the problem?
It forces would-be Decimal users to call Decimal.add(Decimal.parse(s) (let's say s may or may not be 2 or "2"). Is that a solution for ES3.1? It's user-hostile instead of future-hostile.
Decimal.add is also effectively what other languages provide, and it
doesn't seem critical to do better for ES3.1, as long as a better
future solution is not blocked. The user-hostile approach seems
adequate in Java for the specialists who particularly care. Throwing
now would allow proper semantics for the operators to be defined later.
Can we not do better, especially by not overconstraining the problem by subjecting its solution to the ES3.1 goals and schedule?
I suspect we can, which is why I'd rather not have decimal in ES3.1 at
all, or if present as library-only.
Were ES designed, back in 1995, to be the language it has become, and were there time back then to have thought about how the builtin arithmetic and comparison generalizes to mixtures of decimals, arbitrary precision integers, complex numbers, matrices and polynomials, I would agree with you unequivocally. As matters stand, I'm not so sure.
I've been through as many what-if's and might-have-been's as anyone. JS clearly had too much implicit magic -- Perl-envy, if you will -- to be as neatly extensible as we want, in hindsight.
EIBTI, the Pythonistas say. But this need not mean explicit method calls instead of operators for any numeric extension, since that's pretty much unusable (except under SOX mandate :-/).
We've often cited EIBTI in ES4 working group meetings. In general I daresay the TC39 committee is more in favor of avoiding implicit magic, especially conversions, now than ever (E4X, ECMA-357, is full of it, and it's a mess). But this doesn't mean that throwing when a Decimal is used in any not-guaranteed-to-be-numeric operand context is good.
Actually, throwing from valueOf would have the opposite effect. It
would throw in numeric (or possibly-numeric) contexts but not in
string contexts, where conversion to string would be performed (since
toString would not throw). This would avoid any accidental mixing of
decimal and binary floating point numbers. The main sadness would be
due to the fact that string concatenation and numeric addition are the
same operator.
- Maciej
On Aug 24, 2008, at 12:51 AM, Maciej Stachowiak wrote:
But this doesn't mean that throwing when a Decimal is used in any not-guaranteed-to-be-numeric operand context is good.
Actually, throwing from valueOf would have the opposite effect.
Of course, and my "not-" was a thinko. I meant "guaranteed-to-be-
numeric". Having memorized a lot of ES3, including where the
preferredType hint is not number, I was not trying to make the
opposite point within the same mail message.
We could throw from valueOf, but at least you and I agree it would be
better to avoid any such a seemingly-rushed, library-only, user-
hostile Decimal. Waldemar and I said so at the Oslo meeting, and Sam
agrees.
I am interested to see what Sam can pull off, in code more than spec.
I don't doubt the spec can be done, given design decisions from IEEE
P754 and some freedom to match JS design impedance. I'm concerned we
won't have time to give such a purportedly user-friendly Decimal
sufficient user testing.
Good morning everyone! I like the posts from Brendan which occurred while I was sleeping.
On Sun, Aug 24, 2008 at 4:12 AM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 12:51 AM, Maciej Stachowiak wrote:
But this doesn't mean that throwing when a Decimal is used in any not-guaranteed-to-be-numeric operand context is good.
Actually, throwing from valueOf would have the opposite effect.
Of course, and my "not-" was a thinko. I meant "guaranteed-to-be- numeric". Having memorized a lot of ES3, including where the preferredType hint is not number, I was not trying to make the opposite point within the same mail message.
We could throw from valueOf, but at least you and I agree it would be better to avoid any such a seemingly-rushed, library-only, user- hostile Decimal. Waldemar and I said so at the Oslo meeting, and Sam agrees.
Minor correction: Waldemar and you said so at the Oslo meeting, and Sam is quite willing to explore that direction. We agree that rushed and user-hostile is in no-one's interest. While others may disagree on some specifics, I sense that Brendan and I agree more than disagree on what is, and what is not, user hostile.
I am going to continue to do my level best to get un-rushed, user-friendly Decimal support into 3.1. If I succeed, great. If I fail, then hopefully there are some specific issues that we can all agree upon which need to be worked.
I am interested to see what Sam can pull off, in code more than spec. I don't doubt the spec can be done, given design decisions from IEEE P754 and some freedom to match JS design impedance. I'm concerned we won't have time to give such a purportedly user-friendly Decimal sufficient user testing.
Such a concern is understandable. I fault no-one that my patch hasn't landed. Rob's comments, while valuable, aren't as much a concern to me as the fact that no-one has yet reviewed my code. A modest request: if the code could be reviewed and feedback provided to me with sufficient time for me to address the comments and the code to be integrated into a nightly build, by default disabled, in time for the Redmond meeting, I would appreciate it.
I see you (Brendan) asked a question about Object.eq. Instead of trying to lead you to the conclusion that those of us on the bi-weekly "3.1" calls have provisionally made, let me try to get your input on the topic.
Let's posit for the moment that adding two nickels and calling toString() on the result produces "1.10", with the trailing zero. I realize that some may disagree, but for the moment, assume that to be true. Furthermore assume that 1.10m > 1.09m and that 1.10m < 1.11m.
Given the above, as well as your knowledge of ES history, what would you expect the results of evaluating the following two expressions to be:
1.10m == 1.1m
1.10m === 1.1m
Depending on your answers, there may be a followup question.
- Sam Ruby
On Aug 24, 2008, at 6:18 AM, Sam Ruby wrote:
A modest request: if the code could be reviewed and feedback provided to me with sufficient time for me to address the comments and the code to be integrated into a nightly build, by default disabled, in time for the Redmond meeting, I would appreciate it.
Set the review flag to my bugzilla id -- otherwise I won't see it in
my request queue, and I'll probably forget to go review it on my own.
Thanks.
Let's posit for the moment that adding two nickels and calling toString() on the result produces "1.10"
Where'd the buck come from? ;-) Ok, I'm with you so far...
Furthermore assume that 1.10m > 1.09m and that 1.10m < 1.11m.
Given the above, as well as your knowledge of ES history, what would you expect the results of evaluating the following two expressions to be:
1.10m == 1.1m
true.
1.10m === 1.1m
true, but there's a weak case for false if you really insist that the
significance is part of the value. If we make the result be false,
then people who advocate always using === instead of == to avoid
implicit conversions would be giving bad advice to Decimal users. I'm
sticking with true for the reason given below.
Depending on your answers, there may be a followup question.
Guy Steele, during ES1 standardization, pointed out that some Lisps
have five equality-like operators. This helped us swallow === and !==
(and keep the == operator, which is not an equivalence relation).
Must we go to this well again, and with Object.eq (not an operator),
all just to distinguish the significance carried along for toString
purposes? Would it not be enough to let those who care force a string
comparison?
On Sun, Aug 24, 2008 at 11:13 AM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 6:18 AM, Sam Ruby wrote:
A modest request: if the code could be reviewed and feedback provided to me with sufficient time for me to address the comments and the code to be integrated into a nightly build, by default disabled, in time for the Redmond meeting, I would appreciate it.
Set the review flag to my bugzilla id -- otherwise I won't see it in my request queue, and I'll probably forget to go review it on my own. Thanks.
I'll see if I can rev it first based on the changes we have been discussing and to resync with the latest hg TIP. Once that is done, I'll set the review flag. I was unclear on the process - thanks for the tip.
Let's posit for the moment that adding two nickels and calling toString() on the result produces "1.10"
Where'd the buck come from? ;-) Ok, I'm with you so far...
Oopsie! :-)
Furthermore assume that 1.10m > 1.09m and that 1.10m < 1.11m.
Given the above, as well as your knowledge of ES history, what would you expect the results of evaluating the following two expressions to be:
1.10m == 1.1m
true.
1.10m === 1.1m
true, but there's a weak case for false if you really insist that the significance is part of the value. If we make the result be false, then people who advocate always using === instead of == to avoid implicit conversions would be giving bad advice to Decimal users. I'm sticking with true for the reason given below.
That's the conclusion we came to. An argument could be made for either answer, but the stronger case, based on ES consistency, is for true.
You correctly anticipated the followup question, and I address that below. But your mentioning of conversions triggers a different line of questions.
What should the result of the following expressions be:
1.5m == 1.5
1.5m === 1.5
typeof(1.5m)
A case could be made that binary64 and decimal128 are both numbers, and that a conversion is required. A case could be made that while they are both numbers, the spec needs to retain the behavior that users have come to expect: that conversions are not performed for strict equality. And finally, a case could be made that the result of typeof(1.5m) should be something different, probably "decimal", and that decimal128 values are never strictly equal to binary64 values, even when we are talking about simple integers.
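(The three cases, sketched as outcomes:)
// (a) both are numbers; convert for both operators:
//       1.5m == 1.5 -> true     1.5m === 1.5 -> true
// (b) convert for == only; === stays conversion-free:
//       1.5m == 1.5 -> true     1.5m === 1.5 -> false
// (c) typeof 1.5m is "decimal"; decimals are never strictly equal to binary64:
//       1.5m == 1.5 -> true     1.5m === 1.5 -> false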
Whatever the conclusion, neither the spec updates nor the code updates would be difficult.
Depending on your answers, there may be a followup question.
Guy Steele, during ES1 standardization, pointed out that some Lisps have five equality-like operators. This helped us swallow === and !== (and keep the == operator, which is not an equivalence relation).
Must we go to this well again, and with Object.eq (not an operator), all just to distinguish the significance carried along for toString purposes? Would it not be enough to let those who care force a string comparison?
I think that a "static" Decimal method (decNumber calls this compareTotal) would suffice. Others believe this needs to be generalized. I don't feel strongly on this, and am willing to go with the consensus.
- Sam Ruby
On Aug 24, 2008, at 1:12 AM, Brendan Eich wrote:
On Aug 24, 2008, at 12:51 AM, Maciej Stachowiak wrote:
But this doesn't mean that throwing when a Decimal is used in any not-guaranteed-to-be-numeric operand context is good.
Actually, throwing from valueOf would have the opposite effect.
Of course, and my "not-" was a thinko. I meant "guaranteed-to-be- numeric".
On reflection, I should have written "not-guaranteed-to-be-string"
context. The preferredType hint to [[DefaultValue]] is either absent,
number, or string. It seems to me we have to throw in the absent and
number cases to be future proof.
Throwing in these cases bites not only +, but == and !=. Old and
heretofore type-agnostic code that uses something like
if (typeof v == "object" && v == memoized_object) { /* handle the memoized_object special case */ }
to guard for a particular object reference would start throwing if
passed a decimal. One could say that such code ought to have used
===, but tell it to the long-gone consultant who wrote it.
Mandatory explicit d.toString() calling instead of "..." + d, e.g.,
is awkward, but people could learn if they had to, and curse our
names. Better if they didn't have to work around future-proofing (or
curse our names).
Maybe this isn't too big a problem in practice, but we don't know. If
we can get operators up and running, we won't have to find out the
hard way. So I'd still rather not go there (you agree, sorry for
belaboring; I'm still preaching past your choir to the heathen who
would put only the BigMistake-or-throw Decimal API into ES3.1 :-/).
On Aug 24, 2008, at 9:34 AM, Sam Ruby wrote:
What should the result of the following expressions be:
1.5m == 1.5
1.5m === 1.5
typeof(1.5m)
A case could be made that binary64 and decimal128 are both numbers, and that a conversion is required.
For ==, yes. For ===, never!
A case could be made that while they are both numbers, the spec needs to retain the behavior that users have come to expect that conversions are not performed for strict equality.
We must not make strict equality intransitive or potentially side-
effecting. Please tell me the case should be made, not just could be
made.
And finally, a case could be made that the result of typeof(1.5m) should be something different, probably "decimal", and that decimal128 values are never strictly equal to binary64 values, even when we are talking about simple integers.
I'm sympathetic to the "decimal128 values are never strictly equal to
binary64 values" part of this case, but that's largely because I am
not an advocate of === over == simply because == is not an
equivalence relation. == is useful, in spite of its dirtiness.
Advocates of === for all use-cases, even those written by casual or
naive JS programmers, are just setting those programmers up for
confusion when === is too strict for the use-case at hand -- and
they're setting up the TC39 committee to add three more equality-like
operators so we catch up to Common Lisp :-(.
The place to hold this DWIM fuzzy line is at ==. Do not degrade ==='s
strictness.
The typeof question should be separated. You could have typeof return
"number" for a double or a decimal, but still keep === strict. I
believe that would be strictly (heh) more likely to break existing
code than changing typeof d to return "object" for Decimal d.
I don't see why we would add a "decimal" result for typeof in the
absence of a primitive decimal type. That too could break code that
tries to handle all cases, where the "object" case would do fine with
a Decimal instance. IIRC no one is proposing primitive decimal for
ES3.1. All we have (please correct me if I'm wrong) is the capital-D
Decimal object wrapper.
Guy Steele, during ES1 standardization, pointed out that some Lisps have five equality-like operators. This helped us swallow === and !== (and keep the == operator, which is not an equivalence relation).
Must we go to this well again, and with Object.eq (not an operator), all just to distinguish the significance carried along for toString purposes? Would it not be enough to let those who care force a string comparison?
I think that a "static" Decimal method (decNumber calls this compareTotal) would suffice. Others believe this needs to be generalized. I don't feel strongly on this, and am willing to go with the consensus.
Premature generalization without implementation and user experience
is unwarranted. What would Object.eq(NaN, NaN) do, return true?
Never! Would Object.eq(-0, 0) return false? There's no purpose for
this in the absence of evidence. I agree with you, compareTotal or
something like it (stringization followed by ===) is enough.
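A sketch of that coping strategy, assuming only that Decimal's toString preserves significance:

// Cohort members stringify differently ("1.0" vs "1.00"), so a
// string comparison distinguishes what == and === would equate.
function sameDecimalRep(a, b) {
  return String(a) === String(b);
}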
Brendan Eich wrote:
On Aug 24, 2008, at 9:34 AM, Sam Ruby wrote:
What should the result of the following expressions be:
1.5m == 1.5
1.5m === 1.5
typeof(1.5m)
A case could be made that binary64 and decimal128 are both numbers, and that a conversion is required.
For ==, yes. For ===, never!
A case could be made that while they are both numbers, the spec needs to retain the behavior that users have come to expect that conversions are not performed for strict equality.
We must not make strict equality intransitive or potentially side-effecting. Please tell me the case /should/ be made, not just could be made.
And finally, a case could be made that the result of typeof(1.5m) should be something different, probably "decimal", and that decimal128 values are never strictly equal to binary64 values, even when we are talking about simple integers.
I'm sympathetic to the "decimal128 values are never strictly equal to binary64 values" part of this case, but that's largely because I am not an advocate of === over == simply because == is not an equivalence relation. == is useful, in spite of its dirtiness.
Advocates of === for all use-cases, even those written by casual or naive JS programmers, are just setting those programmers up for confusion when === is too strict for the use-case at hand -- and they're setting up the TC39 committee to add three more equality-like operators so we catch up to Common Lisp :-(.
The place to hold this DWIM fuzzy line is at ==. Do not degrade ==='s strictness.
The typeof question should be separated. You could have typeof return "number" for a double or a decimal, but still keep === strict. I believe that would be strictly (heh) more likely to break existing code than changing typeof d to return "object" for Decimal d.
If we go with "object", a Decimal.parse method won't be strictly (heh-backatcha) necessary; a Decimal constructor would do. In fact, Decimal could also be called as a function. I like it when things become simpler.
My only remaining comment is that this might tip the scales as to whether or not 1.10m === 1.1m. They certainly are not the same object. But we previously agreed that this one could go either way.
I don't see why we would add a "decimal" result for typeof in the absence of a primitive decimal type. That too could break code that tries to handle all cases, where the "object" case would do fine with a Decimal instance. IIRC no one is proposing primitive decimal for ES3.1. All we have (please correct me if I'm wrong) is the capital-D Decimal object wrapper.
Guy Steele, during ES1 standardization, pointed out that some Lisps have five equality-like operators. This helped us swallow === and !== (and keep the == operator, which is not an equivalence relation).
Must we go to this well again, and with Object.eq (not an operator), all just to distinguish the significance carried along for toString purposes? Would it not be enough to let those who care force a string comparison?
I think that a "static" Decimal method (decNumber calls this compareTotal) would suffice. Others believe this needs to be generalized. I don't feel strongly on this, and am willing to go with the consensus.
Premature generalization without implementation and user experience is unwarranted. What would Object.eq(NaN, NaN) do, return true? Never! Would Object.eq(-0, 0) return false? There's no purpose for this in the absence of evidence. I agree with you, compareTotal or something like it (stringization followed by ===) is enough.
/be
- Sam Ruby
First, I'd like to apologize to Sam and the other IBM folks regarding moving the goal posts. While I have repeatedly stated that I would prefer decimal not to go in at all, in recent ES3.1 phone calls it seemed we were coming close enough to closure that I expressed something along the lines of "I wish we hadn't invested the time we have in decimal. But it seems like we're close enough to closure that we should just get it finished. Waiting isn't going to make this any easier." If we were as close to done as it seemed at the time, perhaps this would have been a reasonable sentiment. After Waldemar's recent posts and my reading up a bit more on decimal, I now realize I was wildly optimistic. To the degree that the exercise seems to have moved the goal posts on you, it may be largely my fault. I'm sorry for the confusion.
In any case, I'm glad we seem to be in all around agreement to pull decimal completely from 3.1. This allows us to settle all remaining decimal matters in a less hurried manner.
2008/8/24 Brendan Eich <brendan at mozilla.org>:
Premature generalization without implementation and user experience is unwarranted. What would Object.eq(NaN, NaN) do, return true? Never!
Object.eq(NaN, NaN) should indeed return true.
Would Object.eq(-0, 0) return false? There's no purpose for this in the absence of evidence.
Object.eq(-0, 0) should indeed return false. The evidence from other languages and from math itself is quite strong.
I agree with you, compareTotal or something like it (stringization followed by ===) is enough.
compareTotal is addressing a completely different need than Object.eq.
With decimal postponed, we don't need to settle this now. But JavaScript has no well behaved equality operator. As en.wikipedia.org/wiki/Equivalence_relation explains, for a
predicate P to be an equivalence relationship (and thereby to define equivalence classes), it should be
- reflexive: forall x: P(x,x)
- symmetric: forall x,y: P(x,y) implies P(y,x)
- transitive: forall x,y,z: P(x,y) & P(y,z) implies P(x,z)
However,
- == is not reflexive or transitive. In the Caja project, when we have occasionally violated our own style and used ==, we've always regretted it.
- === is not reflexive. Fortunately, there's only one value on which it isn't, NaN, so code picking off this special case can guard uses of === which is otherwise well behaved.
For code that picks off one additional special case, -0, remaining uses of === are even more well behaved. What's the definition of equality we all learned in grade school? "Substitution of equals for equals does not change the meaning of the expression." Leaving aside -0, for all other values in JavaScript today, if x === y, then x and y are computationally indistinguishable. If you snapshot a computation, substitute some occurrences of x for y, and resume, it would make no observable difference. I expect that Harmony will have some kind of identity-map (hashtable) abstraction, and probably some kind of identity-hashcode that goes along with it. What identity comparison should these be based on? Without a guarantee of indistinguishability, you can't use such maps for transparent memoizing caches or structure-sharing low-level serialization.
So today, if one wants a predicate that's both an equivalence relationship and guarantees computational indistinguishability, one can test for NaN and -0, and then use === for the remaining cases. (This is also the case for Java's "==".) But if we expect 1.0m !== 1.00m in Harmony, then this will break such code. The language itself should provide a strong equality operation that the language promises not to break in future upgrades.
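Such a predicate can be sketched in plain ES3 today, picking off the two exceptional cases (illustrative only, not a proposed API):

// eq(x, y): === made reflexive for NaN and able to tell -0 from 0.
function eq(x, y) {
  if (x === y) {
    // 0 === -0, so separate them: 1/0 is Infinity, 1/-0 is -Infinity.
    return x !== 0 || 1 / x === 1 / y;
  }
  // NaN is the only value for which x !== x, so two NaNs land here.
  return x !== x && y !== y;
}

eq(NaN, NaN)  // true
eq(-0, 0)     // false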
The most popular of Lisp's equality predicates, EQ, is indeed both an equivalence relationship and implies a guarantee of computational indistinguishability. It also has the longest history of robust use in the Lisp community.
On Sun, Aug 24, 2008 at 2:43 PM, Mark S. Miller <erights at google.com> wrote:
In any case, I'm glad we seem to be in all around agreement to pull decimal completely from 3.1.
I believe that's a bit of an overstatement.
- Sam Ruby
2008/8/24 Brendan Eich <brendan at mozilla.org>:
Premature generalization without implementation and user experience is unwarranted. What would Object.eq(NaN, NaN) do, return true? Never!
2008/8/24 Mark S. Miller <erights at google.com>:
Object.eq(NaN, NaN) should indeed return true.
/ snip /
With decimal postponed, we don't need to settle this now. But JavaScript has no well behaved equality operator. As en.wikipedia.org/wiki/Equivalence_relation explains, for a predicate P to be an equivalence relationship (and thereby to define equivalence classes), it should be
- reflexive: forall x: P(x,x)
- symmetric: forall x,y: P(x,y) implies P(y,x)
- transitive: forall x,y,z: P(x,y) & P(y,z) implies P(x,z)
However,
- == is not reflexive or transitive. In the Caja project, when we have occasionally violated our own style and used ==, we've always regretted it.
- === is not reflexive. Fortunately, there's only one value on which it isn't, NaN, so code picking off this special case can guard uses of === which is otherwise well behaved.
And I'd argue that you're wrong there. NaN isn't a single value. If you did === on the object that turned into NaN when converted into a number you'd get true, because that's one specific value you're comparing to itself. But NaN represents any arbitrary value in the infinite set of values that cannot be converted into numbers, not a specific value in that set. Which means that the likelihood of two NaNs representing the same value approaches zero and is statistically insignificant.
In other words, NaN should never equal NaN using any equality operator, unless you build your number system so that NaNs remember what specific value they were converted from and do an object comparison instead of number comparison for those. Which is not the case for ECMAScript.
On Sun, Aug 24, 2008 at 1:43 PM, liorean <liorean at gmail.com> wrote:
And I'd argue that you're wrong there. NaN isn't a single value.
Arithmetically, perhaps not. == and === already represent the arithmetic semantics of NaN and -0.
Computationally, NaNs are not observably distinguishable from one another in ES3 and, AFAIK, in 4/4 browsers. -0 and 0 are observably distinguishable in ES3 and in 4/4 browsers.
Finally, even if some NaNs are observably distinguishable from other NaNs (though this is disallowed by ES3 for EcmaScript code), that by itself doesn't justify:
const x = NaN;
x === x // yields false
Surely x holds whatever value x holds. How can this value not be the same as itself? For an arithmetic equality primitive, I accept that this peculiar non-reflexivity may be a desirable property; or at least that this is the mistake that has become the permanent standard among numerics. However, abstractions like memoizers need to reason about computational equivalence at the level of the programming language semantics. Let's not allow the distinct needs of reasoning about arithmetic to corrupt our ability to reason reliably about computation.
Object.eq(x, x) // yields true
since the value of x is whatever it is.
But NaN represents any arbitrary value in the infinite set of values that cannot be converted into numbers, not a specific value in that set.
Then whatever set of arithmetic values is represented by the computational value in x, x is a computational value that represents that set of arithmetic values.
In other words, NaN should never equal NaN using any equality operator, unless you build your number system so that NaNs remember what specific value they were converted from and do an object comparison instead of number comparison for those. Which is not the case for ECMAScript.
Object.eq() is not an operator of the number system. Between two numbers, == and === are already adequate for numeric "equality".
On Sun, Aug 24, 2008 at 1:43 PM, liorean <liorean at gmail.com> wrote:
And I'd argue that you're wrong there. NaN isn't a single value.
2008/8/25 Mark S. Miller <erights at google.com>:
Arithmetically, perhaps not. == and === already represent the arithmetic semantics of NaN and -0.
Computationally, NaNs are not observably distinguishable from one another in ES3 and, AFAIK, in 4/4 browsers. -0 and 0 are observably distinguishable in ES3 and in 4/4 browsers.
Not being distinguishable does not mean they are all equivalent. It just means that the information necessary to distinguish them is not available.
Finally, even if some NaNs are observably distinguishable from other NaNs (though this is disallowed by ES3 for EcmaScript code), that by itself doesn't justify:
const x = NaN; x === x // yields false
Surely x holds whatever value x holds. How can this value not be the same as itself?
As I said, x doesn't contain any specific value; it contains a symbolic value representing any arbitrary value from an infinite set. This means that the LHS NaN does not contain the same value as the RHS NaN; it contains a value that has a likelihood of 1/Infinity of being the same as the RHS NaN. Since a NaN does not remember which value it was converted from, and you're comparing values and not variables, it's infinitely improbable that they would refer to the same value. So no, there is no situation where two NaNs should compare as equal without additional data added that is not present.
Object.eq(x, x) // yields true
since the value of x is whatever it is.
Object.eq will be sent two NaN values as arguments, not two x variables. The likelihood that they represent the same value is still 1/Infinity.
But NaN represents any arbitrary value in the infinite set of values that cannot be converted into numbers, not a specific value in that set.
Then whatever set of arithmetic values is represented by the computational value in x, x is a computational value that represents that set of arithmetic values.
It does not represent the whole set. It represents a single arbitrary value from the set, and the NaN has no further data on which of them. This means that there's no case where you could actually confirm that two NaNs represent the same value without either:
- having the knowledge of which value was converted to a number,
- or knowing that the two NaNs came from the exact same variable/property and that the variable/property was not changed in between.
ECMAScript would provide no data about either to the Object.eq function, so it could under no circumstance know that they are in fact representing the same value.
In other words, NaN should never equal NaN using any equality operator, unless you build your number system so that NaNs remember what specific value they were converted from and do an object comparison instead of number comparison for those. Which is not the case for ECMAScript.
Object.eq() is not an operator of the number system. Between two numbers, == and === are already adequate for numeric "equality".
It's an operator on the value system, though. And NaN represents values, but it can't know which values, and without any further information about the NaNs you still have to assume they are infinitely improbable to represent the same value.
On Sun, Aug 24, 2008 at 4:10 PM, liorean <liorean at gmail.com> wrote:
const x = NaN; x === x // yields false
Surely x holds whatever value x holds. How can this value not be the same as itself?
As I said, X doesn't contain any specific value, it contains a symbolic value representing any arbitrary value from an infinite set.
I am talking about that symbolic value. I am not talking about the possible numeric values it represents, nor about probability distributions. If probability distributions were relevant, would it be acceptable for the above comparison to occasionally yield true, so long as it happens with vanishingly low probability?
Section 8.5 of ES3 states things rather clearly:
The Number Type
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct "Not-a-Number" values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN, assuming that the globally defined variable NaN has not been altered by program execution.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. [emphasis added]
On Aug 24, 2008, at 11:43 AM, Mark S. Miller wrote:
I agree with you, compareTotal or something like it (stringization followed by ===) is enough.
compareTotal is addressing a completely different need than Object.eq.
Why is eq being proposed whatever happens with Decimal?
BTW, eq is traditional in Lisps but way too short and overloaded in
JS. Suggest Object.identical or something like that. If the topic is
Object.hashcode, then this makes sense. I don't see why Decimal
caused eq to be brought up, though.
With decimal postponed, we don't need to settle this now. But JavaScript has no well behaved equality operator. As en.wikipedia.org/wiki/Equivalence_relation explains,
We've been over this before, === is an equivalence relation excluding
NaN. It happens to put 0 and -0 in the same equivalence class. Why
this is a problem has yet to be demonstrated (add hashcode and then
we can talk ;-).
On Sun, Aug 24, 2008 at 5:40 PM, Brendan Eich <brendan at mozilla.org> wrote:
Why is eq being proposed whatever happens with Decimal?
Because in the absence of decimal, === is close enough to an "identical" test that one can practically code around the problem cases (NaN and -0)
We've been over this before, === is an equivalence relation excluding NaN. It happens to put 0 and -0 in the same equivalence class. Why this is a problem has yet to be demonstrated (add hashcode and then we can talk ;-).
Until we add either hashcode or decimal, we can postpone the whole topic.
On Aug 24, 2008, at 6:51 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 5:40 PM, Brendan Eich <brendan at mozilla.org> wrote:
Why is eq being proposed whatever happens with Decimal?
Because in the absence of decimal, === is close enough to an "identical" test that one can practically code around the problem cases (NaN and -0)
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see
exchange with Sam, who may agree).
We've been over this before, === is an equivalence relation excluding NaN. It happens to put 0 and -0 in the same equivalence class. Why this is a problem has yet to be demonstrated (add hashcode and then we can talk ;-).
Until we add either hashcode or decimal, we can postpone the whole topic.
I'm trying to avoid too much creature feep in any edition ;-). Other
languages have near-universally-quantified hashcode analogues but
solve NaN dilemmas by throwing (Python, IIRC). We can enlarge the
language to be strictly more expressive, or we could enlarge
equivalence classes under ===, or we could choose to make certain
seemingly equal numbers !==. The rationale for excluding the last two
alternatives when mooting Object.eq is not clear.
On Sun, Aug 24, 2008 at 10:01 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 6:51 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 5:40 PM, Brendan Eich <brendan at mozilla.org> wrote:
Why is eq being proposed whatever happens with Decimal?
Because in the absence of decimal, === is close enough to an "identical" test that one can practically code around the problem cases (NaN and -0)
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see exchange with Sam, who may agree).
I agree that you are disputing that. ;-P
Less than 10 hours ago, ISTR you were suggesting the opposite. "there's a weak case for false"...
On Thursday, ISTR Mark arguing that "correspond to the same point on the real number line" was close enough to an "identical" test.
I think I need to sit down now, I'm getting dizzy. :-)
- Sam Ruby
On Sun, Aug 24, 2008 at 7:01 PM, Brendan Eich <brendan at mozilla.org> wrote:
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see exchange with Sam, who may agree).
Indeed. One of the ES3.1 phone calls characterized the two early positions as
- The typeof school: ES3 currently has the regularity that (typeof x == typeof y && x == y) iff x === y. Let's not break that regularity. It's a good idea, and existing code may count on it.
- The eq school: ES3 currently has these regularities: except for NaN, === is an equivalence relation; for all x,y except for -0, x === y implies x is observably indistinguishable from y. Let's not break those regularities. It's a good idea, and existing code may count on it.
If === remains as good an indistinguishability test as it is now -- with only the above two exceptions -- then I'd agree we don't need Object.eq. Libraries could provide it without undue burden. However, assuming you agree that 1.0m == 1.00m, then you'd break the regularity in the first bullet. Were that the case, I would soften my normal avoidance of == to "When you know you've got two numbers (e.g. after typeof checks) and you want numeric equality, use ==. Otherwise, always use ===." A problem with this softer stance is it makes it much harder for JSLint-like tools to warn about uses of == that should be avoided. Nevertheless, I would be happy with this resolution.
I'm trying to avoid too much creature feep in any edition ;-). Other languages have near-universally-quantified hashcode analogues but solve NaN dilemmas by throwing (Python, IIRC).
FWIW, E has exactly two equality operators:
- x == y implies observable equivalence, but operates more like Henry Baker's EGAL operator than like Lisp's EQ.
  NaN == NaN   // yields true
  -0.0 == 0.0  // yields false
- x <=> y means "According to x, x and y are the same magnitude".
  NaN <=> NaN   // yields false
  -0.0 <=> 0.0  // yields true
We can enlarge the language to be strictly more expressive, or we could enlarge equivalence classes under ===, or we could choose to make certain seemingly equal numbers !==. The rationale for excluding the last two alternatives when mooting Object.eq is not clear.
Re second alternative: don't you mean "shrink equivalence classes", ideally to singletons, ignoring {-0, 0}?
On Aug 24, 2008, at 7:38 PM, Sam Ruby wrote:
On Sun, Aug 24, 2008 at 10:01 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 6:51 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 5:40 PM, Brendan Eich <brendan at mozilla.org> wrote:
Why is eq being proposed whatever happens with Decimal?
Because in the absence of decimal, === is close enough to an "identical" test that one can practically code around the problem cases (NaN and -0)
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see exchange with Sam, who may agree).
I agree that you are disputing that. ;-P
Less than 10 hours ago, ISTR you were suggesting the opposite. "there's a weak case for false"...
I acknowledged that and then went on to shoot it down, you're right.
But really, I'm wondering why it is important to have Object.eq if
1.0m === 1.00m. Without Object.hashcode. The weak case I mentioned
would not need Object.eq. The strong case I favored left users
reaching for compareTotal or stringification and ===. Either way I do
not see the necessity of eq.
Sorry, I misstated your position (I think!). What's your position,
for sure?
On Thursday, ISTR Mark arguing that "correspond to the same point on the real number line" was close enough to an "identical" test.
I think I need to sit down now, I'm getting dizzy. :-)
The way to settle this is to give rationales for all the moving
parts, so we can pin some down and make them stop moving.
On Sun, Aug 24, 2008 at 7:38 PM, Sam Ruby <rubys at intertwingly.net> wrote:
On Thursday, ISTR Mark arguing that "correspond to the same point on the real number line" was close enough to an "identical" test.
No, I was saying the opposite! If == or === is to mean numeric equivalence, then it should say whether two "numbers" represent the exact same point on the real number line. I would also recommend 1.0 == 1.0m be true, though I understand (better than I did on Thursday) that accurate mixed-mode magnitude comparison is expensive.
But my main point was that, unlike binary, numeric equivalence in decimal is no longer similar to observational equivalence. If we're going to break the utility of === as an almost perfect test of observational equivalence, then we need something to take its place. This begat Object.eq.
I think I need to sit down now, I'm getting dizzy. :-)
Likewise ;)
On Aug 24, 2008, at 7:41 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 7:01 PM, Brendan Eich <brendan at mozilla.org> wrote:
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see exchange with Sam, who may agree).
Indeed. One of the ES3.1 phone calls characterized the two early
positions as
- The typeof school: ES3 currently has the regularity that (typeof x == typeof y && x == y) iff x === y. Let's not break that regularity. It's a good idea, and existing code may count on it.
Right.
- The eq school: ES3 currently has these regularities: except for NaN, === is an equivalence relation; for all x,y except for -0, x === y implies x is observably indistinguishable from y. Let's not break those regularities. It's a good idea, and existing code may count on it.
Yup.
If === remains as good an indistinguishability test as it is now -- with only the above two exceptions -- then I'd agree we don't need Object.eq. Libraries could provide it without undue burden. However, assuming you agree that 1.0m == 1.00m, then you'd break the regularity in the first bullet.
Does that not depend on what typeof 1.0m returns?
I'm trying to avoid too much creature feep in any edition ;-). Other languages have near-universally-quantified hashcode analogues but solve NaN dilemmas by throwing (Python, IIRC).
FWIW, E has exactly two equality operators:
- x == y implies observable equivalence, but operates more like Henry Baker's EGAL operator than like Lisp's EQ.
  NaN == NaN   // yields true
  -0.0 == 0.0  // yields false
- x <=> y means "According to x, x and y are the same magnitude".
  NaN <=> NaN   // yields false
  -0.0 <=> 0.0  // yields true
Nice. No Decimal in E, though, with preserved significance....
We can enlarge the language to be strictly more expressive, or we could enlarge equivalence classes under ===, or we could choose to make certain seemingly equal numbers !==. The rationale for excluding the last two alternatives when mooting Object.eq is not clear.
Re second alternative: don't you mean "shrink equivalence classes", ideally to singletons, ignoring {-0, 0}?
No, that would be the third alternative, where 1.0m !== 1.00m. I mean
enlarge as in 1.0m === 1.00m means that {1.0m, 1.00m, ...} is an
equivalence class. This is the current ES3.1 draft spec's position
(hope I have that right!). IOW, each infamous cohort from P754
becomes an equivalence class.
It seems to me that this second way was conflated with adding
Object.eq, when it's not strictly necessary to add Object.eq if those
who really need to distinguish significance of two otherwise equal
decimals could reach for compareTotal or stringification and == or
=== and cope.
On Sun, Aug 24, 2008 at 7:44 PM, Brendan Eich <brendan at mozilla.org> wrote:
[...] But really, I'm wondering why it is important to have Object.eq if 1.0m === 1.00m. Without Object.hashcode.
I would hope Harmony would either have some kind of identity hashcode or some kind of built-in identity hashtable abstraction. Given a function F that one knows is functional, it should be possible to write memoize such that memoize(F) acts like F, trading space for time. Without Object.eq or the equivalent, how does memoize tell whether it's got a cache hit or miss?
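One shape such a memoize might take, sketched with the eq predicate from earlier (a linear cache for brevity; a real one wants the identity hashcode):

function memoize(f) {
  var keys = [], values = [];
  function eq(x, y) {
    if (x === y) return x !== 0 || 1 / x === 1 / y;  // -0 vs 0
    return x !== x && y !== y;                       // NaN case
  }
  return function (x) {
    for (var i = 0; i < keys.length; i++) {
      if (eq(keys[i], x)) return values[i];  // cache hit
    }
    var result = f(x);                       // cache miss: compute and remember
    keys.push(x);
    values.push(result);
    return result;
  };
}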
On Sun, Aug 24, 2008 at 10:44 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 7:38 PM, Sam Ruby wrote:
On Sun, Aug 24, 2008 at 10:01 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 6:51 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 5:40 PM, Brendan Eich <brendan at mozilla.org> wrote:
Why is eq being proposed whatever happens with Decimal?
Because in the absence of decimal, === is close enough to an "identical" test that one can practically code around the problem cases (NaN and -0)
You're assuming 1.0m === 1.00m must be true. I'm disputing that (see exchange with Sam, who may agree).
I agree that you are disputing that. ;-P
Less than 10 hours ago, ISTR you were suggesting the opposite. "there's a weak case for false"...
I acknowledged that and then went on to shoot it down, you're right. But really, I'm wondering why it is important to have Object.eq if 1.0m === 1.00m. Without Object.hashcode. The weak case I mentioned would not need Object.eq. The strong case I favored left users reaching for compareTotal or stringification and ===. Either way I do not see the necessity of eq.
Sorry, I misstated your position (I think!). What's your position, for sure?
You didn't mistake my position. Here's what I think (for the moment, at least):
If there were an Object.eq method, then 1.1m and 1.10m should be considered different by such a function.
I don't believe that decimal, by itself, justifies the addition of an Object.eq method. Even if we were to go with 1.0m == 1.00m.
As to what the value of 1.0m == 1.00m should be, the amount of code and the amount of spec writing effort is the same either way. I can see arguments both ways. But if it were up to me, the tiebreaker would be what the value of typeof(1.1m) is. If "number", the scale tips slightly towards the answer being false. If "object", then the scale is firmly on the side of the answer being true.
All things considered, I would argue for false. I just wouldn't dig in my heels while doing so.
- Sam Ruby
On Sun, Aug 24, 2008 at 7:54 PM, Brendan Eich <brendan at mozilla.org> wrote:
If === remains as good an indistinguishability test as it is now -- with only the above two exceptions -- then I'd agree we don't need Object.eq. Libraries could provide it without undue burden. However, assuming you agree that 1.0m == 1.00m, then you'd break the regularity in the first bullet.
Does that not depend on what typeof 1.0m returns?
Huh? As long as typeof 1.0m == typeof 1.00m, then you'd still break the first bullet.
I'm trying to avoid too much creature feep in any edition ;-). Other languages have near-universally-quantified hashcode analogues but solve NaN dilemmas by throwing (Python, IIRC).
FWIW, E has exactly two equality operators:
- x == y implies observable equivalence, but operates more like Henry Baker's EGAL operator than like Lisp's EQ. NaN == NaN // yields true -0.0 == 0.0 // yields false
- x <=> y means "According to x, x and y are the same magnitude". NaN <=> NaN // yields false -0.0 <=> 0.0 // yields true
Nice. No Decimal in E, though, with preserved significance....
Someone could add a decimal library to E without special dispensation from the language designers. If one did, then I'd recommend they do it such that
m`1.0` == m`1.00` // yields false
m`1.0` <=> m`1.00` // yields true
(The backquote syntax is E's way of embedding DSLs. The identifier to the left of the backquote is mangled into a quasi-parser name, to parse and evaluate the string between the quotes. google-caja.googlecode.com/svn/changes/mikesamuel/string-interpolation-29-Jan-2008/trunk/src/js/com/google/caja/interp/index.html
brings some similar ideas to JavaScript.)
It seems to me that this second way was conflated with adding Object.eq, when it's not strictly necessary to add Object.eq if those who really need to distinguish significance of two otherwise equal decimals could reach for compareTotal or stringification and == or === and cope.
Object.eq isn't about numbers or strings, so that's not an adequate coping strategy.
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
All things considered, I would argue for false.
I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and 1.0m > 1.00m?
On Sun, Aug 24, 2008 at 11:15 PM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
All things considered, I would argue for false.
I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and 1.0m > 1.00m?
1.0m == 1.00m should be true.
But to answer your question, IEEE 754 does define a "total" ordering. One can even test it using the implementation I posted a few weeks back.
js> Decimal.compareTotal(1.0m, 1.00m)
1
js> Decimal.compareTotal(1.00m, 1.0m)
-1
"This function compares two numbers using the IEEE 754 total ordering. If the lhs is less than the rhs in the total order then the number will be set to the value -1. If they are equal, then number is set to 0. If the lhs is greater than the rhs then the number will be set to the value 1.
The total order differs from the numerical comparison in that: –NaN < –sNaN < –Infinity < –finites < –0 < +0 < +finites < +Infinity < +sNaN < +NaN. Also, 1.000 < 1.0 (etc.) and NaNs are ordered by payload."
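By way of illustration, the total order yields a deterministic comparator (a sketch against Sam's experimental branch; neither the m literals nor Decimal.compareTotal is standard anywhere yet):

// The total order puts more-precise cohort members first: 1.00 < 1.0.
[1.0m, 0m, 1.00m].sort(Decimal.compareTotal)
// yields [0m, 1.00m, 1.0m]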
-- Cheers, --MarkM
- Sam Ruby
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
As to what the value of 1.0m == 1.00m should be, the amount of code and the amount of spec writing effort is the same either way. I can see arguments both ways. But if it were up to me, the tiebreaker would be what the value of typeof(1.1m) is. If "number", the scale tips slightly towards the answer being false. If "object", then the scale is firmly on the side of the answer being true.
All things considered, I would argue for false.
On Sun, Aug 24, 2008 at 8:40 PM, Sam Ruby <rubys at intertwingly.net> wrote:
On Sun, Aug 24, 2008 at 11:15 PM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
All things considered, I would argue for false.
I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and 1.0m > 1.00m?
1.0m == 1.00m should be true.
I'm confused. All things considered, what do you think 1.0m == 1.00m should be?
On Aug 24, 2008, at 7:57 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 7:44 PM, Brendan Eich <brendan at mozilla.org> wrote:
[...] But really, I'm wondering why it is important to have Object.eq if 1.0m === 1.00m. Without Object.hashcode.
I would hope Harmony would either have some kind of identity hashcode or some kind of built-in identity hashtable abstraction.
Absolutely.
Given a function F that one knows is functional, it should be possible to write memoize such that memoize(F) acts like F, trading space for time. Without Object.eq or the equivalent, how does memoize tell whether it's got a cache hit or miss?
We had this in the ES4 RI, implemented in ES4 (builtins/Map.es). The
Map class takes type parameters K and V and has a constructor:
function Map(equals = (function (x,y) x === y),
hashcode = intrinsic::hashcode)
NaNs all hash to code 0 (any constant would do, 0 was picked for
simplicity and interoperation). You have to avoid mapping NaN or else
provide a non-default equals function that tests isNaN() or x !== x.
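For instance, in the RI's flavor (a sketch only; the put/get method names are assumptions about builtins/Map.es, not verified API):

// Supply a non-default equals so NaN can serve as a Map key:
var m = new Map(function (x, y) { return x === y || (x !== x && y !== y); });
m.put(NaN, "cached");
m.get(NaN)  // "cached" under the supplied equals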
Ignoring the ES4 details (type parameters, default arguments),
there's nothing here that adds a new equality operator to the
language as a whole. Anything like Map in Harmony, sans distinct type
parameters (any such would be value parameters), could do likewise.
No need for an Object.eq analogue.
On Sun, Aug 24, 2008 at 9:48 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 7:57 PM, Mark S. Miller wrote:
Given a function F that one knows is functional, it should be possible to write memoize such that memoize(F) acts like F, trading space for time. Without Object.eq or the equivalent, how does memoize tell whether it's got a cache hit or miss?
We had this in the ES4 RI, implemented in ES4 (builtins/Map.es). The Map class takes type parameters K and V and has a constructor:
function Map(equals = (function (x,y) x === y), hashcode = intrinsic::hashcode)
NaNs all hash to code 0 (any constant would do, 0 was picked for simplicity and interoperation). You have to avoid mapping NaN or else provide a non-default equals function that tests isNaN() or x !== x.
Let's say you did that -- make a special case for NaN but not for -0. Let's say you use this Map to build memoize. Now let's say someone writes a purely functional function F such that F(0) is 3 and F(-0) is 7. Let's say G is memoize(F). Would you find it acceptable for G(0) to sometimes yield 3 and sometimes 7?
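Concretely, such an F is easy to write (the 1/x test is one way to spot -0; F is of course contrived):

function F(x) {
  if (x === 0 && 1 / x < 0) return 7;  // -0
  return 3;                            // everything else, including +0
}
// A memoizer keyed on === alone may cache F(0) and later serve that
// entry for G(-0), or vice versa.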
Ignoring the ES4 details (type parameters, default arguments), there's nothing here that adds a new equality operator to the language as a whole. Anything like Map in Harmony, sans distinct type parameters (any such would be value parameters), could do likewise. No need for an Object.eq analogue.
Once you fix your Map defaults to distinguish 0 and -0, what you've done here is implement Object.eq easily using ===. Good. So long as that's practical, I agree we don't need to codify a built-in Object.eq. But if the introduction of decimal means that it is no longer practical to build Object.eq from ===, what then? Perhaps we can avoid facing that dilemma?
On Aug 24, 2008, at 8:09 PM, Sam Ruby wrote:
If there were an Object.eq method, then 1.1m and 1.10m should be considered different by such a function.
I don't believe that decimal, by itself, justifies the addition of an Object.eq method. Even if we were to go with 1.0m == 1.00m.
Good, that's my position too!
Whew. Sorry for the confusion, too much after-hours replying instead
of sleeping on my part.
As to what the value of 1.0m == 1.00m should be, the amount of code and the amount of spec writing effort is the same either way. I can see arguments both ways. But if it were up to me, the tiebreaker would be what the value of typeof(1.1m) is. If "number", the scale tips slightly towards the answer being false. If "object", then the scale is firmly on the side of the answer being true.
Ah, typeof. No good can come of making typeof 1.0m == "number", but
there is more room for non-singleton equivalence classes in that
choice. We have -0 == 0 already. Making cohorts equivalence classes
under == seems both more usable and more compatible if the operands
have "number" type(of). If they're "object" then we have no
precedent: o == p for two object references o and p is an identity test.
In spite of this lack of precedent, I believe we are free to make
1.0m == 1.00m if typeof 1.0m == "object".
But what should typeof 1.0m evaluate to, anyway? I don't believe
"number" is right, since 0.1 == 0.1m won't be true. Is anyone
seriously proposing typeof 1.0m == "number"?
If Decimal is an object type, then typeof 1.0m == "object" is good
for a couple of reasons:
- Future-proof in case we do add a primitive decimal type, as ES4 proposed -- a peer of double that shares Number.prototype; typeof on a decimal would return "number". See below for the possibly-bogus flip side.
- Analogous to RegExp, which has literal syntax but is an object (RegExp is worse because of mutable state; Decimal presumably would have immutable instances -- please confirm!).
Making typeof 1.0m == "object" could be future-hostile if we ever
wanted to add a decimal primitive type, though. We're stuck if we
treat literals as objects and tell that truth via typeof. We can't
make them be "number"s some day if they are objects in a nearer
edition, with mutable prototype distinct from Number.prototype, etc.
At least not in the ES4 model, which has had some non-trivial thought
and RI work put into it.
Probably at this point, any future number types will have to be
distinct "object" typeof-types, with magic (built-in, hardcoded) or
(generic/multimethod) non-magic operator support. That may be ok
after all. We never quite got the ES4 model whereby several
primitives (at one point, byte, int, uint, double, and decimal) could
all be peer Object subtypes (non-nullable value types, final classes)
that shared Number.prototype. We cut byte, int, and uint soon enough,
but problems remained.
I can't find any changes to 11.4.3 "The typeof Operator" in ES3.1
drafts. Am I right to conclude that typeof 1.0m == "object"? Sorry if
I'm beating a dead horse. Just want it to stay dead, if it is dead ;-).
All things considered, I would argue for false. I just wouldn't dig in my heels while doing so.
I was playing down the importance of this design decision to
highlight the separable and questionable addition of Object.eq, but I
do think it's important to get == right for the case of both operands
of Decimal type. I'm still sticking to my 1.0m == 1.00m story, while
acknowledging the trade-offs. No free lunch.
On Aug 24, 2008, at 8:10 PM, Mark S. Miller wrote:
On Sun, Aug 24, 2008 at 7:54 PM, Brendan Eich <brendan at mozilla.org> wrote:
If === remains as good an indistinguishability test as it is now -- with only the above two exceptions -- then I'd agree we don't need Object.eq. Libraries could provide it without undue burden. However, assuming you agree that 1.0m == 1.00m, then you'd break the regularity in the first bullet.
Does that not depend on what typeof 1.0m returns?
Huh? As long as typeof 1.0m == typeof 1.00m, then you'd still break the first bullet.
First bullet:
(typeof x == typeof y && x == y) iff x === y
I'm suggesting that we could specify that 1.0m === 1.00m, which of course implies 1.0m == 1.00m, and vice versa.
Yes, it's independent of what typeof 1.0m returns. My point about
typeof was the one I just made to Sam: that since -0 == 0, we have
wiggle room that does not yet exist for object types. But I think we
can wiggle our way to === equating distinct object identities
(cohorts in P754 terms), just as we do for the numbers -0 and 0.
Indistinguishability is already "broken" for -0 and 0 (you can use
Math.atan2 to discern) and NaN (irreflexive). What's the big deal if
we go further? I smell purity trumping usability. Again I do not see
a compelling case for adding eq, solely based on this Decimal conundrum.
It seems to me I've seen this kind of arguing before. "Pay no
attention to the impurity behind the curtain, or in front of it!
Assume purity and use it to justify further generalizations." I'm not
in favor without more real-world evidence. Impurity of JS functions
as imperfect lambdas, or of == among like types as less than the
ideal indistinguishability equivalence relation, matters.
It seems to me that this second way was conflated with adding Object.eq, when it's not strictly necessary to add Object.eq if those who really need to distinguish significance of two otherwise equal decimals could reach for compareTotal or stringification and == or === and cope.
Object.eq isn't about numbers or strings, so that's not an adequate coping strategy.
That's circular. The coping strategy is for Decimal users who really
want to distinguish inside a cohort. It's not (yet, or universally)
necessary to have Object.eq for all values. The hashtable idea is
good but it just needs funargs with sane defaults. There will
probably be many such collection implementations.
On Aug 24, 2008, at 10:02 PM, Mark S. Miller wrote:
Let's say you did that -- make a special case for NaN but not for -0. Let's say you use this Map to build memoize. Now let's say someone writes a purely functional function F such that F(0) is 3 and F(-0) is 7. Let's say G is memoize(F). Would you find it acceptable for G(0) to sometimes yield 3 and sometimes 7?
I'd file a bug, or find a better memoizer :-). "Quality of
implementation" eager beavers can use Math.atan2 to tell -0 from 0,
just as they can use isNaN or x !== x.
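(The tricks in question, for the record:)

Math.atan2(0, -0)  // Math.PI, whereas Math.atan2(0, 0) is 0
isNaN(v)           // true for NaN (and anything that coerces to NaN)
v !== v            // true exactly when v is NaN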
Yes, this is gross. I'm in favor of Object.identical and
Object.hashcode, maybe even in ES3.1 (I should get my act together
and help spec 'em). Just not particularly on account of Decimal, even
with equated cohort members. I still agree with Sam. And as
always,"hard cases make bad law".
On Mon, Aug 25, 2008 at 12:47 AM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
As to what the value of 1.0m == 1.00m should be, the amount of code and the amount of spec writing effort is the same either way. I can see arguments both ways. But if it were up to me, the tiebreaker would be what the value of typeof(1.1m) is. If "number", the scale tips slightly towards the answer being false. If "object", then the scale is firmly on the side of the answer being true.
All things considered, I would argue for false.
Typo in the above: I meant ===
On Sun, Aug 24, 2008 at 8:40 PM, Sam Ruby <rubys at intertwingly.net> wrote:
On Sun, Aug 24, 2008 at 11:15 PM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 8:09 PM, Sam Ruby <rubys at intertwingly.net> wrote:
All things considered, I would argue for false.
I'm curious. If 1.0m == 1.00m were false, what about 1.0m < 1.00m and 1.0m > 1.00m?
1.0m == 1.00m should be true.
I'm confused. All things considered, what do you think 1.0m == 1.00m should be?
true
-- Cheers, --MarkM
- Sam Ruby
On Mon, Aug 25, 2008 at 1:44 AM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 24, 2008, at 8:09 PM, Sam Ruby wrote:
If there were an Object.eq method, then 1.1m and 1.10m should be considered different by such a function.
I don't believe that decimal, by itself, justifies the addition of an Object.eq method. Even if we were to go with 1.0m == 1.00m.
Good, that's my position too!
Whew. Sorry for the confusion, too much after-hours replying instead of sleeping on my part.
As to what the value of 1.0m == 1.00m should be, the amount of code and the amount of spec writing effort is the same either way. I can see arguments both ways. But if it were up to me, the tiebreaker would be what the value of typeof(1.1m) is. If "number", the scale tips slightly towards the answer being false. If "object", then the scale is firmly on the side of the answer being true.
Ah, typeof. No good can come of making typeof 1.0m == "number", but there is more room for non-singleton equivalence classes in that choice. We have -0 == 0 already. Making cohorts equivalence classes under == seems both more usable and more compatible if the operands have "number" type(of). If they're "object" then we have no precedent: o == p for two object references o and p is an identity test.
In spite of this lack of precedent, I believe we are free to make 1.0m == 1.00m if typeof 1.0m == "object".
Note: that was a typo on my part. We agree here.
But what should typeof 1.0m evaluate to, anyway? I don't believe "number" is right, since 0.1 == 0.1m won't be true. Is anyone seriously proposing typeof 1.0m == "number"?
My working assumption coming out of the SF/Adobe meeting was that in ES4 there would be a primitive decimal, which could be wrapped by the Number class, which is much in line with what you mentioned below. Based on your recent input, I now question the need to provide a decimal primitive ever.
Note: here I'm talking about the conceptual model that the language exposes. Doubles are GC'ed in SpiderMonkey, as would be decimals.
If Decimal is an object type, then typeof 1.0m == "object" is good for a couple of reasons:
- Future-proof in case we do add a primitive decimal type, as ES4 proposed -- a peer of double that shares Number.prototype; typeof on a decimal would return "number". See below for the possibly-bogus flip side.
What would be the upside to such an approach? I can see the next-edition-of-ES-that-provides-decimal (my working assumption still is "3.1", whatever that may be called; others may be understandably skeptical) only providing a Decimal object, and with that addition the language with respect to decimal being considered a steady state that need not be revisited in subsequent editions.
- Analogous to RegExp, which has literal syntax but is an object (RegExp is worse because of mutable state; Decimal presumably would have immutable instances -- please confirm!).
I'd prefer if Decimal instances in ES were considered immutable and automatically "interned". By the latter, I simply mean that new Decimal("1.0") === new Decimal("1.0").
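One way to sketch that interning, keyed on the significance-preserving string form (illustrative only; a real implementation would intern internally rather than expose a factory):

var internTable = {};
function internDecimal(s) {             // hypothetical factory
  var key = new Decimal(s).toString();  // "1.0" and "1.00" stay distinct
  if (!(key in internTable)) internTable[key] = new Decimal(key);
  return internTable[key];
}
// internDecimal("1.0") === internDecimal("1.0")  // true: same object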
Making typeof 1.0m == "object" could be future-hostile if we ever wanted to add a decimal primitive type, though. We're stuck if we treat literals as objects and tell that truth via typeof. We can't make them be "number"s some day if they are objects in a nearer edition, with mutable prototype distinct from Number.prototype, etc. At least not in the ES4 model, which has had some non-trivial thought and RI work put into it.
My assumption prior to the few days of discussion was that 1.0m was a primitive. Based on these discussions, making it be an object makes sense to me.
Probably at this point, any future number types will have to be distinct "object" typeof-types, with magic (built-in, hardcoded) or (generic/multimethod) non-magic operator support. That may be ok after all. We never quite got the ES4 model whereby several primitives (at one point, byte, int, uint, double, and decimal) could all be peer Object subtypes (non-nullable value types, final classes) that shared Number.prototype. We cut byte, int, and uint soon enough, but problems remained.
Agreed, that may be ok after all.
I can't find any changes to 11.4.3 "The typeof Operator" in ES3.1 drafts. Am I right to conclude that typeof 1.0m == "object"? Sorry if I'm beating a dead horse. Just want it to stay dead, if it is dead ;-).
That was previously still on my list of things to do. Now it appears that it is one less thing to do. :-)
All things considered, I would argue for false. I just wouldn't dig in my heels while doing so.
I was playing down the importance of this design decision to highlight the separable and questionable addition of Object.eq, but I do think it's important to get == right for the case of both operands of Decimal type. I'm still sticking to my 1.0m == 1.00m story, while acknowledging the trade-offs. No free lunch.
Again, sorry for the typo. I hope nobody is seriously considering having the following be false, no matter what the value of typeof(1.0m) is:
1.0m == 1.00m
/be
- Sam Ruby
On Sun, Aug 24, 2008 at 11:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
Yes, this is gross. I'm in favor of Object.identical and Object.hashcode,
I don't care if Object.eq is named Object.identical. Other than spelling, does your Object.identical differ from Object.eq? If not, then I think we're in agreement.
maybe even in ES3.1 (I should get my act together and help spec 'em). Just not particularly on account of Decimal, even with equated cohort members. I still agree with Sam. And as always, "hard cases make bad law".
What is it you and Sam are agreeing about? I lost track.
I like the point about bad law. Numerics are definitely a hard case.
On Mon, Aug 25, 2008 at 9:45 AM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 11:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
Yes, this is gross. I'm in favor of Object.identical and Object.hashcode,
I don't care if Object.eq is named Object.identical. Other than spelling, does your Object.identical differ from Object.eq? If not, then I think we're in agreement.
maybe even in ES3.1 (I should get my act together and help spec 'em). Just not particularly on account of Decimal, even with equated cohort members. I still agree with Sam. And as always, "hard cases make bad law".
What is it you and Sam are agreeing about? I lost track.
I like the point about bad law. Numerics are definitely a hard case.
Give me a day or so, and I'll post a typo-free transcript based on running code, and people can identify specific results that they take issue with, and/or more expressions that they would like to see in the results.
- Sam Ruby
On 25 Aug 2008, at 12:29, Sam Ruby wrote:
the next-edition-of-ES-that-provides-decimal (my working assumption still is "3.1" whatever that may be called, others may be understandably skeptical)
Clearly that version should be called "3.1m"!
drj
On Aug 25, 2008, at 4:29 AM, Sam Ruby wrote:
If Decimal is an object type, then typeof 1.0m == "object" is good for a couple of reasons:
- Future-proof in case we do add a primitive decimal type, as ES4 proposed -- a peer of double that shares Number.prototype; typeof on a decimal would return "number". See below for the possibly-bogus flip side.
What would be the upside to such an approach? I can see the next-edition-of-ES-that-provides-decimal (my working assumption still is "3.1", whatever that may be called; others may be understandably skeptical) only providing a Decimal object, and with that addition the language with respect to decimal being considered a steady state that need not be revisited in subsequent editions.
Yes, I was being a little too generous with the possibility of a
primitive making a come-back. If we evolve by adding Decimal as an
"object" typeof-type, that's it. Probably better this way in the long
run, the operator extensions should inform our evolving thinking on
generic arithmetic and multimethods.
- Analogous to RegExp, which has literal syntax but is an object (RegExp is worse because of mutable state; Decimal presumably would have immutable instances -- please confirm!).
I'd prefer if Decimal instances in ES were considered immutable and automatically "interned". By the latter, I simply mean that new Decimal("1.0") === new Decimal("1.0").
The spec should say this.
Decimal.prototype would be extensible and not sealed, however, just
as other standard constructor prototype objects are. A bit of a
challenge in the spec, and in implementations that make an
unconstructed primordial instance be the class prototype, but not
insuperable (hand-wave alert).
On Aug 25, 2008, at 6:45 AM, Mark S. Miller wrote:
maybe even in ES3.1 (I should get my act together and help spec 'em).
(I will help on identical/hashcode, btw -- think we're agreeing
vehemently ;-).)
Just not particularly on account of Decimal, even with equated cohort members. I still agree with Sam. And as always, "hard cases make bad law".
What is it you and Sam are agreeing about? I lost track.
That if we make cohort members == and ===, telling anyone who still
wants to distinguish 1.0m from 1.00m to use compareTotal is "good
enough".
On Mon, Aug 25, 2008 at 1:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
(I will help on identical/hashcode, btw -- think we're agreeing vehemently ;-).)
;) !!
On Mon, Aug 25, 2008 at 1:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 25, 2008, at 6:45 AM, Mark S. Miller wrote:
What is it you and Sam are agreeing about? I lost track.
That if we make cohort members == and ===, telling anyone who still wants to distinguish 1.0m from 1.00m to use compareTotal is "good enough".
I agree with this if-then, but why not recommend Object.identical instead?
In any case, is there any general agreement about whether 1.0m == 1.00m or 1.0m === 1.00m? This is where I lost track.
On Mon, Aug 25, 2008 at 9:45 AM, Mark S. Miller <erights at google.com> wrote:
On Sun, Aug 24, 2008 at 11:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
Yes, this is gross. I'm in favor of Object.identical and Object.hashcode,
I don't care if Object.eq is named Object.identical. Other than spelling, does your Object.identical differ from Object.eq? If not, then I think we're in agreement.
maybe even in ES3.1 (I should get my act together and help spec 'em). Just not particularly on account of Decimal, even with equated cohort members. I still agree with Sam. And as always,"hard cases make bad law".
What is it you and Sam are agreeing about? I lost track.
I like the point about bad law. Numerics are definitely a hard case.
Here's the current output from my current branch of SpiderMonkey:
js> /* Hopefully, we all agree on these */
js> 1.0m == 1.0m
true
js> 1.0m == new Decimal("1.0")
true
js> 1.0m == Decimal("1.0")
true
js> 1.0m == 1.0.toDecimal()
true
js>
js> /* And these too... */
js> 1.0m == 1.00m
true
js> 1.0m == 1.0
true
js> 1.0m == "1.0"
true
js>
js> /* Conversion is exact, up to the number of digits you specify */
js> 1.1.toDecimal()
1.100000000000000088817841970012523
js> 1.1.toDecimal(2)
1.10
js> 1.1m - 1.1
-8.8817841970012523E-17
js> 1.1m == 1.1
false
js>
js> /* Non-strict equals doesn't care about precision */
js> 1.0m == 1.0m
true
js> 1.0m == 1.00m
true
js> 1.0m == 1.0
true
js> 1.0m == "1.000"
true
js>
js> /* You can mix things up */
js> Decimal.add(1.1,1.1m)
2.200000000000000088817841970012523
js> 1.2m - 1.1m
0.1
js> 1.2m - 1.1
0.099999999999999911182158029987477
js> 1.2 - 1.1
0.09999999999999987
js> 1.1 > 1.1m
true
js>
js> /* Things we agree on for strictly equals */
js> 1.0m === 1.0m
true
js> 1.0m === 1.0
false
js>
js> /* Something that a case could be made for either way */
js> 1.00m === 1.0m
true
js>
js> /* In any case, there always is this to fall back on */
js> Decimal.compareTotal(1.0m, 1.0m) == 0
true
js> Decimal.compareTotal(1.00m, 1.0m) == 0
false
js>
js> /* Still open for discussion */
js> typeof 1m
object
- Sam Ruby
On Aug 25, 2008, at 1:59 PM, Mark S. Miller wrote:
On Mon, Aug 25, 2008 at 1:20 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Aug 25, 2008, at 6:45 AM, Mark S. Miller wrote:
What is it you and Sam are agreeing about? I lost track.
That if we make cohort members == and ===, telling anyone who still wants to distinguish 1.0m from 1.00m to use compareTotal is "good enough".
I agree with this if-then, but why not recommend Object.identical
instead?
Cuz it's more work for 3.1 than needed, and compareTotal is required
by P754 IIRC. Either's good, Object.identical is better, but when in
doubt... Yeah, I'm still thinking in Decimal for ES3.1 terms here.
In any case, is there any general agreement about whether 1.0m == 1.00m or 1.0m === 1.00m? This is where I lost track.
Yes, I believe Sam and I agree on those holding true. As you note,
typeof x == typeof y && x == y => x === y, so anything else will
break that relation, and it's not just an ideal form: users,
especially those taught to avoid == in favor of ===, will have our
heads if 1.0m != 1.00m.
Brendan Eich wrote:
In any case, is there any general agreement about whether 1.0m == 1.00m or 1.0m === 1.00m? This is where I lost track.
Yes, I believe Sam and I agree on those holding true. As you note,
typeof x == typeof y && x == y => x === y, so anything else will
break that relation, and it's not just an ideal form: users,
especially those taught to avoid == in favor of ===, will have our
heads if 1.0m != 1.00m.
/be
As I wrote earlier, having 1.0m === 1.00m is the only sensible choice. If you don't then you'll get total weirdness such as:
1e12m - 1e12m !== 0m
(the exact difference keeps the operands' exponent, so it comes back as the cohort member 0E+12 rather than 0E+0).
Waldemar
Brendan Eich wrote:
On Aug 24, 2008, at 10:02 PM, Mark S. Miller wrote:
Let's say you did that -- make a special case for NaN but not for -0. Let's say you use this Map to build memoize. Now let's say someone writes a purely functional function F such that F(0) is 3 and F(-0) is 7. Let's say G is memoize(F). Would you find it acceptable for G(0) to sometimes yield 3 and sometimes 7?
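For concreteness, that hazard is easy to exhibit as runnable code, using the later ES2015 Map (whose key lookup happens to be exactly this special case: NaN matches NaN, but -0 matches 0); memoize and F are illustrative names:

    function memoize(f) {
      var cache = new Map();            // Map keys: NaN === NaN, but -0 and 0 collide
      return function (x) {
        if (!cache.has(x)) cache.set(x, f(x));
        return cache.get(x);
      };
    }

    // A purely functional F that distinguishes the two zeros:
    function F(x) { return (x === 0 && 1 / x < 0) ? 7 : 3; }  // F(0) == 3, F(-0) == 7

    var G = memoize(F);
    G(0);    // 3
    G(-0);   // 3 -- or 7 for both, had the calls arrived in the other order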
I'd file a bug, or find a better memoizer :-). "Quality of
implementation" eager beavers can use Math.atan2 to tell -0 from 0,
just as they can use isNaN or x !== x.
Yes, this is gross. I'm in favor of Object.identical and
Object.hashcode, maybe even in ES3.1 (I should get my act together
and help spec 'em).
In the absence of decimal, Object.eq is trivial to spec:
Object.eq(NaN, NaN) = true
Object.eq(  0,  -0) = false
Object.eq( -0,   0) = false
Object.eq(  x,   y) = (x === y), otherwise.
or to implement:
Object.eq = function (x, y) {
  if (x === y) {
    return x !== 0 || 1/x === 1/y;
  } else {
    return x !== x && y !== y;
  }
};
Just not particularly on account of Decimal, even
with equated cohort members.
If there is a possibility that we are ever going to add decimal (or other types for which === might not be an identity test), then adding Object.eq now allows writing future-proof code for such things as memoizers -- whatever the semantics of === for decimals (or between decimals and other numbers) turns out to be.
On Sep 3, 2008, at 11:15 PM, David-Sarah Hopwood wrote:
In the absence of decimal, Object.eq is trivial to spec:
Object.eq(NaN, NaN) = true
Object.eq(  0,  -0) = false
Object.eq( -0,   0) = false
Object.eq(  x,   y) = (x === y), otherwise.
or to implement:
Object.eq = function (x, y) {
  if (x === y) {
    return x !== 0 || 1/x === 1/y;
  } else {
    return x !== x && y !== y;
  }
};
Even shorter, without else after return non-sequitur and with one
guaranteed same-type == instead of === to be perverse:
Object.eq = function (x, y) {
  return (x === y) ? x !== 0 || 1/x == 1/y
                   : x !== x && y !== y;
}
But what's the point?
My point was that Decimal doesn't make the case particularly strong,
since you just need to add Decimal.compareTotal:
Object.eq = function (x, y) {
  if (x instanceof Decimal && y instanceof Decimal)
    return Decimal.compareTotal(x, y) == 0;
  return (x === y) ? x !== 0 || 1/x == 1/y
                   : x !== x && y !== y;
}
How much less trivial is this? Two lines, fewer with more ?: chaining.
Just not particularly on account of Decimal, even with equated cohort members.
If there is a possibility that we are ever going to add decimal (or
other types for which === might not be an identity test), then adding
Object.eq now allows writing future-proof code for such things as memoizers -- whatever the semantics of === for decimals (or between decimals and other numbers) turns out to be.
JS developers have to cope with downrev browsers. There's no
imperative "now" (ES3.1) vs. "next time", since they'll need
something like the above as fallback for "last time" (ES3-based)
browsers.
Again my point is to avoid mission creep in 3.1. I have no problem
with identical and hashcode. If I'm wrong and they can be slipped in
without any delay, filling inevitable latency in the current
schedule, great.
I should get busy on the spec.
Ob. Bikeshed: You seem committed to eq as the name. I say it's undue
Lisp nostalgia, if not arrant cybercrud. But I concede that identical
is overlong. Alternative ideas?
Brendan Eich wrote:
On Sep 3, 2008, at 11:15 PM, David-Sarah Hopwood wrote:
In the absence of decimal, Object.eq is trivial to spec:
Object.eq(NaN, NaN) = true
Object.eq(  0,  -0) = false
Object.eq( -0,   0) = false
Object.eq(  x,   y) = (x === y), otherwise.
or to implement:
Object.eq = function (x, y) {
  if (x === y) {
    return x !== 0 || 1/x === 1/y;
  } else {
    return x !== x && y !== y;
  }
};
[shorter, but less clear code snipped]
My point was that Decimal doesn't make the case particularly strong, since you just need to add Decimal.compareTotal:
Object.eq = function (x, y) {
  if (x instanceof Decimal && y instanceof Decimal)
    return Decimal.compareTotal(x, y) == 0;
  return (x === y) ? x !== 0 || 1/x == 1/y
                   : x !== x && y !== y;
}
- You can't do that in ES3.1 if Decimal is not in ES3.1. You'd have to add "typeof Decimal !== 'undefined' &&" to the 'if' condition (a guarded sketch follows after this list). And then you'd be trying to anticipate a future spec, rather than relying on an existing one.
- This only accounts for Decimal, not for any other future types where === might not be an identity comparison.
- It's not clear to me that this code is correct; at least not without making assumptions about how === will work on mixed arguments that have not yet been agreed. Will 1 === 1m be true (when decimals are added)? If so, then Object.eq(1, 1m) as implemented above will be true when it should be false.
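Putting those caveats together, the object-detecting fallback would look something like this sketch, where Decimal and Decimal.compareTotal are still the API proposed in this thread, not a shipped one:

    Object.eq = function (x, y) {
      if (typeof Decimal !== "undefined" &&
          x instanceof Decimal && y instanceof Decimal)
        return Decimal.compareTotal(x, y) == 0;   // total order: 1.0m and 1.00m differ
      return (x === y) ? x !== 0 || 1/x == 1/y    // separate -0 from 0
                       : x !== x && y !== y;      // NaN equals NaN
    };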
How much less trivial is this? Two lines, fewer with more ?: chaining.
The size of the code is irrelevant. Object.eq doesn't add significantly to language complexity, and provides a useful basic operation, so why shouldn't it be in the spec?
If there is a possibility that we are ever going to add decimal (or other types for which === might not be an identity test), then adding Object.eq now allows writing future-proof code for such things as memoizers -- whatever the semantics of === for decimals (or between decimals and other numbers) turns out to be.
JS developers have to cope with downrev browsers. There's no imperative "now" (ES3.1) vs. "next time", since they'll need something like the above as fallback for "last time" (ES3-based) browsers.
Unless they are targeting only ES3.1-and-above. At some point (maybe several years) in the future, they'll be able to do that -- provided that Object.eq was added in ES3.1.
Again my point is to avoid mission creep in 3.1. I have no problem with identical and hashcode. If I'm wrong and they can be slipped in without any delay, filling inevitable latency in the current schedule, great.
I should get busy on the spec.
Ob. Bikeshed: You seem committed to eq as the name.
Not really, but eq has been used to refer to this operation for decades in both the Lisp and capability communities. I can live with Object.identical, but I'll always think of it as 'eq'.
On Sep 4, 2008, at 12:29 AM, David-Sarah Hopwood wrote:
- You can't do that in ES3.1 if Decimal is not in ES3.1. You'd have to add "typeof Decimal !== 'undefined' &&" to the 'if' condition. And then you'd be trying to anticipate a future spec, rather than relying on an existing one.
Straw man -- Ajax library authors have to object-detect, they do it
as needed. We can't predict the future, so why worry? The point is
that Object.eq could be done now or later. There's no imperative to
do it now, with or without Decimal. Whatever the mix of Decimal and
Object.eq, library authors can and must cope -- since they'll face
downrev browsers.
- This only accounts for Decimal, not for any other future types
where === might not be an identity comparison.
Accounting for the future is not a reason to add an API that can be
self-hosted (and will be, for older browsers).
- It's not clear to me that this code is correct; at least not
without making assumptions about how === will work on mixed arguments that have not yet been agreed. Will 1 === 1m be true (when decimals are added)? If so, then Object.eq(1, 1m) as implemented above will be true when it should be false.
We haven't agreed on X, Y depends on X, therefore Y is not ready to
spec -- yup, you're right. That doesn't mean we must add Object.eq
now just to future-proof (even within the "future" that's before 3.1
is done!). Cart Horse, ahead of.
The size of the code is irrelevant. Object.eq doesn't add
significantly to language complexity, and provides a useful basic operation, so why shouldn't it be in the spec?
Are you familiar with requirements management, scheduling? It was not
the last addition that made us late. It was not the last cookie I ate
that made me fat.
Unless they are targeting only ES3.1-and-above. At some point (maybe several years) in the future, they'll be able to do that --
provided that Object.eq was added in ES3.1.
Or we do it next time and use the self-hosted version in the meantime.
You are trying to construct an absolute argument on relative terms. I
could do that to add a large number of good APIs to 3.1 but then we'd
be done in 2011. But if we keep arguing, we'll miss the chance to fix
what's wrong in the current draft spec, so I'm going to stop here --
have the last word if you like.
Not really, but eq has been used to refer to this operation for
decades in both the Lisp and capability communities. I can live with Object.identical, but I'll always think of it as 'eq'.
Ok.
On 2008-09-04, at 03:37EDT, Brendan Eich wrote:
Not really, but eq has been used to refer to this operation for decades in both the Lisp and capability communities. I can live with Object.identical, but I'll always think of it as 'eq'.
Ok.
If 'identical' is too long, try 'id'? Or, since it is the equivalence predicate for Object/hash, maybe it should be an operator named '#='.
Some comments in reply to several of the comments and questions from overnight, from various writers...
(It's 5.4.2.) That section, indeed, requires correctly-rounded conversions be available.
There was much debate about that, because existing implementations of binary -> character sequences rarely produce correctly rounded
results -- one of the reasons why users are so often confused by binary floating point (converting 0.1 to a double and then printing it commonly shows "0.1" -- it might have been better if it produced (say) "0.10" to indicate the inexactness, much as "3.0" notation is used to indicate float rather than integer).
However, there's nothing in 754 that says all conversions must act that way, just that there be a conversion that is correctly rounded. That's a language designer's choice. (More on this below.)
[Aside: IEEE now says, for binary -> string conversion, that the
result should be exact unless rounding is necessary (due to specifying a smaller number of digits than would be needed for an exact conversion). So ES really should provide a binary -> string
conversion that does this (and preserves the sign!).]
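Today's ES can already hint at what such a conversion would reveal: asking toPrecision for more digits than the shortest round-trip form exposes the value the double actually holds. A runnable illustration (not itself the 754-style exact conversion, which would need all 55 digits):

    var d = 0.1;
    String(d);           // "0.1" -- shortest round-trip form, hides the inexactness
    d.toPrecision(21);   // "0.100000000000000005551" -- closer to what is stored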
That conversion would be inexact, so I'd hope you'd get 1.100000000000000m not 1.1m. This is a case where the preserved exponent tells you something about the past history (another is that the exponent on a zero after an underflow is the most negative possible exponent, not 0).
The binary64 value is not 1.1 -- and knowing that it is not is surely a good thing, rather than trying to 'cover it up' by rounding.
[snip]
Agreed, if Decimal.parse were described as being the 754-conformant conversion operation. However, in this case, ES could define Decimal.parse as not being correctly rounded and provide another method which gives you the correctly rounded conversion (when and if ESn.n claims to be conformant to 754). But it might be better to have Decimal.parse correctly round in the first place.
(The expert on binary FP <--> decimal FP conversions is Michel Hack,
btw -- this committee might want to ask his opinion; he wrote much of 5.12.2 in 754.)
I think this is looking at it backwards. Before the calculation, the different exponents might well be a valid indicator of quantum of a measurement (for example). After the calculation the result may continue to be a valid indication of quantum (e.g., when adding up currency values) and in certain special cases this is very useful. But the result most certainly does not attempt to maintain significance.
For example, 6.13 * 2.34 gives 14.3442, a six-digit exact result, even though each of the inputs had only three significant digits.
The arithmetic is not significance arithmetic, and nor is it claimed or intended to be. So there is no contradiction at all.
Consider, too, the slightly different calculation:
3.01 + 1.00001
The answer here is 4.01001. If this were some kind of 'significance' or 'decimal precision' arithmetic then one might argue that the last 3 digits should be removed -- but they are not. All numbers going into an operation are (and must be assumed to be) exact.
The calculation of the exponent is the same in the two cases. There is no normalization step (avoiding this step, among other things, helps performance -- when adding a column of currency numbers, for example, often all the exponents will be the same (even though some values will have trailing zeros), so there will be no need to align the numbers before each addition and no need to normalize after.)
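A toy model of that, with a value represented as coefficient x 10^exponent (decAdd is an illustrative helper; rounding and coefficient-length limits are ignored):

    // Unnormalized addition: align only if exponents differ; never normalize after.
    function decAdd(a, b) {
      var exp = Math.min(a.exp, b.exp);
      var ca = a.coeff * Math.pow(10, a.exp - exp);
      var cb = b.coeff * Math.pow(10, b.exp - exp);
      return { coeff: ca + cb, exp: exp };
    }

    // A currency column: every value has exponent -2, so no shifting is needed,
    // and the sum keeps the same quantum (cents):
    decAdd({ coeff: 120, exp: -2 }, { coeff: 345, exp: -2 });
    // => { coeff: 465, exp: -2 }, i.e. 4.65
    decAdd({ coeff: 100, exp: -2 }, { coeff: 200, exp: -2 });
    // => { coeff: 300, exp: -2 }, i.e. 3.00, not 3 -- trailing zeros survive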
The rationale for decimal is primarily the need to represent decimal fractions exactly. That's at:
speleotrove.com/decimal/decifaq1.html#inexact
The discussion here is whether unnormalized arithmetic is a good thing or not, which is a subsidiary point. Unnormalized decimal arithmetic has the huge advantage that it mirrors basic arithmetic (as taught in schools etc.), which preserves trailing zeros and treats all values as exact until forced otherwise (e.g., division by 3).
This is not 'significance arithmetic', and you really do not want that in a language anyway.
[Aside: the value of normalization in binary floating-point is that it permits one bit of the significand to be implied, so you get more precision in a given encoding. It's a compression technique (which has a negative impact in that it's harder and generally slower to implement than unnormalized binary FP).]
With respect, I think you are asking the wrong set of people. The ES committee is not (and nor should it be) a collection of ES users or a collection of arithmetic experts. It's a committee of experts in (ES) language design.
Decimal arithmetic is sufficiently important that it is already available in all the 'Really Important' languages except ES (including C, Java, Python, C#, COBOL, and many more). EcmaScript is the 'odd one out' here, and not having decimal support makes it terribly difficult to move commercial calculations to the browser for 'cloud computing' and the like.
[and finally...]
Yes, if you claim 754 compliance then you must have conversions that are correctly rounded (and this is true for binary Numbers, too). Correct rounding is good semantics, and also ensures that all implementations provide precisely the same results, which is also generally a good thing.
However, there is no 754 requirement whatsoever that implicit conversions be provided, and for several reasons it is probably safer if they are not. That is a language design choice.
So I think this comes down to 'what to do for Decimal.parse'? There are two obvious choices:
Sam's proposal for Decimal.parse is a perfectly good one: if the source is a binary floating point Number then use the existing binary64 -> string conversion in ES to get a (decimal) string, and then convert from that to decimal128 (which should always be exact).
This has the advantage that it is simple to define (given that binary64 -> string already has a definition) and to implement, and in the ES3.1 timeframe that may be important.
It has the disadvantage that different implementations can give different results for the conversion. This is 'ugly', but may not be important given that one is starting from a binary64.
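As a sketch, Sam's route could be written in one line, assuming the proposed Decimal constructor converts decimal strings exactly:

    Decimal.parse = function (n) {
      // Reuse the engine's existing Number -> string conversion (ES3 9.8.1),
      // then convert that decimal string to decimal128, which is exact.
      return new Decimal(String(n));
    };

Since ES3 leaves implementations some latitude in ToString on Numbers, two engines could disagree about Decimal.parse(n) for the same n -- which is exactly the 'ugly' point above.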
The alternative is to do a correctly-rounded conversion. This is equally simple to define but a little harder to implement (in that it does not already exist in ES implementations). However, Java already has this in the BigDecimal class, and there are also now hardware implementations, so it's not a showstopper. Also, this could provide or use a correctly-rounded (or exact) and sign-preserving binary64 -> string conversion.
The advantage of the second approach is that ES would have 754-compliant binary -> decimal conversions from day 1, and at the
same time could provide the enhanced conversion for binary -> string.
The first approach would need the addition of a different method later (Decimal.correctparse?) to comply. But the first approach may be more in the spirit of ES.
Mike
[And thanks to Ingvar -- I think I'm now no longer in 'digest' mode :-)]