use decimal
Brendan Eich wrote:
On Sep 17, 2008, at 6:54 PM, Douglas Crockford wrote:
Also, programs that do opt in should continue to work as they do now on the ES3 processors. They will produce the same slightly incorrect totals, but will otherwise behave as before.
An implementation of decimal that does not meet these goals must be rejected.
I think these goals require that typeof decimal_number produce 'number'.
Only under the "use decimal" pragma, right?
I don't care what happens when -m notation is used without "use decimal". I think the best thing would be to have a syntax error, but I don't feel strongly about that, except that in the future, we may want to evolve to a language in which decimal would be the default representation. We should be cautious in doing anything that would prevent that possibility.
As discussed previously in late August, typeof applied to any decimal should not be "object" or else
typeof x == "object" && !x => x === null
is not preserved. Agreed?
Agreed.
But double values leaking into a "use decimal" context, mixing with decimal values in expressions, could be worse than a rare exception. They could cause different control flow, depending on relational operators: the missile launches when it should not. They would tend to produce hard-to-find yet non-trivial bugs.
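Brendan's control-flow scenario can be reproduced with Python's decimal module standing in for the proposed ES decimal type (no shipping engine implements 1.1m literals); the function names are illustrative only:

```python
from decimal import Decimal

def binary_branch():
    total = 0.1 * 3             # binary64: 0.30000000000000004
    return "launch" if total > 0.3 else "hold"

def decimal_branch():
    total = Decimal("0.1") * 3  # decimal: exactly 0.3
    return "launch" if total > Decimal("0.3") else "hold"

print(binary_branch())   # -> launch
print(decimal_branch())  # -> hold
```

The same relational test takes different branches depending on the representation, which is the hard-to-find bug class Brendan is describing.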
It would be better to make it an error for a double to flow into a mixed-mode expression in a "use decimal" context.
We should not be making standards until we can determine the extent to which that is better. It may be best to defer decimal to Harmony so we can try some experiments and have more confidence.
On Sep 17, 2008, at 4:01 PM, Douglas Crockford wrote:
It may be best to defer decimal to Harmony so we can try some experiments and have more confidence.
Absolutely agree (someone added es-discuss to the cc: list already). See
which led to many followups and different discussions. In all of
those messages, I do not recall reading why decimal can't wait for
Harmony.
It's not that we wouldn't "just take decimal" if it dropped into 3.1
without slipping 3.1's schedule, but I don't see how that is possible
simply in terms of spec production. Getting feedback from significant
numbers of users based on Sam's open-source implementation also looks
like it will take more than a few months, and if this user feedback
must come before spec production, then we are doubly out of time for
decimal in 3.1.
This doesn't mean the pace of implementation should slow down. Sam,
I'm still keen on reviewing your patch for SpiderMonkey. It may not
make Firefox 3.1 (I think it could) but it should get into automated
proto-Firefox builds that people can test.
On Wed, Sep 17, 2008 at 4:55 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Sep 17, 2008, at 4:01 PM, Douglas Crockford wrote:
It may be best to defer decimal to Harmony so we can try some experiments and have more confidence.
Absolutely agree (someone added es-discuss to the cc: list already). See
which led to many followups and different discussions. In all of those messages, I do not recall reading why decimal can't wait for Harmony.
It's not that we wouldn't "just take decimal" if it dropped into 3.1 without slipping 3.1's schedule, but I don't see how that is possible simply in terms of spec production. Getting feedback from significant numbers of users based on Sam's open-source implementation also looks like it will take more than a few months, and if this user feedback must come before spec production, then we are doubly out of time for decimal in 3.1.
My only claim is that I have a head start on implementations and user feedback on such items as "use strict", and I intend to maintain that lead.
This doesn't mean the pace of implementation should slow down. Sam, I'm still keen on reviewing your patch for SpiderMonkey. It may not make Firefox 3.1 (I think it could) but it should get into automated proto-Firefox builds that people can test.
My patch does not match the direction that Doug now is advocating.
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
I also don't understand the argument that the direction I have outlined is not acceptable for users who care about quantities that represent monetary amounts. Can somebody spell that out for me please?
/be
- Sam Ruby
Sam Ruby wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
It would replace a feature that many consider to be broken with a feature that we hope will work better.
Most JavaScript programs would not notice if we changed the underlying type to integer. The most common operations on intentionally non-integer numbers involve money and percentages. Those programs would benefit from a non-disruptive decimal option.
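The "slightly incorrect totals" in question are the classic binary-float rounding artifacts; a sketch using Python's decimal module as an analogy for the proposed decimal type:

```python
from decimal import Decimal

# Binary64 money arithmetic produces the slightly incorrect totals
# referred to above.
assert 0.1 + 0.2 != 0.3                      # actually 0.30000000000000004
# Decimal arithmetic keeps cents exact.
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
```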
There are too many loose threads for the ES3.1 window. I hope we can get this stuff resolved in Harmony.
On Sep 17, 2008, at 2:34 PM, Douglas Crockford wrote:
Sam Ruby wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
It would replace a feature that many consider to be broken with a
feature that we hope will work better.
While decimal floating point may be better than binary floating point
for some applications, such as currency calculations, it would be
unsuitable for other applications such as graphics. Doing advanced
client-side graphics in JavaScript is an increasingly popular
application. For example, see Processing.JS, Algorithmic Ink,
Mozilla's recent image filtering demo, and any of a number of charting
and graphic applications.
Using decimal floating point is unacceptable for these kinds of
applications - the performance would not be acceptable. So we should
not go down a path based on the assumption that binary floating point
can be removed.
I agree with you and Brendan that these issues are better resolved in
Harmony.
- Maciej
Maciej Stachowiak wrote:
On Sep 17, 2008, at 2:34 PM, Douglas Crockford wrote:
Sam Ruby wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
It would replace a feature that many consider to be broken with a feature that we hope will work better.
While decimal floating point may be better than binary floating point for some applications, such as currency calculations, it would be unsuitable for other applications such as graphics. Doing advanced client-side graphics in JavaScript is an increasingly popular application. For example, see Processing.JS, Algorithmic Ink, Mozilla's recent image filtering demo, and any of a number of charting and graphic applications.
Using decimal floating point is unacceptable for these kinds of applications - the performance would not be acceptable. So we should not go down a path based on the assumption that binary floating point can be removed.
If we agree that binary64 isn't a "bug" that needs to be removed, wouldn't the simplest solution that could possibly work be separate number and decimal data types that can be freely convertible between each other, and the absolutely most a "use decimal" could possibly mean is that the interpretation of numeric literals is as decimal floating point as opposed to binary floating point?
I agree with you and Brendan that these issues are better resolved in Harmony.
At the minimum, I would hope that we should be able to agree on what the issues are. If they can't be worked in time, so be it.
Doug has put forward the proposal that Harmony seek to "wholly replace the binary representation with the decimal representation". Is that the consensus of the group?
If not, the data types need to co-exist. I've made a specific proposal on how that could work, and made repeated requests that people provide specific feedback, and I've provided executable source, spec text, test results, and an online web service.
Anybody care to mark up what they would like to see the following look like?
intertwingly.net/stories/2008/09/12/estest.html
- Maciej
- Sam Ruby
On Sep 17, 2008, at 4:48 PM, Sam Ruby wrote:
Maciej Stachowiak wrote:
On Sep 17, 2008, at 2:34 PM, Douglas Crockford wrote:
Sam Ruby wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
It would replace a feature that many consider to be broken with a feature that we hope will work better.
While decimal floating point may be better than binary floating point for some applications, such as currency calculations, it would be unsuitable for other applications such as graphics. Doing advanced client-side graphics in JavaScript is an increasingly popular application. For example, see Processing.JS, Algorithmic Ink, Mozilla's recent image filtering demo, and any of a number of charting and graphic applications.
Using decimal floating point is unacceptable for these kinds of applications - the performance would not be acceptable. So we should not go down a path based on the assumption that binary floating point can be removed.
If we agree that binary64 isn't a "bug" that needs to be removed, wouldn't the simplest solution that could possibly work be separate number and decimal data types that can be freely convertible between each other, and the absolutely most a "use decimal" could possibly mean is that the interpretation of numeric literals is as decimal floating point as opposed to binary floating point?
That approach sounds much better to me than Doug's proposal.
I agree with you and Brendan that these issues are better resolved in Harmony.
At the minimum, I would hope that we should be able to agree on what the issues are. If they can't be worked in time, so be it.
Doug has put forward the proposal that Harmony seek to "wholly replace the binary representation with the decimal representation". Is that the consensus of the group?
I certainly do not agree with his proposal. Other members of the group may speak for themselves, but it was not my understanding previously that anyone wanted to go down such a road.
On Wed, Sep 17, 2008 at 2:18 PM, Sam Ruby <rubys at intertwingly.net> wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
Without advocating either Crock's or Sam's positions on "use decimal", I will take issue with the above. Proposed ES3.1 "use strict" is an opt-in that will remove "with", "delete <var>", eval-operator exporting bindings to
its calling context, "arguments.callee", "arguments.caller", joining of arguments with parameter variables, "Function.caller", and "Function.arguments". That's why it's opt-in. Genuine improvements which are purely additive (rare though they may be) are upwards compatible, and so need not be contingent on opt-in.
My own position on decimal:
- I agree with Brendan and Crock that Decimal should wait until after 3.1. That said, I think Sam is doing a great job refining the proposal, taking into account all objections, and producing code people can test and incorporate. I agree that the status of engineering quality here is well ahead of most of what this committee has ever produced. We would all do well to emulate Sam's example.
- Between Crock's and Sam's present proposals, I prefer Sam's.
- Given that we leave compareTotal out of EcmaScript, IIUC, the operational semantics of Sam's proposed decimal in EcmaScript would have only one NaNm, Infinitym, and -Infinitym, just as EcmaScript binary floating point only has one NaN, Infinity, and -Infinity. Given that, I have no further specific objections to Sam's proposal. I'd just like to see it wait till after 3.1.
- Please let us never refer to the result of decimal as "correct". If it were possible for computers to implement arithmetic correctly in finite time and space, we wouldn't be in this mess. The best that can be said of decimal is that it is incorrect in ways that people find more intuitive.
- Like Crock, I prefer that typeof 1m === 'number'. However, unlike Crock's and Brendan's agreement, I think this should be unconditional. I think it would be horrible for 'typeof X' to depend not only on the value of X but also on the mode of the program unit in which the typeof appears. Please don't do that. Let's examine the case against:
On Wed, Sep 17, 2008 at 12:49 PM, Brendan Eich wrote:
On Sep 17, 2008, at 6:54 PM, Douglas Crockford wrote:
It requires that a === b produce the same result as typeof a == typeof b && a == b.
Agreed.
Without "use decimal", typeof 1.1m must not be "number" to preserve this same invariant. Otherwise (without "use decimal") 1.5m == 1.5 but 1.1m != 1.1, so without making typeof 1.5m != typeof 1.1m, we cannot have typeof 1.5m == "number".
I do not believe anyone wants typeof 1.5m != typeof 1.1m.
I agree that typeof 1.5m == typeof 1.1m. However, I do not agree that 1.1m == 1.1. These should be !=, since they represent distinct real numbers. I do agree that 1.5m == 1.5, but only because they represent the same real number, not because they look the same. This extends the notion of cohort across the decimal/binary divide.
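Mark's cohort-across-the-divide rule is observable in Python, whose Decimal/float comparisons (since 3.2) compare exact mathematical values, making it a reasonable stand-in for the hypothetical ES decimal:

```python
from decimal import Decimal

# 1.5 is exactly representable in binary64, so both spellings denote
# the same real number and compare equal across the divide.
assert Decimal("1.5") == 1.5
# 1.1 is not: the binary64 literal denotes 1.100000000000000088817...,
# a different real number from decimal 1.1.
assert Decimal("1.1") != 1.1
```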
-0, 0, -0m, 0m, 0.000m
may all be computationally distinct values. (Is 0.00m distinct from 0m?) If they are, then they should therefore be distinguished by any Object.identical() and Object.identityHash() operations. But they all represent the same real number, so they should be ==. If we agree that they should also all be typeof 'number', then they should also all be === and be considered generalized cohorts.
If typeof all these were 'number', then binary and decimal could share a single NaN, Infinity, and -Infinity. Only the remaining finite numeric values would be distinctly either binary or decimal.
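The zero cohort behaves this way today in Python, whose decimal module serves here as an analogy for the -0m/0m/0.000m spellings above:

```python
from decimal import Decimal
import math

# All of these denote the real number zero, so they all compare equal...
zeros = [-0.0, 0.0, Decimal("-0"), Decimal("0"), Decimal("0.000")]
assert all(z == 0 for z in zeros)
# ...yet they remain computationally distinct values, as an
# identity-style test would reveal:
assert math.copysign(1.0, -0.0) == -1.0            # -0.0 carries its sign
assert str(Decimal("0.000")) != str(Decimal("0"))  # exponents differ
```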
I agree with Sam and apparently Crock and everyone else that mixed mode should coerce to decimal. Or, at least, I have no objection to that position.
Were we to adopt this, then I think "use decimal" should condition only whether an unqualified numeric literal be interpreted as binary or decimal floating point. We should then have a suffix which means binary floating point, so you can say it explicitly. Nothing else about the numerics should be conditioned by the pragma.
On Wed, Sep 17, 2008 at 8:05 PM, Maciej Stachowiak <mjs at apple.com> wrote:
On Sep 17, 2008, at 4:48 PM, Sam Ruby wrote:
Maciej Stachowiak wrote:
On Sep 17, 2008, at 2:34 PM, Douglas Crockford wrote:
Sam Ruby wrote:
To the extent that I understand Doug's proposal, it essentially is an opt-in that would remove a feature (binary64 floating points); and therefore would likely be about as successful as an opt-in that removed another feature that Doug dislikes, such as "with". Am I missing something?
It would replace a feature that many consider to be broken with a feature that we hope will work better. While decimal floating point may be better than binary floating point for some applications, such as currency calculations, it would be unsuitable for other applications such as graphics. Doing advanced client-side graphics in JavaScript is an increasingly popular application. For example, see Processing.JS, Algorithmic Ink, Mozilla's recent image filtering demo, and any of a number of charting and graphic applications. Using decimal floating point is unacceptable for these kinds of applications - the performance would not be acceptable. So we should not go down a path based on the assumption that binary floating point can be removed.
If we agree that binary64 isn't a "bug" that needs to be removed, wouldn't the simplest solution that could possibly work be separate number and decimal data types that can be freely convertible between each other, and the absolutely most a "use decimal" could possibly mean is that the interpretation of numeric literals is as decimal floating point as opposed to binary floating point?
That approach sounds much better to me than Doug's proposal.
Let's take that a step further. If other languages are any guide, then complex, rational, and big integer are common requirements. For the moment, let's assume that unlimited precision integer arithmetic is the most likely next "number-like" requirement for ES. In Python such constants end with an "L", so I will use that to illustrate the next question. But first some background.
9007199254740993 is the smallest positive integer that can't be exactly represented as an ES number. 9007199254740993L and 9007199254740993m, however, can be represented exactly as hypothetical big integer and 128-bit decimal values.
Given that as background, what should typeof(9007199254740993L) return?
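Sam's background claim checks out; a sketch in Python, whose unlimited-precision int plays the role of the hypothetical L type and whose decimal module plays the role of decimal128:

```python
from decimal import Decimal

n = 9007199254740993           # 2**53 + 1
# As a binary64 it rounds down to 2**53...
assert float(n) == 9007199254740992.0
assert n != float(n)           # Python compares int/float by exact value
# ...but an unlimited-precision int and a decimal with enough digits
# both hold all 16 digits exactly.
assert int(Decimal(n)) == n
```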
I agree with you and Brendan that these issues are better resolved in Harmony.
At the minimum, I would hope that we should be able to agree on what the issues are. If they can't be worked in time, so be it.
Doug has put forward the proposal that Harmony seek to "wholly replace the binary representation with the decimal representation". Is that the consensus of the group?
I certainly do not agree with his proposal. Other members of the group may speak for themselves, but it was not my understanding previously that anyone wanted to go down such a road.
Doug's previous proposal where "use decimal" merely determined whether a constant like "1.1" would be interpreted as binary64 or decimal128 still may be worth exploring. Presumably with the ability to have a suffix (like d) to explicitly indicate that a given quantity is double precision.
But given the past history of this language, and the wide deployment it has enjoyed, I don't think that "use decimal" should ever mean much more than that.
- Maciej
- Sam Ruby
On Sep 17, 2008, at 6:22 PM, Mark S. Miller wrote:
On Wed, Sep 17, 2008 at 2:18 PM, Sam Ruby <rubys at intertwingly.net> wrote:
On Wed, Sep 17, 2008 at 12:49 PM, Brendan Eich wrote:
On Sep 17, 2008, at 6:54 PM, Douglas Crockford wrote:
It requires that a === b produce the same result as typeof a == typeof b && a == b.
Agreed.
Without "use decimal", typeof 1.1m must not be "number" to preserve this same invariant. Otherwise (without "use decimal") 1.5m == 1.5 but 1.1m != 1.1, so without making typeof 1.5m != typeof 1.1m, we cannot have typeof 1.5m == "number".
I do not believe anyone wants typeof 1.5m != typeof 1.1m.
I agree that typeof 1.5m == typeof 1.1m. However, I do not agree
that 1.1m == 1.1. These should be !=, since they represent distinct
real numbers.
It seems to me that Brendan said 1.1m != 1.1 and did not question this.
I do agree that 1.5m == 1.5, but only because they represent the same real number, not because they look the same. This extends the notion of cohort across the decimal/binary divide.
-0, 0, -0m, 0m, 0.000m
may all be computationally distinct values. (Is 0.00m distinct from 0m?) If they are, then they should therefore be distinguished by any Object.identical() and Object.identityHash() operations. But they all represent the same real number, so they should be ==. If we agree that they should also all be typeof 'number', then they should also all be === and be considered generalized cohorts.
Brendan's premise is that 1.5m has different operational behavior from
1.5 and therefore it would be wrong to make them '==='. I tend to
agree. Making '===' treat distinct values the same and then
introducing a new Object.identical to tell the difference seems
inelegant.
If typeof all these were 'number', then binary and decimal could share a single NaN, Infinity, and -Infinity. Only the remaining finite numeric values would be distinctly either binary or decimal.
I agree with Sam and apparently Crock and everyone else that mixed mode should coerce to decimal. Or, at least, I have no objection to that position.
If that is the case then 1.5m / 10.0 != 1.5 / 10.0, and thus it seems
wrong for 1.5m and 1.5 to be '==='.
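Maciej's counterexample is easy to confirm with Python's decimal module as a stand-in: equal operands, unequal quotients, because division by ten is exact in decimal but inexact in binary64.

```python
from decimal import Decimal

# The operands compare equal across the divide...
assert Decimal("1.5") == 1.5
# ...but dividing each by ten gives unequal results: decimal yields
# exactly 0.15, while binary64 yields 0.1499999999999999944...
assert Decimal("1.5") / Decimal("10") != 1.5 / 10.0
```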
Were we to adopt this, then I think "use decimal" should condition
only whether an unqualified numeric literal be interpreted as binary
or decimal floating point. We should then have a suffix which means
binary floating point, so you can say it explicitly. Nothing else
about the numerics should be conditioned by the pragma.
Perhaps that suffix could be 'f' as in C/C++.
- Maciej
On Sep 17, 2008, at 6:24 PM, Sam Ruby wrote:
On Wed, Sep 17, 2008 at 8:05 PM, Maciej Stachowiak <mjs at apple.com> wrote:
On Sep 17, 2008, at 4:48 PM, Sam Ruby wrote:
That approach sounds much better to me than Doug's proposal.
Let's take that a step further. If other languages are any guide, then complex, rational, and big integer are common requirements. For the moment, let's assume that unlimited precision integer arithmetic is the most likely next "number-like" requirement for ES. In Python such constants end with an "L", so I will use that to illustrate the next question. But first some background.
9007199254740993 is the smallest positive integer that can't be exactly represented as an ES number. 9007199254740993L and 9007199254740993m, however, can be represented exactly as hypothetical big integer and 128-bit decimal values.
Given that as background, what should typeof(9007199254740993L)
return?
All of this depends on what identities we wish to preserve regarding
typeof and various operators.
Other languages with a rich numeric tower have predicates for
detecting any number in general, as well as specific types of numbers.
I do not think this richness can all be expressed adequately via
typeof's single string return. So I think such extensions to the
numeric tower would call for proper type predicates and then typeof
adjusted to do whatever is most sane in light of legacy code.
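Mark's "proper type predicates" exist in Python's numeric tower (the numbers module), which illustrates the richness a single typeof string can't express; note that Python deliberately registers Decimal only at the top of the tower:

```python
import numbers
from decimal import Decimal
from fractions import Fraction

# One predicate per level of the tower, rather than one string from typeof.
assert isinstance(Fraction(1, 3), numbers.Rational)
assert isinstance(1.5, numbers.Real)
assert isinstance(1 + 2j, numbers.Complex)
# Decimal is registered only as Number: it interoperates awkwardly with
# the binary Real types, so the tower keeps it at arm's length.
assert isinstance(Decimal("1.1"), numbers.Number)
assert not isinstance(Decimal("1.1"), numbers.Real)
```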
I agree with you and Brendan that these issues are better resolved in Harmony.
At the minimum, I would hope that we should be able to agree on what the issues are. If they can't be worked in time, so be it.
Doug has put forward the proposal that Harmony seek to "wholly replace the binary representation with the decimal representation". Is that the consensus of the group?
I certainly do not agree with his proposal. Other members of the group may speak for themselves, but it was not my understanding previously that anyone wanted to go down such a road.
Doug's previous proposal where "use decimal" merely determined whether a constant like "1.1" would be interpreted as binary64 or decimal128 still may be worth exploring. Presumably with the ability to have a suffix (like d) to explicitly indicate that a given quantity is double precision.
But given the past history of this language, and the wide deployment it has enjoyed, I don't think that "use decimal" should ever mean much more than that.
I particularly would prefer that any dialect pragmas ("use strict" as
much as "use decimal") should have effects that can all be explained
as occurring at parse/compile time, rather than persistent runtime
behavior effects. I have not closely reviewed strict mode for whether
it satisfies this criterion.
- Maciej
On Wed, Sep 17, 2008 at 6:24 PM, Sam Ruby <rubys at intertwingly.net> wrote:
Let's take that a step further. If other languages are any guide, then complex, rational, and big integer are common requirements. For the moment, let's assume that unlimited precision integer arithmetic is the most likely next "number-like" requirement for ES. In Python such constants end with an "L", so I will use that to illustrate the next question. But first some background.
9007199254740993 is the smallest positive integer that can't be exactly represented as an ES number. 9007199254740993L and 9007199254740993m, however, can be represented exactly as hypothetical big integer and 128-bit decimal values.
Given that as background, what should typeof(9007199254740993L) return?
If we adopt Sam's current position, where typeof 1m === 'decimal', then I think we're stuck with typeof 1L === 'integer' or something. Likewise 'complex', 'rational', 'matrix', 'unreal', 'quaternion', etc. If we adopt my preferred position that typeof 1m === 'number', then typeof 1L and these others should be 'number' too.
In both cases, it would seem that new numeric types must still be added by the language's providers rather than the language's users. This is the tragic constraint that none of the present proposals have been able to escape. I would much rather see us work on that problem, rather than trying to decide what is the next numeric type we language designers need to cook in.
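For contrast with the "tragic constraint," Python partially escapes it: a library-level numeric type, added outside the core grammar, mixes freely with built-in numbers via operator-overloading hooks. fractions.Fraction is the standard-library example:

```python
from fractions import Fraction

# Fraction is a library type, not a core literal, yet it interoperates
# with ints through the language's operator-overloading protocol.
third = Fraction(1, 3)
assert third + third + third == 1
assert 1 - third == Fraction(2, 3)
```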
Doug's previous proposal where "use decimal" merely determined whether a constant like "1.1" would be interpreted as binary64 or decimal128 still may be worth exploring. Presumably with the ability to have a suffix (like d) to explicitly indicate that a given quantity is double precision.
But given the past history of this language, and the wide deployment it has enjoyed, I don't think that "use decimal" should ever mean much more than that.
I agree that the "use decimal" pragma should determine nothing other than the interpretation of unqualified numeric constants. However, "d" is a terrible choice for a suffix meaning "binary floating point", because in effect it means "not decimal". It is anti-mnemonic.
On Wed, Sep 17, 2008 at 6:54 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I particularly would prefer that any dialect pragmas ("use strict" as much as "use decimal") should have effects that can all be explained as occurring at parse/compile time, rather than persistent runtime behavior effects. I have not closely reviewed strict mode for whether it satisfies this criterion.
Please do review strict mode and raise any issues that need discussion. This is rather urgent. Thanks.
2008/9/17 Maciej Stachowiak <mjs at apple.com>:
Were we to adopt this, then I think "use decimal" should condition only whether an unqualified numeric literal be interpreted as binary or decimal floating point. We should then have a suffix which means binary floating point, so you can say it explicitly. Nothing else about the numerics should be conditioned by the pragma.
Perhaps that suffix could be 'f' as in C/C++.
'f' in languages like C/C++ and ECMA 334 means single precision floating point.
'd' in such languages means double precision.
And, yes, ECMA 334 has 'd' for binary floating point and 'm' for decimal floating point.
- Maciej
- Sam Ruby
On Wed, Sep 17, 2008 at 6:50 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I agree that typeof 1.5m == typeof 1.1m. However, I do not agree that 1.1m == 1.1. These should be !=, since they represent distinct real numbers.
It seems to me that Brendan said 1.1m != 1.1 and did not question this.
Brendan, please correct me if I misrepresent your argument.
I took Brendan to be saying that:
1. If typeof 1.1m were 'number' then we'd be obligated to have 1.1m == 1.1.
2. We want 1.1m != 1.1.
3. Therefore, typeof 1.1m must not be 'number'.
I disagree with #1 and thus #3.
I do agree that 1.5m == 1.5, but only because they represent the same real number, not because they look the same. This extends the notion of cohort across the decimal/binary divide.
-0, 0, -0m, 0m, 0.000m
may all be computationally distinct values. (Is 0.00m distinct from 0m?) If they are, then they should therefore be distinguished by any Object.identical() and Object.identityHash() operations. But they all represent the same real number, so they should be ==. If we agree that they should also all be typeof 'number', then they should also all be === and be considered generalized cohorts.
Brendan's premise is that 1.5m has different operational behavior from 1.5 and therefore it would be wrong to make them '==='. I tend to agree. Making '===' treat distinct values the same and then introducing a new Object.identical to tell the difference seems inelegant.
-0 has a different operational behavior than 0. Ok, we've got one weird one. I could live with that as an exception to a general rule.
However, with the introduction of decimal, we've got cohorts galore. 1.1m has a different operational behavior than 1.1000m, but we've agreed they should be ===. With the introduction of decimal, === no longer approximates a test of operational equivalence, and we still need such a test.
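Mark's cohort observation matches how Python's decimal module behaves, with Decimal values standing in for 1.1m and 1.1000m:

```python
from decimal import Decimal

a, b = Decimal("1.1"), Decimal("1.1000")
# Members of a cohort compare equal...
assert a == b
# ...but are operationally distinguishable: trailing zeros survive
# arithmetic and printing, so a test stronger than == is still needed.
assert str(a * 2) == "2.2"
assert str(b * 2) == "2.2000"
```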
If typeof all these were 'number', then binary and decimal could share a single NaN, Infinity, and -Infinity. Only the remaining finite numeric values would be distinctly either binary or decimal.
I agree with Sam and apparently Crock and everyone else that mixed mode should coerce to decimal. Or, at least, I have no objection to that position.
If that is the case then 1.5m / 10.0 != 1.5 / 10.0, and thus it seems wrong for 1.5m and 1.5 to be '==='.
1/-0 != 1/0. Does it thus seem wrong that -0 === 0?
Well, yes, actually it does seem wrong to me, but we all accept that particular wrongness. This is just more of the same.
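The pre-existing "wrong" cohort is observable in Python too; float division by zero raises there, so copysign stands in for the 1/-0 vs. 1/0 probe:

```python
import math

# -0.0 and 0.0 compare equal, the one accepted exception...
assert -0.0 == 0.0
# ...yet they are operationally distinct values: the sign survives
# and would flip the result of a division like 1/-0 vs. 1/0.
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, 0.0) == 1.0
```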
Were we to adopt this, then I think "use decimal" should condition only whether an unqualified numeric literal be interpreted as binary or decimal floating point. We should then have a suffix which means binary floating point, so you can say it explicitly. Nothing else about the numerics should be conditioned by the pragma.
Perhaps that suffix could be 'f' as in C/C++.
I think that's better than 'd'. But it's confusing in a different way: decimal floating point is still a floating point just as much as binary floating point is. I have no letter to suggest.
On Sep 17, 2008, at 10:13 PM, Mark S. Miller wrote:
On Wed, Sep 17, 2008 at 6:50 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I agree that typeof 1.5m == typeof 1.1m. However, I do not agree that 1.1m == 1.1. These should be !=, since they represent distinct real numbers.
It seems to me that Brendan said 1.1m != 1.1 and did not question this.
Brendan, please correct me if I misrepresent your argument.
I took Brendan to be saying that:
1. If typeof 1.1m were 'number' then we'd be obligated to have 1.1m == 1.1.
2. We want 1.1m != 1.1.
3. Therefore, typeof 1.1m must not be 'number'.
I disagree with #1 and thus #3.
Here is what I wrote:
Without "use decimal", typeof 1.1m must not be "number" to preserve this same invariant [that a === b <=> typeof a == typeof b && a == b]. Otherwise (without "use decimal") 1.5m == 1.5 but 1.1m != 1.1, so without making typeof 1.5m != typeof 1.1m, we cannot have typeof 1.5m == "number".
Please expand your syllogistic reasoning:
Major premise #1: a == b
Minor premise #1: typeof a == typeof b
Conclusion #1: a === b
(you can switch == and === above -- the implication goes both ways
and choice of == or === for typeof a OP typeof b does not matter
since typeof's result is of string type.)
Major premise #2: 1.5m == 1.5
Minor premise #2: typeof 1.5m == "number" (your position, IIUC)
Conclusion #2: 1.5m === 1.5 (by Conclusion #1 and ES1-3, which define typeof 1.5 == "number")
Counter-example: 1.1m != 1.1 but typeof 1.1m == typeof 1.1 in your proposal (again IIUC)
This is a for-all problem. If there exists some double value x such
that, for the decimal form y spelled literally using the same digits
but with an 'm' suffix, x != y, yet typeof x == typeof y, then typeof
must depend on whether its operand value converts losslessly to
decimal from double and double from decimal.
This is broken -- typeof should depend on type, not value.
Hence my position (Waldemar's too) that typeof 1.1m == typeof 1.5m &&
typeof 1.1m == "decimal".
However, with the introduction of decimal, we've got cohorts
galore. 1.1m has a different operational behavior than 1.1000m, but
we've agreed they should be ===. With the introduction of decimal,
=== no longer approximates a test of operational equivalence, and
we still need such a test.
=== does approximate operational equivalence apart from significance.
Right? (I'm asking because I could be wrong, not rhetorically!)
0/-0 != 0/0. Does it thus seem wrong that -0 === 0?
Well, yes, actually it does seem wrong to me, but we all accept
that particular wrongness. This is just more of the same.
A lot more.
Two wrongs don't make a right.
One exception to the rule is better than two, or 2^53 or larger.
On Sep 17, 2008, at 7:48 PM, Sam Ruby wrote:
Anybody care to mark up what they would like to see the following
look like?
Ship it!
(Not in ES3.1, certainly in Firefox 3.1 if we can... :-)
2008/9/17 Mark S. Miller <erights at google.com>:
On Wed, Sep 17, 2008 at 6:54 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I particularly would prefer that any dialect pragmas ("use strict" as much as "use decimal") should have effects that can all be explained as occurring at parse/compile time, rather than persistent runtime behavior effects. I have not closely reviewed strict mode for whether it satisfies this criterion.
Please do review strict mode and raise any issues that need discussion. This is rather urgent. Thanks.
What should a reviewer look for? What is strict mode supposed to accomplish?
On Wed, Sep 17, 2008 at 7:35 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Sep 17, 2008, at 10:13 PM, Mark S. Miller wrote:
On Wed, Sep 17, 2008 at 6:50 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I agree that typeof 1.5m == typeof 1.1m. However, I do not agree that 1.1m == 1.1. These should be !=, since they represent distinct real numbers.
It seems to me that Brendan said 1.1m != 1.1 and did not question this.
Brendan, please correct me if I misrepresent your argument.
I took Brendan to be saying that
If typeof 1.1m were 'number' then we'd be obligated to have 1.1m == 1.1.
We want 1.1m != 1.1.
Therefore, typeof 1.1m must not be 'number'.
I disagree with #1 and thus #3.
Here is what I wrote:
Without "use decimal", typeof 1.1m must not be "number" to preserve this same invariant [that a === b <=> typeof a == typeof b && a == b]. Otherwise (without "use decimal") 1.5m == 1.5 but 1.1m != 1.1, so without making typeof 1.5m != typeof 1.1m, we cannot have typeof 1.5m == "number".
Please expand your syllogistic reasoning:
Major premise #1: a == b
Minor premise #1: typeof a == typeof b
Conclusion #1: a === b
(you can switch == and === above -- the implication goes both ways and choice of == or === for typeof a OP typeof b does not matter since typeof's result is of string type.)
Agreed.
Major premise #2: 1.5m == 1.5
Minor premise #2: typeof 1.5m == "number" (your position, IIUC)
Conclusion #2: 1.5m === 1.5 (by Conclusion #1 and ES1-3 (which define typeof 1.5 == "number"))
Yes.
Counter-example: 1.1m != 1.1 but typeof 1.1m == typeof 1.1 in your proposal (again IIUC)
Yes. That's because the binary floating point literal written as "1.1" does not represent the real number "1.1".
This is a for-all problem. If there exists some double value x such that, for the decimal form y spelled literally using the same digits but with an 'm' suffix, x != y, yet typeof x == typeof y, then typeof must depend on whether its operand value converts losslessly to decimal from double and double from decimal.
This is broken -- typeof should depend on type, not value.
No, I agree that typeof must depend on type, not value. I advocate the following results:
typeof 1.1 === typeof 1.1m === typeof 1.5 === typeof 1.5m === 'number'
1.1 !== 1.1m
1.1 === 1.1000
1.1m === 1.1000m
1.5 === 1.5m === 1.5000m
The reason? The binary floating point literal "1.1" evaluates to a binary floating point numeric value that denotes a real number other than the real number 1.1.
For all finite numeric values X and Y, if the real number exactly represented by X and Y is the same, then X === Y and X and Y are in the same generalized cohort. Otherwise X !== Y.
I am not defining my position based on conversion between binary and decimal numeric values, but rather by correspondence with the reals. Thus I also advocate that 1.000000000000000000000000000000000000000000000000001m === 1m === 1.00000000000000000000000000000000000000000000000002 === 1, since these are all evaluated to numeric values that all exactly represent the real number 1.
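(Mark's distinction between a literal and the real number it denotes is easy to see in any current engine, where toPrecision exposes what a binary double literal actually evaluates to -- a sketch, binary only, since decimal literals don't parse today:)

```javascript
// "1.5" denotes exactly the real 1.5 (3/2 has a finite binary expansion),
// but "1.1" denotes a nearby real number slightly above 1.1.
console.log((1.5).toPrecision(20)); // 1.5000000000000000000
console.log((1.1).toPrecision(20)); // 1.1000000000000000888
```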
Hence my position (Waldemar's too) that typeof 1.1m == typeof 1.5m && typeof 1.1m == "decimal".
However, with the introduction of decimal, we've got cohorts galore. 1.1m has a different operational behavior than 1.1000m, but we've agreed they should be ===. With the introduction of decimal, === no longer approximates a test of operational equivalence, and we still need such a test.
=== does approximate operational equivalence apart from significance. Right? (I'm asking because I could be wrong, not rhetorically!)
AFAIK, yes, and apart from minus-zero-ness. And ignoring the fact that the extra digits in "1.000m", whatever they might mean, apparently mean something other than "significance". Recall all the clarifications, especially by Mike Cowlishaw, that these extra zeros aren't "significance". And I agree that they shouldn't be significance, but I'm still unclear on what they are instead.
0/-0 != 0/0. Does it thus seem wrong that -0 === 0?
Well, yes, actually it does seem wrong to me, but we all accept that particular wrongness. This is just more of the same.
A lot more.
Two wrongs don't make a right.
One exception to the rule is better than two, or 2^53 or larger.
"apart from something like significance" is a huge new set of exceptions. A large enough set that we can no longer understand === as approximating operational equivalence. If instead we understand === to be asking whether two operands are cohorts, and we define cohorts to be those numeric values that exactly represent a given real number, then
- -0 vs 0 is no longer irregular
- We can explain why 1.5 === 1.5m === 1.5000m
- We can explain why 1.1 !== 1.1m.
On Sep 17, 2008, at 10:06 PM, Mark S. Miller wrote:
0/-0 != 0/0. Does it thus seem wrong that -0 === 0?
Well, yes, actually it does seem wrong to me, but we all accept that particular wrongness. This is just more of the same.
A lot more.
Two wrongs don't make a right.
One exception to the rule is better than two, or 2^53 or larger.
"apart from something like significance" is a huge new set of
exceptions. A large enough set that we can no longer understand ===
as approximating operational equivalence. If instead we understand
=== to be asking whether two operands are cohorts, and we define
cohorts to be those numeric values that exactly represent a given
real number, then
- -0 vs 0 is no longer irregular
- We can explain why 1.5 === 1.5m === 1.5000m
- We can explain why 1.1 !== 1.1m.
I think it is a tenable position that 1.5m === 1.5000m based on the
"cohort" concept, since performing the same operation on both will
give answers that are in the same "cohort" equivalence class. But
1.5 / 10.0 != 1.5m / 10.0, and indeed, the answers would not even be
in the same cohort. A notion of 'cohort' equivalence class based on
correspondence in the abstract to the same real number but divorced
from the actual semantics of the programming language strikes me as
incoherent. I think such a notion of equivalence class only makes
sense if performing identical operations on members of the same cohort
gives answers which are in the same cohort.
As a reductio ad absurdum, consider that we can place ECMAScript
strings in one-to-one correspondence with arbitrary precision integers
by considering them as base 2^16 numbers, but surely no one would
argue that this implies "00" === 3145776L. That's because in
defining a type, it is not just the set of represented values that
matters for type equivalence, but also behavior of operations on that
type.
- Maciej
On Wed, Sep 17, 2008 at 10:52 PM, Brendan Eich <brendan at mozilla.org> wrote:
On Sep 17, 2008, at 7:48 PM, Sam Ruby wrote:
Anybody care to mark up what they would like to see the following look like?
Ship it!
:-)
(Not in ES3.1, certainly in Firefox 3.1 if we can... :-)
We need to discuss the former further next week. Some support for decimal was always in the plan for 3.1 (ever since I have been involved at least) and I've been working tirelessly to refine what that support is.
In Redmond, we should be in a position to determine what the remaining work items are for any of the features that anybody has ever proposed for ES3.1.
/be
- Sam Ruby
On Wed, Sep 17, 2008 at 10:53 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I think it is a tenable position that 1.5m === 1.5000m based on the "cohort" concept, since performing the same operation on both will give answers that are in the same "cohort" equivalence class. But 1.5 / 10.0 != 1.5m / 10.0, and indeed, the answers would not even be in the same cohort. A notion of 'cohort' equivalence class based on correspondence in the abstract to the same real number but divorced from the actual semantics of the programming language strikes me as incoherent. I think such a notion of equivalence class only makes sense if performing identical operations on members of the same cohort gives answers which are in the same cohort.
Are -0 and 0 in the same cohort?
2008/9/17 Mark S. Miller <erights at google.com>:
If that is the case then 1.5m / 10.0 != 1.5 / 10.0, and thus it seems wrong for 1.5m and 1.5 to be '==='.
0/-0 != 0/0. Does it thus seem wrong that -0 === 0?
Just so that I'm clear what your point is: it is worth noting that 42/0 != 42/0, yet hopefully we all agree that 42 === 42.
Perhaps the example that you were looking for was 1/0 != 1/-0?
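(For reference, what these zero-division expressions actually evaluate to in any ES engine:)

```javascript
console.log(0 / -0);          // NaN
console.log(0 / 0);           // NaN
console.log(0 / -0 != 0 / 0); // true, but only because NaN != NaN
console.log(1 / 0);           // Infinity
console.log(1 / -0);          // -Infinity
console.log(1 / 0 != 1 / -0); // true: this is where -0 and 0 genuinely differ
```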
Well, yes, actually it does seem wrong to me, but we all accept that particular wrongness. This is just more of the same.
"just" is a powerful word. Use sparingly.
- Sam Ruby
On Wed, Sep 17, 2008 at 10:53 PM, Maciej Stachowiak <mjs at apple.com>
wrote:
I think it is a tenable position that 1.5m === 1.5000m based on the "cohort" concept, since performing the same operation on both will give answers that are in the same "cohort" equivalence class. But 1.5 / 10.0 != 1.5m / 10.0, and indeed, the answers would not even be in the same cohort. A notion of 'cohort' equivalence class based on correspondence in the abstract to the same real number but divorced from the actual semantics of the programming language strikes me as incoherent. I think such a notion of equivalence class only makes sense if performing identical operations on members of the same cohort gives answers which are in the same cohort.
Are -0 and 0 in the same cohort?
In IEEE 754, no:
2.1.10 cohort: The set of all floating-point representations that represent a given floating-point number in a given floating-point format. In this context −0 and +0 are considered distinct and are in different cohorts.
(+0 and -0 are distinguishable in binary FP as well as in decimal FP; in fact this is the only case in binary where two finite numbers have the same value but different representations and encodings.)
Mike
Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
On Thu, Sep 18, 2008 at 7:54 AM, Sam Ruby <rubys at intertwingly.net> wrote:
2008/9/17 Mark S. Miller <erights at google.com>:
If that is the case then 1.5m / 10.0 != 1.5 / 10.0, and thus it seems wrong for 1.5m and 1.5 to be '==='.
0/-0 != 0/0. Does it thus seem wrong that -0 === 0?
Just so that I'm clear what your point is: it is worth noting that 42/0 != 42/0, yet hopefully we all agree that 42 === 42.
Perhaps the example that you were looking for was 1/0 != 1/-0?
Yes, thanks.
On Sep 18, 2008, at 1:06 AM, Mark S. Miller wrote:
"apart from somthing like significance" is a huge new set of
exceptions. A large enough set that we can no longer understand ===
as approximating operational equivalence. If instead we understand
=== to be asking whether two operands are cohorts, and we define
cohorts to be those numeric values that exactly represent a given
real number, then
- -0 vs 0 is no longer irregular
0 and -0 are in different cohorts, thanks to MFC for making this
clear (it seemed implied by past discussion of 1.5m, 1.50m, 1.500m,
etc. all being in the same cohort -- no sign difference -- but it's
good to have clarity).
- We can explain why 1.5 === 1.5m === 1.5000m
That's explained by cohorts in 754r.
- We can explain why 1.1 !== 1.1m.
The alternative to real number projection is conversion with
rounding, which is what people will actually run into when mixing
modes. (I smell lots of bugs there, btw, but that's a separate topic.)
The point here is that, for all practical meanings of "operational"
that I can think of, and excluding the -0 vs. 0 issue (or non-issue
-- it's been that way since ES1 and numerical analysts prefer it for
subtle reasons), defining decimal and binary floating point
equivalence based on projection idealizes away from the concrete
operational semantics, as Maciej pointed out with 1.5m/10 vs. 1.5/10.
Real numbers are wonderful (I mean this sincerely, in a Mathematical
sense), but they're neither well-supported (in ES, most languages, or
hardware) nor well-understood by most programmers (IMHO).
If we did make typeof 1.1m == "number" then I suspect we would break
some current code that "knows" about IEEE-754 double (precision
limits, e.g.).
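(One example of the kind of code Brendan means -- my illustration, not from the thread: integer arithmetic that silently assumes the 2^53 exactness limit of binary doubles, which a decimal representation would move:)

```javascript
// 2^53 is the last point at which consecutive integers are all exact
// in IEEE-754 double precision.
var limit = Math.pow(2, 53);      // 9007199254740992
console.log(limit + 1 === limit); // true: 2^53 + 1 rounds back to 2^53
console.log(limit + 2);           // 9007199254740994: even integers remain representable
console.log(limit - 1 + 1 === limit); // true: still exact below the limit
```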
The safest course still seems to me to be typeof 1.1m == "decimal".
On Sep 18, 2008, at 7:35 AM, Mark S. Miller wrote:
On Wed, Sep 17, 2008 at 10:53 PM, Maciej Stachowiak <mjs at apple.com> wrote:
I think it is a tenable position that 1.5m === 1.5000m based on the "cohort" concept, since performing the same operation on both will give answers that are in the same "cohort" equivalence class. But 1.5 / 10.0 != 1.5m / 10.0, and indeed, the answers would not even be in the same cohort. A notion of 'cohort' equivalence class based on correspondence in the abstract to the same real number but divorced from the actual semantics of the programming language strikes me as incoherent. I think such a notion of equivalence class only makes sense if performing identical operations on members of the same cohort gives answers which are in the same cohort.
Are -0 and 0 in the same cohort?
-0 === 0 is a quirk of the language that in my opinion we should not
repeat. Other than that oddity, === is a useful identity operator. As
Brendan said, 2^53 wrongs don't make a right.
But to answer your actual question, it seems to me they are not in the
same cohort in the same sense 1.5m and 1.500m are, since they may give
mathematically different answers under the same operation, answers
that differ in more than just precision.
- Maciej
On Thu, Sep 18, 2008 at 8:00 AM, Mike Cowlishaw <MFC at uk.ibm.com> wrote:
Are -0 and 0 in the same cohort?
In IEEE 754, no:
*2.1.10 cohort: *The set of all floating-point representations that represent a given floating-point number in a given floating-point format. In this context −0 and +0 are considered distinct and are in different cohorts.
(+0 and -0 are distinguishable in binary FP as well as in decimal FP; in fact this is the only case in binary where two finite numbers have the same value but different representations and encodings.)
I don't understand this definition of cohort. It seems to contradict itself. The first sentence implies that -0 and 0 are in the same cohort -- since they are different representations of the same number in the same given floating point format. Should I read the second sentence as a clarification -- in which case it seems inconsistent. Or should I read it as a qualification on the first sentence, saying in effect "well, except for -0 and 0, which are just weird".
Given that -0 and 0 are in different cohorts, let me probe the extent to which cohorts imply something like operational equivalence. For any arithmetic operator OP and any decimal floating point values X1,X2,Y1,Y2, is it the case that
cohort(X1,X2) & cohort(Y1,Y2) implies cohort(X1 OP Y1,X2 OP Y2)
? I say "arithmetic" above in order to include operations like "+" but not ".toString()".
Clearly, if -0 and 0 were cohorts, this would not hold. Since they aren't, does the above property hold for all decimal floating point values? If so, I withdraw the proposal to generalize cohort. In which case I agree that Sam's current proposal + an agreement never to supply compareTotal() (so that decimal can have a single NaNm) is the best proposal on the table.
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0
and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
/be
Sent from my iPhone
On Thu, Sep 18, 2008 at 4:52 PM, Brendan Eich <brendan at mozilla.org> wrote:
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0 and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
Yes, I understand their operational difference. Whether that difference means they are not the same number depends on how you define "number". I would have defined "representing the same number" by correspondence to the reals. YMMV.
In any case, my question is, does there exist any similar operational difference between decimal floating point values that are considered to be in the same cohort? If not, then I back Sam's proposal without compareTotal().
On Sep 18, 2008, at 5:13 PM, Mark S. Miller wrote:
On Thu, Sep 18, 2008 at 4:52 PM, Brendan Eich <brendan at mozilla.org> wrote:
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0 and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
Yes, I understand their operational difference. Whether that difference means they are not the same number depends on how you define "number". I would have defined "representing the same number" by correspondence to the reals. YMMV.
But the mathematical behavior of floating point numbers is not the same as that of the reals. Not just because of -0. See <docs.sun.com/source/806-3568/ncg_goldberg.html>
Floating point numbers are a useful rough approximation of the reals
for many practical calculations, but treating them as if they were in
fact a subset of the reals is not mathematically sound.
In any case, my question is, does there exist any similar
operational difference between decimal floating point values that
are considered to be in the same cohort? If not, then I back Sam's
proposal without compareTotal().
I assumed not, but it would be nice to hear from an expert.
If there is an operational difference (in the sense you helpfully
expressed in more precise form) then I tend to think it undermines the
notion of cohort.
- Maciej
On Thu, Sep 18, 2008 at 5:22 PM, Maciej Stachowiak <mjs at apple.com> wrote:
On Sep 18, 2008, at 5:13 PM, Mark S. Miller wrote:
On Thu, Sep 18, 2008 at 4:52 PM, Brendan Eich <brendan at mozilla.org> wrote:
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0 and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
Yes, I understand their operational difference. Whether that difference means they are not the same number depends on how you define "number". I would have defined "representing the same number" by correspondence to the reals. YMMV.
But the mathematical behavior of floating point numbers is not the same as that of the reals. Not just because of -0. See <docs.sun.com/source/806-3568/ncg_goldberg.html>
Floating point numbers are a useful rough approximation of the reals for many practical calculations, but treating them as if they were in fact a subset of the reals is not mathematically sound.
Long long ago I actually had read that document carefully, and I had also looked at, I think, the [Brown 1981] reference which it cites. (But the doc has no bibliography. Anyone have a pointer?) My memory of the theory of floating point is that the numbers are exact but the operations are approximate.
- A given finite floating point value exactly represents a given real number.
- The subset of real numbers which are exactly representable in a given format are its "representable numbers".
- The standards require various well behaved properties like:
Let Tr = B2R(Xb) +r B2R(Yb)   // Tr stands for True sum as a real number
If representable(Tr) then Tr ===r B2R(Xb +b Yb)
Else (greatest representable number <r Tr) <r B2R(Xb +b Yb) <r (smallest representable number >r Tr)
Since we have many people on this list much more expert on this than I, if the above summary is mistaken, I'm sure someone will correct.
2008/9/18 Mark S. Miller <erights at google.com>
The standards require various well behaved properties like:
Let Tr = B2R(Xb) +r B2R(Yb)   // Tr stands for True sum as a real number
If representable(Tr) then Tr ===r B2R(Xb +b Yb)
Else (greatest representable number <r Tr) <r B2R(Xb +b Yb) <r (smallest representable number >r Tr)
Oops. That last line should obviously be
Else (greatest representable number <r Tr) <=r B2R(Xb +b Yb) <=r
(smallest representable number >r Tr)
Mark S. Miller wrote:
Long long ago I actually had read that document carefully, and I had also looked at I think the [Brown 1981] which it cites. (But the doc has no bibliography. Anyone have a pointer?) My memory of the theory of floating point is that the numbers are exact but the operations are approximate.
More precisely, the basic operations are "correctly rounded". That is, they calculate the nearest representable value to the real result using a specified rounding rule. Nothing in the basic operations of floating point arithmetic is approximate -- they are just not the same operations as the similarly named ones on real numbers.
(Some of the more complicated operations are specified nondeterministically.)
You can, of course, use this exact arithmetic to represent values that are only known approximately. That is where I think that the confusion creeps into some people's mental model of floating point; they are not distinguishing the "map" (floating point values) from the "territory" (the application-domain values being represented).
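(A familiar binary example of "correctly rounded, but not real addition":)

```javascript
// 0.1 + 0.2 yields exactly the double nearest the real sum of the two
// doubles' exact values -- which is not the double nearest the real 0.3.
console.log(0.1 + 0.2);                         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);                 // false
console.log(0.1 + 0.2 === 0.30000000000000004); // true: deterministic, exactly specified
```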
A given finite floating point value exactly represents a given real number.
The subset of real numbers which are exactly representable in a given format are its "representable numbers".
The standards require various well behaved properties like:
Let Tr = B2R(Xb) +r B2R(Yb)   // Tr stands for True sum as a real number
If representable(Tr) then Tr ===r B2R(Xb +b Yb)
Else (greatest representable number <r Tr) <r B2R(Xb +b Yb) <r (smallest representable number >r Tr)
You mean
Else (greatest representable number <r Tr) <=r B2R(Xb +b Yb) <=r
(smallest representable number >r Tr)
On Sep 18, 2008, at 6:03 PM, Mark S. Miller wrote:
On Thu, Sep 18, 2008 at 5:22 PM, Maciej Stachowiak <mjs at apple.com> wrote:
On Sep 18, 2008, at 5:13 PM, Mark S. Miller wrote:
On Thu, Sep 18, 2008 at 4:52 PM, Brendan Eich <brendan at mozilla.org> wrote:
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0 and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
Yes, I understand their operational difference. Whether that difference means they are not the same number depends on how you define "number". I would have defined "representing the same number" by correspondence to the reals. YMMV.
But the mathematical behavior of floating point numbers is not the same as that of the reals. Not just because of -0. See <docs.sun.com/source/806-3568/ncg_goldberg.html>
Floating point numbers are a useful rough approximation of the reals for many practical calculations, but treating them as if they were in fact a subset of the reals is not mathematically sound.
Long long ago I actually had read that document carefully, and I had also looked at, I think, the [Brown 1981] reference which it cites. (But the doc has no bibliography. Anyone have a pointer?) My memory of the theory of floating point is that the numbers are exact but the operations are approximate.
I'm not sure what it means for values to be "exact" representations of
real numbers when operations on them break even the most fundamental
identities of real arithmetic. For many values of A, B and C,
(A + B) + C != A + (B + C). And with the right values, it's not just a tiny
difference of units in the last place, these expressions can be wildly
different in value.
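(A concrete instance of this failure -- the values are my choice:)

```javascript
// Binary floating point addition is not associative, and the error can
// be total, not just a unit in the last place:
var a = 1e30, b = -1e30, c = 1;
console.log((a + b) + c); // 1: a + b is exactly 0, then 0 + 1
console.log(a + (b + c)); // 0: c is absorbed into b, then a + b cancels
```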
The bottom line to me is, if you are very very careful, then floating
point numbers might be a useful approximate model of the real numbers
for a given problem, but fundamentally their behavior is very
different from the reals and they should not be considered to truly
map to any subset of R.
- Maciej
"Mark S. Miller" <erights at google.com> wrote:
On Thu, Sep 18, 2008 at 4:52 PM, Brendan Eich <brendan at mozilla.org>
wrote:
-0 and 0 are not the same "given floating point number". 1/-0 vs. 1/0 and Math.atan2(-0,0) vs. Math.atan2(0,0) are but two examples.
Yes, I understand their operational difference. Whether that difference means they are not the same number depends on how you define "number". I would have defined "representing the same number" by correspondence to the reals. YMMV.
I was quoting the IEEE 754 definition of cohort, and that uses 'floating-point number'. The IEEE 754 definition of 'floating-point number' is:
2.1.25 floating-point number: A finite or infinite number that is representable in a floating-point format. A floating-point datum that is not a NaN. All floating-point numbers, including zeros and infinities, are signed.
There's a helpful diagram at the beginning of subclause 3.2 that shows the four layers; considering just the finites for now, these are:
- Real numbers -- the mathematical underpinning.
- Floating-point numbers -- the set of discrete values that a format can represent (-0 and +0 are distinct, because of their relationship to +/- Infinity).
- Floating-point representations -- the set of number representations in a format: {sign, exponent, significand}. For binary, there is only one representation for each number in a format; for decimal there may be many.
- Encodings -- the specific bit-strings that encode a representation.
In any case, my question is, does there exist any similar operational difference between decimal floating point values that are considered to be in the same cohort?
Numerically, no. (There are operations that can distinguish between different members of the same cohort, such as sameQuantum, but all members of the same cohort always represent the same floating-point number.)
(IEEE 754 avoids the word 'value', because -0 and +0 have the same value (they are numerically equal) but they are different numbers, but 'value' and 'number' are often used loosely to mean the same thing.)
If not, then I back Sam's proposal without compareTotal().
OK. ES needs to have a discussion about sign of NaNs, signaling NaNs, payloads, etc. -- for binary as well as decimal -- but perhaps that's better left until the Harmony timeframe.
Mike
Maciej Stachowiak wrote:
On Sep 18, 2008, at 6:03 PM, Mark S. Miller wrote:
[re: docs.sun.com/source/806-3568/ncg_goldberg.html]
Long long ago I actually had read that document carefully, and I had also looked at I think the [Brown 1981] which it cites. (But the doc has no bibliography. Anyone have a pointer?) My memory of the theory of floating point is that the numbers are exact but the operations are approximate.
I'm not sure what it means for values to be "exact" representations of real numbers when operations on them break even the most fundamental identities of real arithmetic. For many values of A, B and C, (A + B) + C != A + (B + C).
That's because those '+' operators are not real addition; they are floating-point addition (and also the '!=' operator is not real inequality). The values still correspond exactly to real values (we can be more specific and say that they correspond exactly to rational values).
Mark S. Miller wrote:
In both cases, it would seem that new numeric types must still be added by the language's providers rather than the language's users. This is the tragic constraint that none of the present proposals have been able to escape. I would much rather see us work on that problem, rather than trying to decide what is the next numeric type we language designers need to cook in.
And this brings us full circle back to the early days of ES4, which had operator overloading facilities to do this kind of type definition within the language. Most folks didn't like the complexity.
Waldemar
Mark S. Miller wrote:
In any case, my question is, does there exist any similar operational difference between decimal floating point values that are considered to be in the same cohort? If not, then I back Sam's proposal without compareTotal().
Depends on which proposal you're referring to.
In Sam's proposal toString is one such operation, but this is not inherent in IEEE754r.
IEEE754r does have a few other such operations besides compareTotal, but they're about as useful to us as implementing signaling NaNs would be: quantize (5.3.2), sameQuantum (5.7.3), perhaps decimal<->decimal format conversions (there are several 128-bit decimal encoding formats in that standard).
Waldemar
On Sep 17, 2008, at 6:54 PM, Douglas Crockford wrote:
Only under the "use decimal" pragma, right?
Without "use decimal", typeof 1.1m must not be "number" to preserve
this same invariant. Otherwise (without "use decimal") 1.5m == 1.5
but 1.1m != 1.1, so without making typeof 1.5m != typeof 1.1m, we
cannot have typeof 1.5m == "number".
I do not believe anyone wants typeof 1.5m != typeof 1.1m.
So without "use decimal", typeof 1.1m and typeof 1.5m must be "decimal".
As discussed previously in late August, typeof applied to any decimal
should not be "object" or else
typeof x == "object" && !x => x === null
is not preserved. Agreed?
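(That invariant can be spot-checked against the primitive ES3 falsy values -- a sketch; host objects are outside the check:)

```javascript
// Among primitive falsy values, only null has typeof "object", so the
// test (typeof x == "object" && !x) should single out null alone.
var falsies = [null, undefined, false, 0, -0, NaN, ""];
var matches = [];
for (var i = 0; i < falsies.length; i++) {
  if (typeof falsies[i] == "object" && !falsies[i]) {
    matches.push(falsies[i]);
  }
}
console.log(matches.length);      // 1
console.log(matches[0] === null); // true
```

A falsy decimal wrapper with typeof "object" is exactly the kind of new value that would break this.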
Approximate, as in incorrectly rounded :-/.
Many people agree with the intent of your proposal, which has been a
proposal from various people in similar form for a while. Evidence
includes the fact that
bugzilla.mozilla.org/show_bug.cgi?id=5856
keeps collecting dups. A "use decimal" should mean "use decimal as
number".
But double values leaking into a "use decimal" context, mixing with
decimal values in expressions, could be worse than a rare exception.
They could cause different control flow, depending on relational
operators: the missile launches when it should not. They would tend
to produce hard-to-find yet non-trivial bugs.
It would be better to make it an error for a double to flow into a
mixed-mode expression in a "use decimal" context.