Revisiting Decimal
On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:
Sam Ruby wrote:
So now the question is: where are we now?
For the time being, concentrating on other things that will be in
ES3.1. That was the main point of removing Decimal, no?
es-discuss at mozilla.org has more bandwidth than singular focus on 3.1.
If you do not, no problem -- but please don't quash discussion.
Brendan Eich wrote:
On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:
Sam Ruby wrote:
So now the question is: where are we now?
For the time being, concentrating on other things that will be in ES3.1. That was the main point of removing Decimal, no?
es-discuss at mozilla.org has more bandwidth than singular focus on 3.1.
OK, but there is no longer any current detailed spec for Decimal to comment on. I think the only major outstanding semantic issue was wrapper objects; apart from that, the devil was in the detail of spec wording.
David-Sarah Hopwood wrote:
Brendan Eich wrote:
On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:
Sam Ruby wrote:
So now the question is: where are we now?
For the time being, concentrating on other things that will be in ES3.1. That was the main point of removing Decimal, no?
es-discuss at mozilla.org has more bandwidth than singular focus on 3.1.
OK, but there is no longer any current detailed spec for Decimal to comment on. I think the only major outstanding semantic issue was wrapper objects; apart from that, the devil was in the detail of spec wording.
Would it be possible for you to expand upon "only major outstanding semantic issue was wrapper objects"? Can you give even a single example of an ECMAScript expression for which the consensus is in dispute as to what the correct results are?
If it helps, take a look at the following:
intertwingly.net/stories/2008/09/20/estest.html, esdiscuss/2008-August/007261
What I would like to determine is whether or not there is consensus on what the desired behavior for Decimal should be, and it would be helpful if those that maintain that there is not consensus could review the content provided by the two links I provided above.
- Sam Ruby
On Dec 4, 2008, at 12:39 PM, David-Sarah Hopwood wrote:
Brendan Eich wrote:
On Dec 4, 2008, at 10:22 AM, David-Sarah Hopwood wrote:
Sam Ruby wrote:
So now the question is: where are we now?
For the time being, concentrating on other things that will be in
ES3.1. That was the main point of removing Decimal, no?
es-discuss at mozilla.org has more bandwidth than singular focus on 3.1.
OK,
Good, since we have harmonious energy for lambda syntax on this list
(you do, at any rate -- so do I, don't get me wrong -- but let's play
fair with Decimal for Harmony as well as Lambda for Harmony).
but there is no longer any current detailed spec for Decimal to comment on.
Sam pointed that out too, and directed everyone to his test-implementation results page:
intertwingly.net/stories/2008/09/20/estest.html
Indeed we still have an open issue there, ignoring the wrapper one:
I think the only major outstanding semantic issue was wrapper objects; apart from that, the devil was in the detail of spec wording.
No, the cohort/toString issue remains too (at least).
2008/12/4 Brendan Eich <brendan at mozilla.com>:
Sam pointed that out too, and directed everyone to his test-implementation results page: intertwingly.net/stories/2008/09/20/estest.html
Indeed we still have an open issue there, ignoring the wrapper one:
I think the only major outstanding semantic issue was wrapper objects; apart from that, the devil was in the detail of spec wording.
No, the cohort/toString issue remains too (at least).
With a longer schedule, I would like to revisit that; but as of Redmond, we had consensus on what that would look like in the context of a 3.1 edition.
Sam's mail cited below has gone without a reply for over a month.
Decimal is surely not a high priority, but this message deserves some
kind of response or we'll have to reconstruct the state of the
argument later, at probably higher cost.
I was not at the Redmond meeting, but I would like to take Sam's word
that the "cohort/toString" issue was settled there. I heard from Rob
Sayre something to this effect.
But in case we don't have consensus, could any of you guys state the
problem for the benefit of everyone on this list? Sorry if this seems
redundant. It will help, I'm convinced (compared to no responses and
likely differing views of what the problem is, or what the consensus
was, followed months later by even more painful reconstruction of the
state of the argument).
The wrapper vs. primitive issue remains, I believe everyone agrees.
What is the current state of the result of typeof on decimals; was there consensus on this? I hope we will be using typeof 1.1m -> "number". For a little bit of empirical evidence, I went through Dojo's codebase and there are numerous places where we would probably want to alter our code to include additional checks for "decimal" if typeof 1.1m -> "decimal", whereas if "number" we would probably leave virtually everything intact in its number handling, with consideration for decimals. Thanks, Kris
Brendan Eich wrote:
Sam's mail cited below has gone without a reply for over a month. Decimal is surely not a high priority, but this message deserves some kind of response or we'll have to reconstruct the state of the argument later, at probably higher cost.
I was not at the Redmond meeting, but I would like to take Sam's word that the "cohort/toString" issue was settled there. I heard from Rob Sayre something to this effect.
But in case we don't have consensus, could any of you guys state the problem for the benefit of everyone on this list? Sorry if this seems redundant. It will help, I'm convinced (compared to no responses and likely differing views of what the problem is, or what the consensus was, followed months later by even more painful reconstruction of the state of the argument).
The wrapper vs. primitive issue remains, I believe everyone agrees.
/be
On Dec 4, 2008, at 2:22 PM, Sam Ruby wrote:
2008/12/4 Brendan Eich <brendan at mozilla.com>:
Sam pointed that out too, and directed everyone to his test-implementation results page: intertwingly.net/stories/2008/09/20/estest.html Indeed we still have an open issue there ignoring the wrapper one:
[Sam wrote:] I think the only major outstanding semantic issue was wrapper objects; apart from that, the devil was in the detail of spec wording.[End Sam]
No, the cohort/toString issue remains too (at least).
With a longer schedule, I would like to revisit that; but as of Redmond, we had consensus on what that would look like in the context of a 3.1 edition.
From where I sit, I find myself in the frankly surreal position that we are in early December, and there are no known issues of consensus, though I respect that David-Sarah claims that there is one on wrappers, and I await further detail.
/be
- Sam Ruby
On Jan 9, 2009, at 2:54 PM, Kris Zyp wrote:
What is the current state of the result of typeof on decimals; was there consensus on this?
Did you follow Sam's link?
intertwingly.net/stories/2008/09/20/estest.html
I hope we will be using typeof 1.1m -> "number". For a little bit of empirical evidence, I went through Dojo's codebase and there are numerous places where we would probably want to alter our code to include additional checks for "decimal" if typeof 1.1m -> "decimal", whereas if "number" we would probably leave virtually everything intact in its number handling, with consideration for decimals. Thanks,
The counter-argument is strong:
typeof x == typeof y => (x == y <=> x === y)
but 1.1 != 1.1m for fundamental reasons.
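To spell the hazard out (a sketch; the m suffix is the proposed decimal literal syntax and runs in no shipping engine):

    // Today, equal typeofs guarantee that == and === agree:
    typeof 1.1 == typeof 2.2   // "number" == "number"
    1.1 == 1.1                 // true
    1.1 === 1.1                // true

    // If typeof 1.1m were "number", the guarantee would break:
    typeof 1.1 == typeof 1.1m  // would be true
    1.1 == 1.1m                // false: 1.1 has no exact double value
    1.1 === 1.1m               // false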
The counter-argument is strong:
typeof x == typeof y => (x == y <=> x === y)
but 1.1 != 1.1m for fundamental reasons.
I understand the counter-argument, but with such an overwhelming number of typeof uses having far easier migration with "number", I can't possibly see how the desire to preserve this property is more important than better usability for the majority of use cases. Do you think other libraries and JS code are that vastly different than Dojo? Thanks, Kris
On Jan 9, 2009, at 3:08 PM, Kris Zyp wrote:
The counter-argument is strong:
typeof x == typeof y => (x == y <=> x === y)
but 1.1 != 1.1m for fundamental reasons.
I understand the counter-argument, but with such an overwhelming number of typeof uses having far easier migration with "number",
Migration how? You'll have to change something to "use decimal" or s/1.1/1.1m/. Only once you do that can you be sure about all operands being decimal.
I'm assuming it would be "bad" in the Dojo code you've looked at if
1.1 came in from some standard library that returns doubles, and was
tested against 1.1m via == or === with false result, where previous to
decimal being added, the result would be true.
I can't possibly see how the desire to preserve this property is more important than better usability for the majority of use cases.
You really need to show some of these use cases from Dojo. I have a
hard time believing you've ruled out mixed-mode accidents.
Brendan Eich wrote:
On Jan 9, 2009, at 3:08 PM, Kris Zyp wrote:
The counter-argument is strong:
typeof x == typeof y => (x == y <=> x === y)
but 1.1 != 1.1m for fundamental reasons.
I understand the counter-argument, but with such an overwhelming number of typeof uses having far easier migration with "number",
Migration how? You'll have to change something to "use decimal" or s/1.1/1.1m/. Only once you do that can you be sure about all operands being decimal.
And I am sure our users will do that and pass decimals into our library functions.
I'm assuming it would be "bad" in the Dojo code you've looked at if 1.1 came in from some standard library that returns doubles, and was tested against 1.1m via == or === with false result, where previous to decimal being added, the result would be true.
I am not aware of any situations in the Dojo codebase where this would cause a problem. I can't think of any place where we use an equivalence test and users would expect that decimal behave in the same way as a double. Do you have any expected pitfalls that I could look for in Dojo?
I can't possibly see how the desire to preserve this property is more important than better usability for the majority of use cases.
You really need to show some of these use cases from Dojo. I have a hard time believing you've ruled out mixed-mode accidents.
Ok, sounds good, I will be glad to be corrected if I am misunderstanding this. Here are some of the places where I believe we would probably add extra code to handle the case of typeof checks where decimal values may have been passed in by users, and we would want the behavior to be the same as a number:
As I have mentioned before, we would need to change our JSON serializer to handle "decimal": archive.dojotoolkit.org/nightly/dojotoolkit/dojo/_base/json.js (line 118)
Our parser function would need to add support for "decimal": archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js (line 32)
Matrix math handling for our graphics module: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/matrix.js (line 88 is one example). Actually there are numerous situations in the graphics packages where a decimal should be acceptable for defining coordinates, scaling, etc.: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx
Charting also has a number of places where decimals should be an acceptable form of a number: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting -- for example: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/action2d/Magnify.js (line 22)
Again, I understand there are difficulties with typeof 1.1m returning "number", but in practice it seems we would experience far more pain with "decimal".
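For concreteness, the kind of dispatch Kris is pointing at looks roughly like this (a hypothetical reduction of the serializer pattern at json.js line 118, not the actual Dojo source):

    function serializeValue(it) {
        if (typeof it == "number") {
            // If typeof 1.1m == "decimal", decimals skip this branch...
            return isFinite(it) ? String(it) : "null";
        }
        if (typeof it == "string") {
            return '"' + it + '"';
        }
        // ...and fall through to the generic object case, so a
        // "decimal" branch would have to be added here.
        return serializeObject(it);   // hypothetical helper
    }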
On Jan 14, 2009, at 4:44 PM, Kris Zyp wrote:
Brendan Eich wrote:
On Jan 9, 2009, at 3:08 PM, Kris Zyp wrote:
The counter-argument is strong:
typeof x == typeof y => (x == y <=> x === y)
but 1.1 != 1.1m for fundamental reasons.
I understand the counter-argument, but with such an overwhelming number of typeof uses having far easier migration with "number",
Migration how? You'll have to change something to "use decimal" or s/1.1/1.1m/. Only once you do that can you be sure about all operands being decimal.
And I am sure our users will do that and pass decimals into our library functions.
I'm not disputing that. Straw man?
I'm assuming it would be "bad" in the Dojo code you've looked at if 1.1 came in from some standard library that returns doubles, and was tested against 1.1m via == or === with false result, where previous to decimal being added, the result would be true.
I am not aware of any situations in the Dojo codebase where this would cause a problem. I can't think of any place where we use an equivalence test and users would expect that decimal behave in the same way as a double. Do you have any expected pitfalls that I could look for in Dojo?
Sure, starting with JSON (see below).
I can't possibly see how the desire to preserve this property is more important than better usability for the majority of use cases.
You really need to show some of these use cases from Dojo. I have a hard time believing you've ruled out mixed-mode accidents.
Ok, sounds good, I will be glad to be corrected if I am misunderstanding this. Here are some of the places where I believe we would probably add extra code to handle the case of typeof checks where decimal values may have been passed in by users, and we would want the behavior to be the same as a number: As I have mentioned before, we would need to change our JSON serializer to handle "decimal": archive.dojotoolkit.org/nightly/dojotoolkit/dojo/_base/json.js (line 118)
You need to change this in any case, since even though the JSON RFC
allows arbitrary precision decimal literals, real-world decoders only
decode into IEEE doubles. You'd have to encode decimals as strings and
decode them using domain-specific (JSON schema based) type knowledge.
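A sketch of the strings-plus-schema approach Brendan describes; the field name, the reviver logic, and the decimal constructor are illustrative assumptions, not an agreed API:

    // Encode: the decimal travels as a string, so double-only peers
    // round-trip it without loss.
    var payload = JSON.stringify({price: "19.99", qty: 3});

    // Decode: domain knowledge (a hard-coded field name standing in
    // for JSON-schema information) says which strings are decimals.
    var obj = JSON.parse(payload, function (key, value) {
        // Decimal.parse is a hypothetical decimal constructor.
        return key == "price" ? Decimal.parse(value) : value;
    });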
Our parser function would need to add support for "decimal" archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js (line 32)
You're right, this parser would need to be extended. But if typeof
1.1m == "number", then str2ob around line 52 might incorrectly call
Number on a decimal string literal that does not convert to double
(which Number must do, for backward compatibility), or else return a
double NaN (not the same as a decimal NaN, although it's hard to tell
-- maybe impossible?).
It seems to me you are assuming that decimal and double convert to and
from string equivalently. This is false.
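The string round-trip difference is observable with plain Number today; the decimal half is the proposed behavior:

    Number("0.1")                 // nearest double: 0.1000000000000000055511...
    Number("1.0000000000000001")  // 1 -- the 17th significant digit is lost

    // A decimal parse would keep those digits, so 1.0000000000000001m
    // != 1m; str2obj calling Number here would silently change values.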
Matrix math handling for our graphics module: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/matrix.js (line 88 is one example)
I couldn't be sure, but by grepping for .xx and .yy I think I saw
only generic arithmetic operators and no mode mixing, so you're
probably right that this code would work if typeof 1.1m == "number".
Someone should take a closer look:
:g/.xx/p
this.xx = this.yy = arg;
matrix.xx = l.xx * r.xx + l.xy * r.yx;
matrix.xy = l.xx * r.xy + l.xy * r.yy;
matrix.yx = l.yx * r.xx + l.yy * r.yx;
matrix.dx = l.xx * r.dx + l.xy * r.dy + l.dx;
D = M.xx * M.yy - M.xy * M.yx,
yx: -M.yx/D, yy: M.xx/D,
dy: (M.yx * M.dx - M.xx * M.dy) / D
return {x: matrix.xx * x + matrix.xy * y + matrix.dx, y: matrix.yx * x + matrix.yy * y + matrix.dy}; // dojox.gfx.Point
M.xx = l.xx * r.xx + l.xy * r.yx;
M.xy = l.xx * r.xy + l.xy * r.yy;
M.yx = l.yx * r.xx + l.yy * r.yx;
M.dx = l.xx * r.dx + l.xy * r.dy + l.dx;
:g/.yy/p
this.xx = this.yy = arg;
matrix.xy = l.xx * r.xy + l.xy * r.yy;
matrix.yx = l.yx * r.xx + l.yy * r.yx;
matrix.yy = l.yx * r.xy + l.yy * r.yy;
matrix.dy = l.yx * r.dx + l.yy * r.dy + l.dy;
D = M.xx * M.yy - M.xy * M.yx,
xx: M.yy/D, xy: -M.xy/D,
dx: (M.xy * M.dy - M.yy * M.dx) / D,
return {x: matrix.xx * x + matrix.xy * y + matrix.dx, y: matrix.yx * x + matrix.yy * y + matrix.dy}; // dojox.gfx.Point
M.xy = l.xx * r.xy + l.xy * r.yy;
M.yx = l.yx * r.xx + l.yy * r.yx;
M.yy = l.yx * r.xy + l.yy * r.yy;
M.dy = l.yx * r.dx + l.yy * r.dy + l.dy;
I did not look at other files.
Actually there are numerous situations in the graphics packages where a decimal should be acceptable for defining coordinates, scaling,
etc.: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx
Only if never compared to a double. How do you prevent this?
Charting also has a number of places where decimals should be an acceptable form of a number: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting For example: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/action2d/Magnify.js (line 22)
I will look at these later as time allows, pending replies on above
points.
Again, I understand there are difficulties with typeof 1.1m returning "number", but in practice it seems we would experience far more pain with "decimal".
Trouble for you Dojo maintainers but savings for users. You may have
to do a bit more work to avoid imposing bugs on your users. That's
life in the big Dojo city.
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen parses (at least some, if not all) numbers to Java's BigDecimal. JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
Our parser function would need to add support for "decimal" archive.dojotoolkit.org/nightly/dojotoolkit/dojo/parser.js (line 32)
You're right, this parser would need to be extended. But if typeof 1.1m == "number", then str2obj around line 52 might incorrectly call Number on a decimal string literal that does not convert to double (which Number must do, for backward compatibility), or else return a double NaN (not the same as a decimal NaN, although it's hard to tell -- maybe impossible?).
It seems to me you are assuming that decimal and double convert to and from string equivalently. This is false.
Actually there are numerous situations in the graphics packages where a decimal should be acceptable for defining coordinates, scaling, etc.: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx
Only if never compared to a double. How do you prevent this?
We already agree that the decimal-double comparison will always be false. The point is that this is representative of real world code that benefits more from the treatment of decimals as numbers.
Charting also has a number of places where decimals should be an acceptable form of a number: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting For example: archive.dojotoolkit.org/nightly/dojotoolkit/dojox/charting/action2d/Magnify.js (line 22)
I will look at these later as time allows, pending replies on above points.
Again, I understand there are difficulties with typeof 1.1m returning "number", but in practice it seems we would experience far more pain with "decimal".
Trouble for you Dojo maintainers but savings for users. You may have to do a bit more work to avoid imposing bugs on your users. That's life in the big Dojo city.
If that's true, that's fine, I have no problem with Dojo feeling the pain for the sake of others, but I still find it very surprising that Dojo code would be so misrepresentative of real code out there today. Dojo covers a very broad swath of topics. Do you really think real world JS is that much different than Dojo's? Kris
/be
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js?
It and the json.js alternative JS implementation are popular. json2.js
contains
String.prototype.toJSON =
Number.prototype.toJSON =
Boolean.prototype.toJSON = function (key) {
return this.valueOf();
};
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have
Java BigDecimal, so I don't know how it can be relevant.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations
that use doubles not decimals.
We already agree that the decimal-double comparison will always be false.
No, only for some values with ==. See intertwingly.net/stories/2008/08/27/estest.html :
1.5m == 1.5 true
The point is that this is representative of real world code that benefits more from the treatment of decimals as numbers.
It's not a question of more or less. If you let decimals and numbers
mix, you'll get data-dependent, hard to diagnose bugs. If you do not,
then you won't (and Dojo maintainers will have to work a bit to extend
their code to handle decimals -- which is the right trade. Recall Mr.
Spock's dying words from STII:TWoK :-).
If that's true, that's fine, I have no problem with Dojo feeling the pain for the sake of others, but I still find it very surprising that Dojo code would be so misrepresentative of real code out there today.
It's not necessarily representative. It's not necessarily
misrepresentative. But we need to agree on how decimal as proposed
compares to number (double) first, since from what you wrote above I
see misunderstanding.
Dojo covers a very broad swath of topics. Do you really think real world JS is that much different than Dojo's?
I have no idea, but this is completely beside the point. Breaking
typeof x == typeof y => (x == y <=> x === y) for decimal will break
existing code in data-dependent, hard to diagnose ways.
Adding a new typeof code will not depend on the value of a given
decimal: any decimal will cause control to fall into an else, default,
or unhandled case, which is strictly easier to debug and fix. Plus,
any future JS standard with Decimal will be a big enough deal that
porting will be obligatory and understood, by the time browsers adopt
decimal.
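The difference in failure modes, sketched (the handlers are hypothetical):

    switch (typeof v) {
    case "number":
        handleDouble(v);   // hypothetical handler
        break;
    case "string":
        handleString(v);   // hypothetical handler
        break;
    default:
        // With typeof 1.1m == "decimal", every decimal lands here,
        // for any value -- a loud, easily found missing case.
    }

    // With typeof 1.1m == "number" instead, decimals take the number
    // path and only misbehave for particular values:
    //   v == 1.5 is true for both 1.5 and 1.5m
    //   v == 1.1 is true for 1.1 but false for 1.1m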
Brendan Eich wrote:
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js?
It and the json.js alternative JS implementation are popular. json2.js contains
String.prototype.toJSON = Number.prototype.toJSON = Boolean.prototype.toJSON = function (key) { return this.valueOf(); };
Of course, there is no decimal support in ES3, there is no other option.
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have Java BigDecimal, so I don't know how it can be relevant.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
It's not a question of more or less. If you let decimals and numbers mix, you'll get data-dependent, hard to diagnose bugs. If you do not, then you won't (and Dojo maintainers will have to work a bit to extend their code to handle decimals -- which is the right trade. Recall Mr. Spock's dying words from STII:TWoK :-).
So you are suggesting that we shouldn't let users pass a mix of decimals and numbers even if they explicitly attempt to do so?
If that's true, that's fine, I have no problem with Dojo feeling the pain for the sake of others, but I still find it very surprising that Dojo code would be so misrepresentative of real code out there today.
It's not necessarily representative. It's not necessarily misrepresentative. But we need to agree on how decimal as proposed compares to number (double) first, since from what you wrote above I see misunderstanding.
Dojo covers a very broad swath of topics. Do you really think real world JS is that much different than Dojo's?
I have no idea, but this is completely beside the point. Breaking typeof x == typeof y => (x == y <=> x === y) for decimal will break existing code in data-dependent, hard to diagnose ways.
Adding a new typeof code will not depend on the value of a given decimal: any decimal will cause control to fall into an else, default, or unhandled case, which is strictly easier to debug and fix. Plus, any future JS standard with Decimal will be a big enough deal that porting will be obligatory and understood, by the time browsers adopt decimal.
It's not beside my point. If significantly more real world code will break due to violating the expected invariant of a constant finite set of typeof results (and the expectation that numbers regardless of precision will be typeof -> "number") than due to violating the expected invariant of typeof x == typeof y => (x == y <=> x === y), then I think we would be negligent as language designers to ignore that consideration. I understand the logical concerns, but I would love to see real empirical evidence that contradicts my suspicions.
Kris
On Jan 14, 2009, at 9:32 PM, Kris Zyp wrote:
Of course, there is no decimal support in ES3, there is no other
option.
This is not strictly true:
code.google.com/p/gwt-math/source/browse/trunk/gwt-math/js_originals/bigdecimal.js
The point is that JSON peers that do math on numbers, to interoperate
in general, need to parse and stringify to the same number type. It
may be ok if only ints that fit in a double are used by a particular
application or widget, but the syntax allows for fraction and
exponent, which begs representation-type precision and radix questions.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
It's probably a mix, with application-dependent restrictions on domain
and/or computation so that using either double or decimal works, or
else buggy lack of such restrictions.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
Not necessarily. But correctness is not a matter of hopes or
percentages. It may be fine for JSON to leave it to the app to choose
number type and/or operations done on the data. But some layer has to
care. Some apps probably already depend on json2.js and json.js and
the like (ES3.1's JSON built-in) using double, not decimal. Changing a
future JSON codec to use decimal instead of double is not a backward-compatible change.
So you are suggesting that we shouldn't let users pass a mix of decimals and numbers even if they explicitly attempt to do so?
No, I'm suggesting unintended mixed-mode bugs will be common if we
make typeof 1.1m == "number".
It's not beside my point. If significantly more real world code will break due to violating the expected invariant of a constant finite set of typeof results (and the expectation that numbers regardless of precision will be typeof -> "number") than due to violating the expected invariant of typeof x == typeof y => (x == y <=> x === y)
We can't measure this, realistically, but again: the breakage from a
new typeof result is not dependent on the numeric value of the
operand, and entails either a missing case, or a possibly insufficient
default case, while the breakage from your proposal is subtly data-dependent.
Plus, the invariant (while not holy writ) is an important property of
JS to conserve, all else equal.
then I think we would be negligent as language designers to ignore that consideration.
It's not a consideration if it can't be quantified, and if it
introduces value-dependent numeric bugs. Decimal and double are
different enough that typeof should tell the truth. 1.1m != 1.1, 1.2m != 1.2, but 1.5m == 1.5.
I understand the logical concerns, but I would love to see real empirical evidence that contradicts my
suspicions.
I gave some already, you didn't reply. Here's one, about dojotoolkit/dojo/parser.js:
"But if typeof 1.1m == "number", then str2obj around line 52 might
incorrectly call Number on a decimal string literal that does not
convert to double (which Number must do, for backward
compatibility), ...."
It won't do to assume your proposal saves effort and demand me to
prove you wrong. First, no one has access to all the extant typeof x
== "number" code to do the analysis and prove the majority of such
code would "work" with your proposal. This is akin to proving a
negative. Second, I've given evidence based on Dojo that shows
incompatibility if typeof 1.1m == "number".
How about we talk about an alternative: "use decimal" as a way to make
all literals, operators, and built-ins decimal never double?
The problem with this "big red switch" is that it requires conversion
from outside the lexical scope in which the pragma is enabled, since
code outside could easily pass double data into functions or variables
in the pragma's scope. It requires a decimal-based suite of Math,
etc., built-ins too, but that may be ok (it was contemplated for ES4).
The problem with this old idea is really the challenge of ensuring
conversion when crossing the pragma's lexical scope boundary. Presumably
double numbers going in would convert to decimal, while decimals
flowing out would remain decimal. Even this is questionable: what if
the callee was compiled without "use decimal" and it's another window
object's function that expects a double-number?
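Roughly the boundary problem, sketched; the pragma's syntax and conversion rules are exactly the open questions here, not settled semantics:

    function total(prices) {
        "use decimal";   // proposed: literals and operators are decimal
        var sum = 0;     // 0 here would denote 0m
        for (var i = 0; i < prices.length; i++) {
            // prices[] may hold doubles created outside the pragma's
            // scope: convert silently, or throw?
            sum += prices[i];
        }
        return sum;      // flows out as a decimal into double-based code
    }

    total([0.1, 0.2, 0.3]);  // caller compiled without "use decimal"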
On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp <kris at sitepen.com> wrote:
Brendan Eich wrote:
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js?
It and the json.js alternative JS implementation are popular. json2.js contains
String.prototype.toJSON = Number.prototype.toJSON = Boolean.prototype.toJSON = function (key) { return this.valueOf(); };
Of course, there is no decimal support in ES3, there is no other option.
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have Java BigDecimal, so I don't know how it can be relevant.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
I'm pretty sure that interoperability is broken when they do this, it's just very subtle and hard to debug. I have the same stance as Brendan here, I've even refused to implement the capability to directly encode decimal as JSON numbers in my simplejson package (the de facto json for Python). If a user of the library controls both ends of the wire, they can just as easily use strings to represent decimals and work with them exactly how they expect on both ends of the wire regardless of what their JSON implementation happens to do.
Imagine the person at the other end of the wire is using something like JavaScript or PHP. If the message contains decimals as JSON numbers they can not accurately encode or decode those messages unless they write their own custom JSON implementation. How do they even KNOW if the document is supposed to have decimal precision? What if the other end passes too many digits (often the case if one side is actually using doubles)? If they are passed around as strings then everyone can use the document just fine without any compatibility issues. The lack of a de jure number precision and the lack of a date/datetime type are definitely my biggest grievances with the JSON spec.
Kris Zyp wrote:
Only if never compared to a double. How do you prevent this?
We already agree that the decimal-double comparison will always be false.
Not strictly true.
(1m == 1) => true
(1m === 1) => false
It is only fractions with denominators that are not a pure power of two (within the precision of double precision floating point) that will compare unequal. In particular,
(1.75m == 1.75) => true
(1.76m == 1.76) => false
For most people, what that works out to mean is that integers compare equal, but fractions almost never do. It is worth noting that comparing fractions that are the result of computations with double precision floating point for strict equality rarely works out in practice; one typically needs to take into account an "epsilon".
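Concretely (the m literals are proposed syntax; the double-only lines run today):

    // 1.75 = 7/4: the denominator is a power of two, so the double is
    // exact and 1.75m == 1.75 would be true.
    // 1.76 = 44/25: not a power of two, so the double is approximate
    // and 1.76m == 1.76 would be false.

    // The usual epsilon test for computed doubles:
    0.1 + 0.2 == 0.3                       // false
    Math.abs((0.1 + 0.2) - 0.3) < 1e-12    // true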
The point is that this is representative of real world code that benefits more from the treatment of decimals as numbers.
I agree with your overall argument that the real point of JSON is inter-language interoperability, and that when viewed from that perspective, any JSON support that goes into ECMAScript should interpret literals which contain a decimal point as decimal. But that's just an opinion.
At the moment, the present state is that we have discussed at length what typeof(1m) and JSON.parse('[1.1]') should return. And now we are revisiting both without any new evidence.
In the past, I have provided a working implementation, either as a standalone JSON interpreter, as a web service, or integrated into Firefox. I could do so again, and provide multiple versions that differ only in how they deal with typeof and JSON.parse.
But first, we need to collectively decide what empirical tests would help us to make a different conclusion.
- Sam Ruby
Unfortunately, I don't have enough time to continue to point by point discussion. If the group feels typeof 1.1m -> "decimal", then so be
it, we can certainly handle that. My point was to show empirical evidence that could hopefully be considered in the decision process.
As far as JSON goes, Dojo will encode decimals to numbers, there is really no coherent alternative (encoding to strings would be even more bizarre, and I can't think of any other option).
Kris
Brendan Eich wrote:
On Jan 14, 2009, at 9:32 PM, Kris Zyp wrote:
Of course, there is no decimal support in ES3, there is no other option.
This is not strictly true:
code.google.com/p/gwt-math/source/browse/trunk/gwt-math/js_originals/bigdecimal.js
The point is that JSON peers that do math on numbers, to interoperate in general, need to parse and stringify to the same number type. It may be ok if only ints that fit in a double are used by a particular application or widget, but the syntax allows for fraction and exponent, which begs representation-type precision and radix questions.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
It's probably a mix, with application-dependent restrictions on domain and/or computation so that using either double or decimal works, or else buggy lack of such restrictions.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
Not necessarily. But correctness is not a matter of hopes or percentages. It may be fine for JSON to leave it to the app to choose number type and/or operations done on the data. But some layer has to care. Some apps probably already depend on json2.js and json.js and the like (ES3.1's JSON built-in) using double, not decimal. Changing a future JSON codec to use decimal instead of double is not a backward-compatible change.
So you are suggesting that we shouldn't let users pass a mix of decimals and numbers even if they explicitly attempt to do so?
No, I'm suggesting unintended mixed-mode bugs will be common if we make typeof 1.1m == "number".
It's not beside my point. If significantly more real world code will break due to violating the expected invariant of a constant finite set of typeof results (and the expectation that numbers regardless of precision will be typeof -> "number") than due to violating the expected invariant of typeof x == typeof y => (x == y <=> x === y)
We can't measure this, realistically, but again: the breakage from a new typeof result is not dependent on the numeric value of the operand, and entails either a missing case, or a possibly insufficient default case, while the breakage from your proposal is subtly data-dependent.
Plus, the invariant (while not holy writ) is an important property of JS to conserve, all else equal.
then I think we would be negligent as language designers to ignore that consideration.
It's not a consideration if it can't be quantified, and if it introduces value-dependent numeric bugs. Decimal and double are different enough that typeof should tell the truth. 1.1m != 1.1, 1.2m != 1.2, but 1.5m == 1.5.
I understand the logical concerns, but I would love to see real empirical evidence that contradicts my suspicions.
I gave some already, you didn't reply. Here's one, about dojotoolkit/dojo/parser.js:
"But if typeof 1.1m == "number", then str2obj around line 52 might incorrectly call Number on a decimal string literal that does not convert to double (which Number must do, for backward compatibility), ...."
It won't do to assume your proposal saves effort and demand me to prove you wrong. First, no one has access to all the extant typeof x == "number" code to do the analysis and prove the majority of such code would "work" with your proposal. This is akin to proving a negative. Second, I've given evidence based on Dojo that shows incompatibility if typeof 1.1m == "number".
How about we talk about an alternative: "use decimal" as a way to make all literals, operators, and built-ins decimal never double?
The problem with this "big red switch" is that it requires conversion from outside the lexical scope in which the pragma is enabled, since code outside could easily pass double data into functions or variables in the pragma's scope. It requires a decimal-based suite of Math, etc., built-ins too, but that may be ok (it was contemplated for ES4).
The problem with this old idea is really the challenge of ensuring conversion when crossing the pragma's lexical scope boundary. Presumably double numbers going in would convert to decimal, while decimals flowing out would remain decimal. Even this is questionable: what if the callee was compiled without "use decimal" and it's another window object's function that expects a double-number?
/be
Bob Ippolito wrote:
On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp <kris at sitepen.com> wrote:
Brendan Eich wrote:
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js? It and the json.js alternative JS implementation are popular. json2.js contains String.prototype.toJSON = Number.prototype.toJSON = Boolean.prototype.toJSON = function (key) { return this.valueOf(); };
Of course, there is no decimal support in ES3, there is no other option.
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have Java BigDecimal, so I don't know how it can be relevant.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
I'm pretty sure that interoperability is broken when they do this, it's just very subtle and hard to debug. I have the same stance as Brendan here, I've even refused to implement the capability to directly encode decimal as JSON numbers in my simplejson package (the de facto json for Python). If a user of the library controls both ends of the wire, they can just as easily use strings to represent decimals and work with them exactly how they expect on both ends of the wire regardless of what their JSON implementation happens to do.
Imagine the person at the other end of the wire is using something like JavaScript or PHP. If the message contains decimals as JSON numbers they can not accurately encode or decode those messages unless they write their own custom JSON implementation. How do they even KNOW if the document is supposed to have decimal precision? What if the other end passes too many digits (often the case if one side is actually using doubles)? If they are passed around as strings then everyone can use the document just fine without any compatibility issues. The lack of a de jure number precision and the lack of a date/datetime type are definitely my biggest grievances with the JSON spec.
Specifying number representations would be far more grievous in terms of creating tight couplings with JSON data. It is essential that implementations are free to use whatever number representation they desire in order to facilitate loosely coupled interchange.
Kris
On Thu, Jan 15, 2009 at 5:49 AM, Kris Zyp <kris at sitepen.com> wrote:
Bob Ippolito wrote:
On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp <kris at sitepen.com> wrote:
Brendan Eich wrote:
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js? It and the json.js alternative JS implementation are popular. json2.js contains String.prototype.toJSON = Number.prototype.toJSON = Boolean.prototype.toJSON = function (key) { return this.valueOf(); };
Of course, there is no decimal support in ES3, there is no other option.
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have Java BigDecimal, so I don't know how it can be relevant.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
I'm pretty sure that interoperability is broken when they do this, it's just very subtle and hard to debug. I have the same stance as Brendan here, I've even refused to implement the capability to directly encode decimal as JSON numbers in my simplejson package (the de facto json for Python). If a user of the library controls both ends of the wire, they can just as easily use strings to represent decimals and work with them exactly how they expect on both ends of the wire regardless of what their JSON implementation happens to do.
Imagine the person at the other end of the wire is using something like JavaScript or PHP. If the message contains decimals as JSON numbers they can not accurately encode or decode those messages unless they write their own custom JSON implementation. How do they even KNOW if the document is supposed to have decimal precision? What if the other end passes too many digits (often the case if one side is actually using doubles)? If they are passed around as strings then everyone can use the document just fine without any compatibility issues. The lack of a de jure number precision and the lack of a date/datetime type are definitely my biggest grievances with the JSON spec.
Specifying number representations would be far more grievous in terms of creating tight couplings with JSON data. It is essential that implementations are free to use whatever number representation they desire in order to facilitate loosely coupled interchange.
For decimals, I definitely disagree here. In languages that support both float and decimal, it's confusing at best. You can only decode as one or the other, and if you try to do any math afterwards with the wrong type it will explode. In Python's case anyway, you can't even convert a float directly to a decimal without explicitly going through string first. simplejson raises an exception when you try to encode a decimal unless you tell it otherwise; it makes you decide how they should be represented.
In simplejson it's trivial to transcode decimal to float (or string or anything else) during encoding, or to get all numbers back as decimal... but you have to do it explicitly. Loosely coupled doesn't have to mean lossy.
Bob Ippolito wrote:
On Thu, Jan 15, 2009 at 5:49 AM, Kris Zyp <kris at sitepen.com> wrote:
Bob Ippolito wrote:
On Wed, Jan 14, 2009 at 9:32 PM, Kris Zyp <kris at sitepen.com> wrote:
Brendan Eich wrote:
On Jan 14, 2009, at 7:38 PM, Kris Zyp wrote:
You need to change this in any case, since even though the JSON RFC allows arbitrary precision decimal literals, real-world decoders only decode into IEEE doubles. You'd have to encode decimals as strings and decode them using domain-specific (JSON schema based) type knowledge.
No, every Java JSON library I have seen
You've seen www.json.org/json2.js? It and the json.js alternative JS implementation are popular. json2.js contains String.prototype.toJSON = Number.prototype.toJSON = Boolean.prototype.toJSON = function (key) { return this.valueOf(); };
Of course, there is no decimal support in ES3, there is no other option.
parses (at least some, if not all) numbers to Java's BigDecimal.
JSON has nothing to do with Java, and most implementations do not have Java BigDecimal, so I don't know how it can be relevant.
One of the major incentives for JSON is interoperability between languages. If other implementations in other languages treat JSON's number as decimal, then the assertion that I understood you were making, that JSON numbers are universally expected to be treated as binary, is not true.
JSON's numbers are decimal; languages that support decimals agree. Dojo will convert JS decimals to JSON numbers regardless of what path ES-Harmony takes with typeof, whether it requires a code change or not.
That will break interoperability between current implementations that use doubles not decimals.
How so? And how did all the implementations that use decimals to interpret JSON numbers not break interoperability?
I'm pretty sure that interoperability is broken when they do this, it's just very subtle and hard to debug. I have the same stance as Brendan here, I've even refused to implement the capability to directly encode decimal as JSON numbers in my simplejson package (the de facto json for Python). If a user of the library controls both ends of the wire, they can just as easily use strings to represent decimals and work with them exactly how they expect on both ends of the wire regardless of what their JSON implementation happens to do.
Imagine the person at the other end of the wire is using something like JavaScript or PHP. If the message contains decimals as JSON numbers they can not accurately encode or decode those messages unless they write their own custom JSON implementation. How do they even KNOW if the document is supposed to have decimal precision? What if the other end passes too many digits (often the case if one side is actually using doubles)? If they are passed around as strings then everyone can use the document just fine without any compatibility issues. The lack of a de jure number precision and the lack of a date/datetime type are definitely my biggest grievances with the JSON spec.
Specifying number representations would be far more grievous in terms of creating tight couplings with JSON data. It is essential that implementations are free to use whatever number representation they desire in order to facilitate loosely coupled interchange.
For decimals, I definitely disagree here. In languages that support both float and decimal, it's confusing at best. You can only decode as one or the other, and if you try to do any math afterwards with the wrong type it will explode. In Python's case anyway, you can't even convert a float directly to a decimal without explicitly going through string first. simplejson raises an exception when you try to encode a decimal unless you tell it otherwise; it makes you decide how they should be represented.
In simplejson it's trivial to transcode decimal to float (or string or anything else) during encoding, or to get all numbers back as decimal... but you have to do it explicitly. Loosely coupled doesn't have to mean lossy.
-bob
Where is the loss coming from? JSON isn't doing any computations or coercions, and ES would only be experiencing a loss when serializing binary floats to JSON, but not with decimals. Decoders should be allowed to be explicit and have control over how they choose to internally represent the numbers they receive from JSON. Decimals in string format don't change that fact, and are far more confusing. Kris
On Jan 15, 2009, at 9:47 AM, Kris Zyp wrote:
Where is the loss coming from?
Decimal-using peer S1 encodes
{p:1.1, q:2.2}
Double-using peer C1 decodes, adds, and returns
{p:1.1, q:2.2, r:3.3000000000000003}
The sender then checks the result using decimal and finds an error.
Meanwhile the same exchange between S1 and decimal-using peer C2
succeeds without error.
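The C1 half of the exchange is reproducible today:

    var msg = JSON.parse('{"p": 1.1, "q": 2.2}');  // doubles on C1
    msg.r = msg.p + msg.q;                         // 3.3000000000000003
    JSON.stringify(msg);
    // '{"p":1.1,"q":2.2,"r":3.3000000000000003}'
    // A decimal-using peer computing 1.1m + 2.2m would get exactly
    // 3.3m and flag the mismatch.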
Brendan Eich wrote:
On Jan 15, 2009, at 9:47 AM, Kris Zyp wrote:
Where is the loss coming from?
Decimal-using peer S1 encodes
{p:1.1, q:2.2}
Double-using peer C1 decodes, adds, and returns
{p:1.1, q:2.2, r:3.3000000000000003}
The sender then checks the result using decimal and finds an error. Meanwhile the same exchange between S1 and decimal-using peer C2 succeeds without error.
/be
Exactly: C1 introduces the error by its use of binary. The JSON encoding didn't introduce the error. JSON exactly represented the data it was given, and the decimal decoding and encoding peer refrains from introducing errors as well.
Are you arguing that all JSON interaction that relies on other peers introduces errors according to binary floating-point computations? I already disproved that idea by pointing out that there are existing implementations that use decimal and thus don't add such errors.
If you are arguing that certain client-server couples have become dependent on these errors, there are a couple of faults in this logic. First, the error dependencies experienced in languages are almost always going to be internalized by devs. The existence of a number of internal error expectations does not imply a significant number of inter-machine rounding error expectation dependencies. Second, I am not asserting that JSON decoding should automatically convert JSON numbers to binary, only that JSON encoding should serialize decimals to numbers. In your example, if a server is sending these JSON values to a C1 ES-Harmony based peer for computation, the computations will still take place in binary, unless we were to add some mechanism for explicitly specifying how numbers should be decoded.
JSON does not follow the path of other formats that attempt to dictate tight language-type couplings. In all cases, peers can ultimately choose how they want to internally handle the data provided by JSON. JSON is a pure data format, not a computation prescription, and won't dictate how computations are performed.
Kris
On Jan 15, 2009, at 10:46 AM, Kris Zyp wrote:
Exactly, C1 introduces the error by its use of binary.
This is not about blame allocation. The system has a problem because,
even though JSON leaves numeric representation unspecified, higher
layers fail to agree. That could be viewed as a JSON shortcoming, or
it could be the fault of the higher layers. I don't want to debate
which is to blame here right now (more below).
The point is that all the JS self-hosted JSON implementations I've
seen, and (crucially) the ES3.1 native JSON implementation, use
double, not decimal. This constitutes an interoperation hazard and it
constrains compatible future changes to ES-Harmony -- specifically, we
can't switch JSON from double to decimal by default, when decoding
or encoding.
The JSON encoding didn't introduce the error. JSON exactly represented the data it was given,
JSON the RFC is about syntax. It doesn't say more than "An
implementation may set limits on the range of numbers" regarding
semantics of implementations.
Actual implementations that use double and decimal will not
interoperate for all values. That means "not interoperate".
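[Concretely, in Python terms: the same JSON text decoded by a double-using peer and a decimal-using peer yields values that do not even compare equal. A sketch with stdlib json; the payload is illustrative:]

>>> import json
>>> from decimal import Decimal
>>> json.loads('{"p": 1.1}')                       # double-using peer
{'p': 1.1}
>>> json.loads('{"p": 1.1}', parse_float=Decimal)  # decimal-using peer
{'p': Decimal('1.1')}
>>> Decimal('1.1') == 1.1                          # the two peers now hold different values
False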
and the decimal decoding and encoding peer refrains from introducing errors as well.
Assuming no errors is nice. (What color is the sky in your world? :-P)
Meanwhile, back on this planet we were debating the best way to reduce
the likelihood of errors when adding decimal to JS. Back to that debate:
Are you arguing that all JSON interaction relies on peers introducing errors from binary floating-point computations? I already disproved that idea by pointing out that there are existing implementations that use decimal and thus don't add such errors.
You didn't prove or disprove anything, you hand-waved. For all you or
I know, and Bob Ippolito agrees, there are latent, hard-to-find bugs
already out there.
Our bone of contention specifically was making (typeof 1.1m ==
"number"), and I am certain that this increases the likelihood of such
bugs. It cannot reduce the likelihood of such bugs. Whether it means
more work for Dojo and other frameworks that would need to adapt to
the addition of decimal to a future ES spec is secondary, or really
just TANSTAAFL.
If you are arguing that certain client-server pairs have become dependent on these errors, there are a couple of faults in this logic. First, the error dependencies experienced in languages are almost always going to be internalized by devs. The existence of a number of internal error expectations does not imply a significant number of inter-machine rounding-error dependencies.
To the extent I understand what you mean here, I can only disagree --
completely!
You're arguing by assertion that rounding errors due to double's
finite binary precision, which are the most reported JS bug at
bugzilla.mozilla.org, are somehow insignificant when JSON transcoding
is in the mix.
That's a bold assertion.
Second, I am not asserting that JSON decoding should automatically convert JSON numbers to binary, only that JSON encoding should serialize decimals to numbers.
What should JSON.parse use then, if not double (binary)? JSON.parse is
in ES3.1, and decimal is not.
In your example, if a server is sending these JSON values to C1, an ES-Harmony-based peer, for computation, the computations will still take place in binary,
Ok, hold that thought.
unless we were to add some mechanism for explicitly specifying how numbers should be decoded.
Let's say we don't. I showed interoperation failure if two peers, C1
and C2, fail to decode to the same numeric type. You now seem to be
agreeing that binary should be the one type JSON number syntax decodes
into. Great, except you want encoders to stringify decimal literals as
JSON numbers.
This means 1.1m and 1.1 both encode as '1.1' but do not compare as ==
or === (whatever happens with typeof and the typeof/==/===
invariant). It also means
123345678901234567890.1234567890123m
encodes as
'123345678901234567890.1234567890123'
but decodes as binary
123345678901234570000
which breaks round-tripping, which breaks interoperation.
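[The breakage is easy to reproduce with Python doubles standing in for the binary decoder; the literal is the one above:]

>>> from decimal import Decimal
>>> d = Decimal('123345678901234567890.1234567890123')
>>> f = float(d)             # what a double-based decoder ends up holding
>>> f % 1                    # the fractional part does not survive at all
0.0
>>> Decimal(repr(f)) == d    # re-encode and compare: the round trip fails
False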
Am I flogging a dead horse yet?
Long ago in the '80s there was an RPC competition between Sun and
Apollo (defunct Massachusetts-based company, but the RPC approach
ended up in DCE), with both sides attempting to use open specs and
even open source to build alliances. Bob Lyon of Sun argued
eloquently for one standard type system for senders and receivers.
Paul (I forget his last name) of Apollo argued for "receiver makes it
right" to allow the funky non-IEEE-754 floating point formats of the
day to be used at the convenience of the sender. E.g. Cray and DEC VAX
senders would not have to transcode to the IEEE-754 lingua franca,
they could just blat out the bytes in a number in some standard byte
order.
Bob Lyon's rebuttal as I recall it was two-fold: 1. "receiver makes it
right" is really "receiver makes it wrong", because the many types and
conversions are a fertile field for bugs and versionitis problems
among peers. 2. There should indeed be a lingua franca -- you
should be able to put serialized data in a data vault for fifty years
and hope to read it, so long as we're still around, without having to
know which sender sent it, what floating point format that obsolete
sender used, etc.
The two points are distinct. Complexity makes for versionitis and bug
habitat, but lack of a single on-the-wire semantic standard makes
for a Tower of Babel scenario, which means data loss. Fat, buggy, and
lossy are no way to go through college!
See www.kohala.com/start/papers.others/rpc.comments.txt for more.
We are not condemned to repeat history if we pay attention to what
went before. JSON implementations in future ES specs cannot by default
switch either encoding or decoding to use decimal instead of number.
JSON does not follow the path of other formats that attempt to dictate tight language-type couplings. In all cases, peers can ultimately choose how they want to internally handle the data provided by JSON. JSON is a pure data format, not a computation prescription, and won't dictate how computations are performed.
Yeah, yeah -- I know that and said as much. The argument is not about
JSON as a syntax spec, it's about what we do in the implementation in
ES3.1, where we have to make semantic choices. Including types and
typeof, including future proofing.
Brendan Eich wrote:
The point is that all the JS self-hosted JSON implementations I've seen, and (crucially) the ES3.1 native JSON implementation, use double, not decimal. This constitutes an interoperation hazard and it constrains compatible future changes to ES-Harmony -- specifically, we can't switch JSON from double to decimal by default, when decoding or encoding.
How do you switch to double or decimal by default on encoding? The input defines it, not any default setting.
The JSON encoding didn't introduce the error. JSON exactly represented the data it was given,
JSON the RFC is about syntax. It doesn't say more than "An implementation may set limits on the range of numbers" regarding semantics of implementations.
Actual implementations that use double and decimal will not interoperate for all values. That means "not interoperate".
and the decimal decoding and encoding peer refrains from introducing errors as well.
Assuming no errors is nice. (What color is the sky in your world? :-P) Meanwhile, back on this planet we were debating the best way to reduce the likelihood of errors when adding decimal to JS. Back to that debate:
3.3 is exactly the sum of 1.1 and 2.2 without errors, as decimal math produces here in the blue-sky world (I was going off your example). Utah may have unusually blue skies, though; it is the desert :).
Are you arguing that all JSON interaction relies on peers introducing errors from binary floating-point computations? I already disproved that idea by pointing out that there are existing implementations that use decimal and thus don't add such errors.
You didn't prove or disprove anything, you hand-waved. For all you or I know, and Bob Ippolito agrees, there are latent, hard-to-find bugs already out there.
You are saying there are latent, hard-to-find bugs because people believe that JSON somehow implies that the sum of {"p":1.1, "q":2.2} must be 3.3000000000000003? If people are returning 3.3, then the argument that JSON numbers are universally treated computed as binary is not valid. Is there a less hand-wavy way of stating that?
Our bone of contention specifically was making (typeof 1.1m == "number"), and I am certain that this increases the likelihood of such bugs. It cannot reduce the likelihood of such bugs. Whether it means more work for Dojo and other frameworks that would need to adapt to the addition of decimal to a future ES spec is secondary, or really just TANSTAAFL.
I thought JSON serialization and typeof results could be considered separate issues.
If you are arguing that certain client-server pairs have become dependent on these errors, there are a couple of faults in this logic. First, the error dependencies experienced in languages are almost always going to be internalized by devs. The existence of a number of internal error expectations does not imply a significant number of inter-machine rounding-error dependencies.
To the extent I understand what you mean here, I can only disagree -- completely!
You're arguing by assertion that rounding errors due to double's finite binary precision, which are the most reported JS bug at bugzilla.mozilla.org, are somehow insignificant when JSON transcoding is in the mix. That's a bold assertion.
The issue here is relying on another machine to do a computation. I have trouble believing that all these people that are experiencing rounding errors are then using these client-side computations for their server. The compensation for rounding errors that we are concerned about is usually going to be kept as close to the error as possible. Why would you build a client-server infrastructure around it?
Second, I am not asserting that JSON decoding should automatically convert JSON numbers to binary, only that JSON encoding should serialize decimals to numbers.
What should JSON.parse use then, if not double (binary)? JSON.parse is in ES3.1, and decimal is not.
It should use double. I presume that if a "use decimal" pragma or a switch was available, it might parse to decimal, but the default would be double, I would think.
In your example, if a server is sending these JSON values to C1, an ES-Harmony-based peer, for computation, the computations will still take place in binary,
Ok, hold that thought.
unless we were to add some mechanism for explicitly specifying how numbers should be decoded.
Let's say we don't. I showed interoperation failure if two peers, C1 and C2, fail to decode to the same numeric type. You now seem to be agreeing that binary should be the one type JSON number syntax decodes into. Great, except you want encoders to stringify decimal literals as JSON numbers.
This means 1.1m and 1.1 both encode as '1.1' but do not compare as == or === (whatever happens with typeof and the typeof/==/=== invariant). It also means
123345678901234567890.1234567890123m
encodes as
'123345678901234567890.1234567890123'
but decodes as binary
123345678901234570000
which breaks round-tripping, which breaks interoperation.
JSON doesn't round-trip JS, and it never will. There are plenty of exceptions. I presume that if a receiver had a "use decimal" pragma they could count as opt-in to parsing numbers into decimal and then you could round-trip decimals, but only if the sender was properly encoding decimals as JSON numbers (decimals). Encoding the decimals as strings is far worse.
Long ago in the '80s there was an RPC competition between Sun and Apollo (defunct Massachusetts-based company, but the RPC approach ended up in DCE), with both sides attempting to use open specs and even open source to build alliances. Bob Lyon of Sun argued eloquently for one standard type system for senders and receivers. Paul (I forget his last name) of Apollo argued for "receiver makes it right" to allow the funky non-IEEE-754 floating point formats of the day to be used at the convenience of the sender. E.g. Cray and DEC VAX senders would not have to transcode to the IEEE-754 lingua franca, they could just blat out the bytes in a number in some standard byte order.
Bob Lyon's rebuttal as I recall it was two-fold: 1. "receiver makes it right" is really "receiver makes it wrong", because the many types and conversions are a fertile field for bugs and versionitis problems among peers. 2. There should indeed be a lingua franca -- you should be able to put serialized data in a data vault for fifty years and hope to read it, so long as we're still around, without having to know which sender sent it, what floating point format that obsolete sender used, etc.
The two points are distinct. Complexity makes for versionitis and bug habitat, but lack of a single on-the-wire semantic standard makes for a Tower of Babel scenario, which means data loss. Fat, buggy, and lossy are no way to go through college!
See www.kohala.com/start/papers.others/rpc.comments.txt for more.
We are not condemned to repeat history if we pay attention to what went before. JSON implementations in future ES specs cannot by default switch either encoding or decoding to use decimal instead of number.
The decimal number has been around much longer than the computer. Are you saying that a particular language type has more permanence?
Kris
On Thu, Jan 15, 2009 at 12:32 PM, Kris Zyp <kris at sitepen.com> wrote:
3.3 is exactly the sum of 1.1 and 2.2 without errors, as decimal math produces here in the blue-sky world (I was going off your example).
Depending on the algorithm that a double-using client side uses to print floats as decimal, they may not even be able to retain a decimal number even without doing any math operations.
>>> simplejson.dumps(simplejson.loads('{"num": 3.3}'))
'{"num": 3.2999999999999998}'
simplejson uses the repr() representation for encoding floats. I forget the exact inputs for which it is wrong, but Python's str() representation for float does not round-trip properly all of the time on all platforms.
On Thu, Jan 15, 2009 at 12:44 PM, Bob Ippolito <bob at redivi.com> wrote:
I was able to dig up a specific example that is reproducible here on Mac OS X with Python 2.5.2:
>>> float(str(1617161771.7650001)) == 1617161771.7650001
False
Firefox 3.0.5 does a better job here:
parseFloat((1617161771.7650001).toString()) === 1617161771.7650001
true
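[The distinction Bob is drawing, sketched on a current Python, where repr() produces an exact rendering and round-trips by construction; the %.12g rendering below is a stand-in for the old lossy platform str():]

>>> x = 1617161771.7650001
>>> float(repr(x)) == x      # an exact rendering round-trips
True
>>> float('%.12g' % x) == x  # a truncated rendering does not
False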
On Jan 15, 2009, at 12:32 PM, Kris Zyp wrote:
we can't switch JSON from double to decimal by default, when decoding or encoding. How do you switch to double or decimal by default on encoding? The input defines it, not any default setting.
A JSON encoder in a current self-hosted or native ES3.1 implementation
sees a number (binary double precision) and encodes it using JSON
number syntax. If we add decimal, you want the encoder to stringify a
decimal value as a JSON number too. That is a choice -- a design
decision (and a mistake :-/).
The alternatives are to stringify or to throw, requiring a custom (non-default) hook to be used as Bob's simplejson package allows.
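[Roughly what throw-by-default with an explicit opt-in hook looks like, using Python's stdlib json, where default= is the non-default hook; default=str is the caller's explicit choice to stringify:]

>>> import json
>>> from decimal import Decimal
>>> json.dumps({'price': Decimal('1.99')})               # raises TypeError by default
>>> json.dumps({'price': Decimal('1.99')}, default=str)  # explicit opt-in: encode as a string
'{"price": "1.99"}'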
3.3 is exactly the sum of 1.1 and 2.2 without errors, as decimal math produces
None of these numbers is exactly representable using binary finite
precision. Depending on the JSON codec implementation, they may not
even round trip to string and back -- see Bob's reply.
You are saying there are latent, hard-to-find bugs because people believe that JSON somehow implies that the sum of {"p":1.1, "q":2.2} must be 3.3000000000000003?
I never wrote any such thing.
Please look at the previous messages again. If C1 uses double to
decode JSON from S1 but C2 uses decimal, then results can differ
unexpectedly (from S1's point of view). Likewise, encoding decimal
using JSON's number syntax also breaks interoperation with
implementations using double to decode and (re-)encode.
If people are returning 3.3, then the argument that JSON numbers are universally treated computed as binary is not valid. Is there a less hand-wavy way of stating that?
I don't know what "treated computed as binary" means, even if I delete
one of "treated" and "computed". JSON number syntax may be encoded
from decimals by some implementations, and decoded into decimals too.
This is not interoperable with implementations that transcode using
doubles. Period, full stop.
I thought JSON serialization and typeof results could be considered separate issues.
You brought up Dojo code examples including Dojo's JSON codec as
evidence that defining typeof 1.1m == "number" would relieve you of
having to change that codec while at the same time preserving
correctness. I replied showing that the preserving correctness claim
in that case is false, and the relief from having to evolve the codec
was an obligation.
We then talked more about JSON than typeof, but the two are related:
in both JSON implementations and your proposed typeof 1.1m ==
"number" && typeof 1.1 == "number" world, incompatible number formats
are conflated. This is a mistake.
You're arguing by assertion that rounding errors due to double's finite binary precision, which are the most reported JS bug at bugzilla.mozilla.org, are somehow insignificant when JSON transcoding is in the mix. That's a bold assertion. The issue here is relying on another machine to do a computation. I have trouble believing that all these people that are experiencing rounding errors are then using these client-side computations for their server.
Please. No one wrote "all these people". We're talking about subtle
latent and future bugs, likelihoods of such bugs (vs. ruling them out
by not conflating incompatible number types). Correctness is not a
matter of wishful thinking or alleged "good enough" current-code
behavior.
The compensation for rounding errors that we are concerned about is usually going to be kept as close to the error as possible. Why would you build a client-server infrastructure around it?
People do financial stuff in JS. No medical equipment or rocket
control yet, AFAIK (I could be wrong). I'm told Google Finance uses
integral double values to count pennies. It would not be surprising if
JSON transcoding were already interposed between parts of such a
system. And it should be possible to do so, of course -- one can
always encode bignums or bigdecimals in strings.
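[Both escape hatches are already expressible today; a sketch in Python, with illustrative field names:]

>>> import json
>>> from decimal import Decimal
>>> json.dumps({'pennies': 199})                 # integral values count pennies exactly
'{"pennies": 199}'
>>> json.dumps({'price': str(Decimal('1.99'))})  # or carry the decimal in a string
'{"price": "1.99"}'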
What's at issue between us is whether the default encoding of decimal
should use JSON's number syntax. If someone builds client-server
infrastructure, uses JSON in the middle, and switches from double
today to decimal tomorrow, what can go wrong if we follow your
proposal and encode decimals using JSON number syntax? Assume the JSON
is not in the middle of a closed network where one entity controls the
version and quality of all peer software. We can't assume otherwise in
the standard.
What should JSON.parse use then, if not double (binary)? JSON.parse is in ES3.1, and decimal is not. It should use double. I presume that if a "use decimal" pragma or a switch was available, it might parse to decimal, but the default would be double, I would think.
Good, we agreed on decoding to double already but it's great to
confirm this.
which breaks round-tripping, which breaks interoperation. JSON doesn't round-trip JS, and it never will.
That's a complete straw man. Yes, NaN and the infinities won't round
trip. But number syntax in JSON per the RFC, in combination with
correct, Steele-and-Gay-conformant dtoa and strtod code
(ftp://ftp.ccs.neu.edu/pub/people/will/retrospective.pdf), can indeed
round-trip finite values. This should be reliable.
I presume that if a receiver had a "use decimal" pragma they could count as opt-in to parsing numbers into decimal and then you could round-trip decimals, but only if the sender was properly encoding decimals as JSON numbers (decimals).
Yeah, only if. Receiver makes it wrong. Nothing in the over-the-wire
data requires the receiver to use decimal, or fail if it lacks decimal
support.
You wrote in your last message:
I am not asserting that JSON decoding should automatically convert
JSON numbers to binary, only that JSON encoding should serialize
decimals to numbers.
This is likely to create real bugs in the face of decoders that lack
decimal. There is zero likelihood of such bugs if we don't
incompatibly encode decimal using JSON number syntax when adding
decimal to a future standard.
Encoding the decimals as strings is far worse.
Throwing by default -- in the absence of explicit opt-in by the
encoder-user -- is far better.
It's still up to the producer of the data to worry about tagging the
decimal type of the JSON number, or hoping that the data stays in its
silo where it's implicitly typed as decimal. But we won't have
accidents where implicitly -- by default -- decimal users encode JSON
data that is incorrectly decoded.
We are not condemned to repeat history if we pay attention to what went before. JSON implementations in future ES specs cannot by default switch either encoding or decoding to use decimal instead of number. The decimal number has been around much longer than the computer. Are you saying that a particular language type has more permanence?
I think you know exactly what I'm saying. One (lingua franca, French
as the common diplomatic language) particular format is better than
many. And we are stuck with double today. So we cannot start encoding
decimals as JSON numbers tomorrow. Please stop ducking and weaving and
address this head on. If you really endorse "receiver makes it right",
give a spirited and explicit defense.
Brendan Eich wrote:
We then talked more about JSON than typeof, but the two are related: in both JSON implementations and your proposed typeof 1.1m == "number" && typeof 1.1 == "number" world, incompatible number formats are conflated. This is a mistake.
Yes, they are related, but we can make separate decisions on them. I am more ambivalent on typeof 1.1m than on what seems to me to be a more obvious mistake: throwing on JSON serialization of decimals.
If you really endorse "receiver makes it right", give a spirited and explicit defense.
JSON already encodes in decimal. Do you want a defense of how a receiver should make right what is already right? We can't argue about whether JSON should have a more explicit type system. JSON is frozen.
Are we really making any progress on these point-by-point arguments? If so, I could continue the point-by-point discussion, but it seems like they are drifting away from the main issue. All decimal use is opt-in anyway; there is no breakage for existing code when the VM is upgraded. So the main issue is what will be most reasonable and sensible for users who have explicitly opted to use decimals. If a user writes JSON.stringify({price:2m - 0.01m}), it seems by far most intuitive that they want it to serialize to {"price":1.99}. That JSON has a decimal number format and we wouldn't encode ES's decimal with that format just seems bizarre to me, even after reading all the arguments.
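[Later releases of simplejson added exactly this behavior behind an explicit use_decimal flag; a sketch, assuming such a release -- the flag is simplejson's, not the stdlib's:]

>>> from decimal import Decimal
>>> price = Decimal('2') - Decimal('0.01')  # decimal math: exact
>>> price
Decimal('1.99')
>>> import simplejson
>>> simplejson.dumps({'price': price}, use_decimal=True)  # serialize as a JSON number
'{"price": 1.99}'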
Thanks, Kris
On Jan 15, 2009, at 3:05 PM, Kris Zyp wrote:
I am more ambivalent on typeof 1.1m than on what seems to me to be a more obvious mistake: throwing on JSON serialization of decimals.
Good to hear. Ambivalence should not be a stable state, though. If we
can get to typeof agreement, let's do so. It seems to me ambivalence
should mean "no", not "yes".
[hundreds of cited lines deleted -- please trim when replying. /be]
If you really endorse "receiver makes it right", give a spirited and explicit defense. JSON already encodes in decimal.
No, you are assuming your conclusion again. "JSON already encodes in
decimal" meaning "JSON encoding writes base ten floating point with
optional exponent numbers" does not employ the same meaning of the
word "decimal" as the decimal type contemplated for Harmony. The
latter has semantics in addition to syntax. JSON's RFC says almost
nothing about semantics.
Do you want a defense of how a receiver should make right what is already right? We can't argue about whether JSON should have a more explicit type system. JSON is frozen.
Sorry, seems to me you are ducking again. Please address the
incompatible change: self-hosted codecs and the native ES3.1 codec
encode only doubles using JSON number syntax, vs. your proposal where,
with decimal added to the language, we encode both decimal and double
as JSON number syntax.
All decimal use is opt-in anyway, there is no breakage for existing code when the VM is upgraded.
What do you mean by "opt-in"?
If JSON peer Alice starts using decimal and encoding financial data
sent to existing, double-based peer Bob, does Alice lose money? If no,
show how. If yes, then show why such a bug is not a fatal flaw in your
proposal.
Sam Ruby wrote:
For the time being, concentrating on other things that will be in ES3.1. That was the main point of removing Decimal, no?