Out-of-range decimal literals

# Igor Bukanov (17 years ago)

Should ECMAScript allow decimal literals that cannot be represented as a 128-bit decimal? That is, should the following literals give a syntax error?

1.000000000000000000000000000000000000000000000000001m
1e1000000000m

IMO allowing such literals would just give another source of errors.

, Igor

# Sam Ruby (17 years ago)

On Thu, Sep 18, 2008 at 9:10 AM, Igor Bukanov <igor at mir2.org> wrote:

Should ECMAScript allow decimal literals that cannot be represented as a 128-bit decimal? That is, should the following literals give a syntax error?

1.000000000000000000000000000000000000000000000000001m
1e1000000000m

IMO allowing such literals would just give another source of errors.

Languages have "personalities", and people build up expectations based on these characteristics. As much as possible, I'd like to suggest that ECMAScript be internally consistent, and not have one independent choice (binary vs decimal) have unexpected implications over another (signaling vs quiet operations).

As a tangent, both binary 64 and decimal 128 floating point provide "exact" results for a number of operations; they simply do so for different domains of numbers. For example, 2**-52 can be represented exactly in binary 64 floating point, but not in decimal 128. It is only because source literals are naturally written in decimal that such values tend to produce inexact but correctly rounded values in binary 64, and exact values in decimal 128 without any need for rounding.
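The two exactness domains can be sketched with Python's `decimal` module, which follows the same IEEE 754 decimal semantics; the 34-digit precision below is an assumption standing in for decimal 128, and is not tied to the SpiderMonkey branch discussed here:

```python
from decimal import Decimal, localcontext

# 2**-52 is exact in binary 64, but its decimal expansion needs 37
# significant digits -- more than decimal 128's 34 -- so it rounds there.
exact = Decimal(2.0 ** -52)        # exact conversion from binary 64
with localcontext() as ctx:
    ctx.prec = 34                  # decimal128 significand digits
    rounded = +exact               # unary plus applies context rounding
print(rounded == exact)            # False: inexact in decimal 128

# Conversely, the literal 0.1 is exact in decimal but not in binary 64:
# Decimal("0.1") is exact, Decimal(0.1) carries the binary approximation.
print(Decimal("0.1") == Decimal(0.1))   # False
```

The asymmetry is entirely about which base the value is written in: each format is exact for sums of powers of its own radix within its precision.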

As to your specific question, here's a few results from my branch of SpiderMonkey:

js> 1.000000000000000000000000000000000000000000000000001
1
js> 1e1000000000
Infinity
js> 1.000000000000000000000000000000000000000000000000001m
1.000000000000000000000000000000000
js> 1e1000000000m
Infinity


# Sam Ruby (17 years ago)

On Thu, Sep 18, 2008 at 9:49 AM, Sam Ruby <rubys at intertwingly.net> wrote:

On Thu, Sep 18, 2008 at 9:10 AM, Igor Bukanov <igor at mir2.org> wrote:

Should ECMAScript allow decimal literals that cannot be represented as a 128-bit decimal? That is, should the following literals give a syntax error?

1.000000000000000000000000000000000000000000000000001m
1e1000000000m

IMO allowing such literals would just give another source of errors.

Languages have "personalities", and people build up expectations based on these characteristics. As much as possible, I'd like to suggest that ECMAScript be internally consistent, and not have one independent choice (binary vs decimal) have unexpected implications over another (signaling vs quiet operations).

Thinking about it further, rejecting literals whose expressed precision exceeds what the underlying data type can support might be something to consider under "use strict", particularly if applied to both binary and decimal floating-point quantities.
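Such a strict-mode check could be sketched as follows, again using Python's `decimal` module; the function name and the use of context flags are illustrative assumptions, not proposed ES semantics:

```python
from decimal import Decimal, localcontext, Inexact, Overflow

def fits_decimal128(literal: str) -> bool:
    """Would this decimal literal convert to decimal 128 exactly?

    Hypothetical sketch of the strict-mode rejection discussed above:
    accept only literals whose value survives the conversion unchanged.
    """
    with localcontext() as ctx:
        ctx.prec = 34              # decimal128 significand digits
        ctx.Emax = 6144            # decimal128 exponent range
        ctx.Emin = -6143
        ctx.traps[Inexact] = False
        ctx.traps[Overflow] = False
        ctx.clear_flags()
        _ = +Decimal(literal)      # Decimal(str) is exact; '+' rounds
        return not (ctx.flags[Inexact] or ctx.flags[Overflow])

print(fits_decimal128("1.5"))           # True: representable exactly
print(fits_decimal128(
    "1.000000000000000000000000000000000000000000000000001"))  # False
print(fits_decimal128("1e1000000000"))  # False: exponent overflows
```

Under this sketch, a strict-mode parser would raise a syntax error whenever the check returns False, while sloppy mode would keep the quiet rounding shown in the transcript above.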

As a tangent, both binary 64 and decimal 128 floating point provide "exact" results for a number of operations; they simply do so for different domains of numbers. For example, 2**-52 can be represented exactly in binary 64 floating point, but not in decimal 128. It is only because source literals are naturally written in decimal that such values tend to produce inexact but correctly rounded values in binary 64, and exact values in decimal 128 without any need for rounding.

As to your specific question, here's a few results from my branch of SpiderMonkey:

js> 1.000000000000000000000000000000000000000000000000001
1
js> 1e1000000000
Infinity
js> 1.000000000000000000000000000000000000000000000000001m
1.000000000000000000000000000000000
js> 1e1000000000m
Infinity


# Igor Bukanov (17 years ago)

2008/9/18 Sam Ruby <rubys at intertwingly.net>:

Languages have "personalities", and people build up expectations based on these characteristics. As much as possible, I'd like to suggest that ECMAScript be internally consistent, and not have one independent choice (binary vs decimal) have unexpected implications over another (signaling vs quiet operations).

Decimal literals introduce a new feature, a type suffix on a number, to explicitly specify the literal's type. This may well not be the last suffix added to ES. If, for example, there were a suffix denoting a 32-bit int, it would be very strange to allow writing 5000000000i. Another possibility is support for exact binary floating-point literals. Here again, rounding out-of-range literals would look strange, since the programmer would have written the number that way on purpose to get an exact representation.
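The int32 example can be made concrete; the wrap-around value below assumes two's-complement truncation, which is what a silently-converting `i` suffix would presumably produce, rather than anything specified for ES:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

literal = 5000000000                 # does not fit in 32 bits
print(literal > INT32_MAX)           # True

# Two's-complement truncation to 32 bits, then sign extension:
wrapped = literal & 0xFFFFFFFF
if wrapped > INT32_MAX:
    wrapped -= 2**32
print(wrapped)                       # 705032704 -- silently wrong
```

A `5000000000i` literal would therefore have to either be a syntax error or quietly denote 705032704, which is the kind of surprise the syntax-error position is trying to rule out.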

# Mike Cowlishaw (17 years ago)

Languages have "personalities", and people build up expectations based on these characteristics. As much as possible, I'd like to suggest that ECMAScript be internally consistent, and not have one independent choice (binary vs decimal) have unexpected implications over another (signaling vs quiet operations).

Decimal literals introduce a new feature, a type suffix on a number, to explicitly specify the literal's type. This may well not be the last suffix added to ES. If, for example, there were a suffix denoting a 32-bit int, it would be very strange to allow writing 5000000000i. Another possibility is support for exact binary floating-point literals. Here again, rounding out-of-range literals would look strange, since the programmer would have written the number that way on purpose to get an exact representation.

For exact binary floating-point literals there probably isn't a problem, as one would presumably use the hexadecimal format from IEEE 754 (the same as in C) -- and it is fairly clearly an error if too many bits are specified for the significand. However, I suppose longer significands could be allowed and rounded -- although that is very much against the concept of exact literals.
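Python exposes the same C99-style hexadecimal floating-point notation through `float.hex` and `float.fromhex`, which shows why such literals can stay exact:

```python
# The significand is written in hex and the exponent is a power of
# two, so a binary 64 value round-trips with no decimal rounding.
x = float.fromhex("0x1.0p-52")
print(x == 2.0 ** -52)                    # True

# binary 64 carries 53 significand bits (1 implicit + 13 hex digits);
# a literal specifying more bits than that cannot be held exactly.
print((2.0 ** -52).hex())                 # 0x1.0000000000000p-52
```

In this notation "too many bits" is directly visible as too many hex digits in the significand, which is what makes the reject-versus-round question easy to pose for binary literals.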

Mike

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

# Waldemar Horwat (17 years ago)

Mike Cowlishaw wrote:

Languages have "personalities", and people build up expectations based on these characteristics. As much as possible, I'd like to suggest that ECMAScript be internally consistent, and not have one independent choice (binary vs decimal) have unexpected implications over another (signaling vs quiet operations).

Decimal literals introduce a new feature, a type suffix on a number, to explicitly specify the literal's type. This may well not be the last suffix added to ES. If, for example, there were a suffix denoting a 32-bit int, it would be very strange to allow writing 5000000000i. Another possibility is support for exact binary floating-point literals. Here again, rounding out-of-range literals would look strange, since the programmer would have written the number that way on purpose to get an exact representation.

For exact binary floating-point literals there probably isn't a problem, as one would presumably use the hexadecimal format from IEEE 754 (the same as in C) -- and it is fairly clearly an error if too many bits are specified for the significand. However, I suppose longer significands could be allowed and rounded -- although that is very much against the concept of exact literals.

Speaking of which, that format is required in IEEE 754r. What would our implementation of it be if we upgrade our reference from 754 to 754r?

Waldemar