NumberFormat maxSignificantDigits Limit

# Ethan Resnick (5 years ago)

Apologies if es-discuss is the wrong venue for this; I've tried first poring through the specs and asking online to no avail.

My question is: why is the limit for the maximumSignificantDigits option in the NumberFormat API set at 21? This seems rather arbitrary, and especially odd to me given that, iiuc, all Numbers in JS, as 64-bit floats, can only encode up to 17 significant decimal digits. Is this some sort of weird historical artifact or something? Should the rationale be documented anywhere?
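
For concreteness, here's the limit in question (a quick sketch; I'm assuming a spec-conforming engine, which rejects out-of-range values with a RangeError):

```js
// maximumSignificantDigits accepts 1 through 21; anything higher throws.
new Intl.NumberFormat('en', { maximumSignificantDigits: 21 }); // OK
new Intl.NumberFormat('en', { maximumSignificantDigits: 22 }); // RangeError
```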

Thanks!

Ethan

# Anders Rundgren (5 years ago)

On 2019-01-20 20:18, Ethan Resnick wrote:

Hi,

Apologies if es-discuss is the wrong venue for this; I've tried first poring through the specs and asking online to no avail.

My question is: why is the limit for the maximumSignificantDigits option in the NumberFormat API set at 21? This seems rather arbitrary, and especially odd to me given that, iiuc, all Numbers in JS, as 64-bit floats, can only encode up to 17 significant decimal digits. Is this some sort of weird historical artifact or something? Should the rationale be documented anywhere?

I don't know for sure, but if you input this number in a browser debugger it will indeed respond with the same 21 [sort of] significant digits: 999999999999999900000
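
A quick way to reproduce this (a sketch; the exact digits are as my debugger printed them above):

```js
// Number-to-string stays in plain decimal notation below 1e21, so up to
// 21 integer digits can appear before the switch to exponential form.
console.log(String(999999999999999900000)); // "999999999999999900000" (21 digits)
console.log(String(1e21));                  // "1e+21"
```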

rgds, Anders

# Isiah Meadows (5 years ago)

I feel this is probably best asked at tc39/ecma402, since it seems to imply a potential spec bug.


Isiah Meadows contact at isiahmeadows.com, www.isiahmeadows.com

# Logan Smyth (5 years ago)

It does seem unclear why the limit is 21. Is that maybe the most digits you'd need to uniquely stringify any double value?

can only encode up to 17 significant decimal digits

That's true for decimal values, but the limit of 21 would also include the fractional portion of the double value, so you'd need more than 17, I think?

# Ethan Resnick (5 years ago)

if you input this in a browser debugger it will indeed respond with the same 21 [sort of] significant digits

999999999999999900000

I'm pretty sure the 0s don't count as significant digits (www.wikiwand.com/en/Significant_figures) and, with floating point numbers, it makes sense that they wouldn't.

I feel this is probably best asked at tc39/ecma402, since it seems to imply a potential spec bug.

Although my question was framed in terms of NumberFormat, I don't actually think this is Ecma 402-specific. Specifically, I believe the limit started with, or at least also applies to, the Number.prototype.toPrecision API from Ecma 262 (www.ecma-international.org/ecma-262/6.0/#sec-number.prototype.toprecision), where it is equally unexplained.

That's true for decimal values, but the limit of 21 would also include the fractional portion of the double value, so you'd need more than 17, I think?

My understanding of floating point encoding is that 17 digits will also cover the fractional portion. The only case I can think of where 17 digits might not be enough is if the number system is not base 10; e.g., a base 6 number system would presumably require more digits. But I don't see any such number systems as output options in the NumberFormat API, and such localization concerns don't really explain the limit in N.p.toPrecision linked above, which is definitely dealing with base 10.
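
A small round-trip check along these lines (just a sketch, not a proof; 0.1 + 0.2 is simply a convenient value whose fractional part needs all 17 digits):

```js
// 17 significant digits are enough to round-trip a double, fraction included.
const x = 0.1 + 0.2;          // stored as roughly 0.30000000000000004
const s = x.toPrecision(17);  // "0.30000000000000004"
console.log(Number(s) === x); // true: no information was lost
```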

# Anders Rundgren (5 years ago)

On 2019-01-21 06:54, Ethan Resnick wrote:

if you input this in a browser debugger it will indeed respond with the same 21 [sort of] significant digits

999999999999999900000

I'm pretty sure the 0s don't count as significant digits (www.wikiwand.com/en/Significant_figures) and, with floating point numbers, it makes sense that they wouldn't.

Well, if you remove the trailing 0s you get an entirely different number. That's pretty significant. Note that this is the default ES serialization as well.

That's probably the origin of the 21-digit thing.

It MUST NOT be changed, because then an upcoming IETF standard would break as well: tools.ietf.org/html/draft-rundgren-json-canonicalization-scheme-02#section-3.2.2.3

Anders

# Carsten Bormann (5 years ago)

On Jan 21, 2019, at 07:03, Anders Rundgren <anders.rundgren.net at gmail.com> wrote:

coming IETF standard

We don’t know that yet.

Grüße, Carsten

# Anders Rundgren (5 years ago)

This limit seems a bit strange though:

console.log(new Intl.NumberFormat('en', { maximumFractionDigits: 20 }).format(-0.0000033333333333333333));

Result: -0.00000333333333333333

That's actually two digits fewer than the default ES serialization produces. "maximumFractionDigits" is limited to 20.
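
For comparison (a sketch; the digits are as in my example above, assuming a current engine):

```js
// The default serialization keeps all 17 significant digits...
console.log(String(-0.0000033333333333333333));
// "-0.0000033333333333333333" (17 significant digits)
// ...while maximumFractionDigits: 20 cuts the same value to 15 significant
// digits, because 5 of the 20 fraction digits are leading zeros.
```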

Anders

# Anders Rundgren (5 years ago)

On 2019-01-21 07:14, Carsten Bormann wrote:

On Jan 21, 2019, at 07:03, Anders Rundgren <anders.rundgren.net at gmail.com> wrote:

coming IETF standard

We don’t know that yet.

Right, but it looks rather promising, in addition to having been verified to work on multiple platforms.

The days of the Base64Url tyranny are numbered :-)

BTW, where is the XML-RFC to Kramdown converter I asked about?

Grüße, Carsten

Likewise, Anders

# Ethan Resnick (5 years ago)

Well, if you remove the trailing 0s you get an entirely different number. That's pretty significant. Note that this is the default ES serialization as well.

This makes no sense to me. Yes, removing trailing 0s, and therefore changing the magnitude of the number, changes its meaning. But significant digits are about capturing precision, not magnitude.

Let's make this concrete:

The number 134449999999999984510435328 happens to have an exact floating point representation. However, because that number is larger than the max safe integer, many other integers are best approximated by the same floating point value. 134449999999999980000000000 is one such number.

So, if you do:

134449999999999984510435328..toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false })

and

134449999999999980000000000..toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false })

you actually get the same output in each case, which makes sense, because both numbers are represented by the same floating point behind the scenes.
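
A sketch of that collapse:

```js
// Both literals round to the same IEEE 754 double, so any formatting API
// necessarily produces identical output for them.
const a = 134449999999999984510435328;
const b = 134449999999999980000000000;
console.log(a === b); // true
```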

Now, it seems like the serialization logic in toLocaleString (or toPrecision) has two options.

First, it could assume that the number it's serializing started life as a decimal and got converted to the nearest floating point, in which case the serialization code doesn't know the original intended number. In this case, its best bet is probably to output 0s in those places where the original decimal digits are unknown (i.e., for all digits beyond the precision that was stored). This is actually what toLocaleString does; i.e., all digits after the 17th are 0, because 64-bit floating points can only store 17 decimal digits of precision. This is where my original question came in, though: if a float can only encode 17 digits of precision, why would the maximumSignificantDigits be capped at 21? It seems like the values 18–21 are all just equivalent to 17.

The other option is that the serialization code could assume that the number stored in the float is exactly the number the user intended (rather than a best approximation of some other decimal number). This is actually what toPrecision does. I.e., if you call toPrecision(21) on either of the numbers given above, you get 21 non-zero digits, matching the first 21 digits of the underlying float value: "1.34449999999999984510e+26". But, again, the limit of 21 seems odd here too. Because, if you're going to assume the float represents exactly the intended number, why not be willing to output all 27 significant digits in the decimal above? Or more than 27 digits for the decimal representation of bigger floats?
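
Side by side (outputs as reported above; a sketch, and the Intl output could in principle vary by engine):

```js
const n = 134449999999999984510435328;

// Option one (toLocaleString): digits beyond the stored precision are zeros.
console.log(n.toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false }));
// "134449999999999980000000000"

// Option two (toPrecision): the stored double is treated as exact.
console.log(n.toPrecision(21));
// "1.34449999999999984510e+26"
```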

In other words, it seems like maximumSignificantDigits should either be capped at 17 (the real precision of the underlying float) or at 309 (the length of the decimal representation of the largest float). But neither of those are 21, hence my original question...
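
(For reference, the 309 comes from the largest double:)

```js
// Number.MAX_VALUE is about 1.7976931348623157e+308, so its full decimal
// expansion runs to 309 digits before the decimal point.
console.log(Number.MAX_VALUE.toExponential()); // "1.7976931348623157e+308"
```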

# Anders Rundgren (5 years ago)

On 2019-01-24 04:45, Ethan Resnick wrote:

Well, if you remove the trailing 0s you get an entirely different number. That's pretty significant. Note that this is the default ES serialization as well.

This makes no sense to me. Yes, removing trailing 0s, and therefore changing the magnitude of the number, changes its meaning. But significant digits are about capturing precision, not magnitude.

Hi Ethan, I'm neither the designer of this API nor have I looked at the implementations, but I guess that 21 comes from how the number serializer works without locale settings.

Let's make this concrete:

The number 134449999999999984510435328 happens to have an exact floating point representation. However, because that number is larger than the max safe integer, many other integers are best approximated by the same floating point value. 134449999999999980000000000 is one such number.

So, if you do:

134449999999999984510435328..toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false })

and

134449999999999980000000000..toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false })

you actually get the same output in each case, which makes sense, because both numbers are represented by the same floating point behind the scenes.

Right, the ES number serializer doesn't take these edge cases into consideration.

Now, it seems like the serialization logic in toLocaleString (or toPrecision) has two options.

First, it could assume that the number it's serializing started life as a decimal and got converted to the nearest floating point, in which case the serialization code doesn't know the original intended number. In this case, its best bet is probably to output 0s in those places where the original decimal digits are unknown (i.e., for all digits beyond the precision that was stored). This is actually what toLocaleString does; i.e., all digits after the 17th are 0, because 64-bit floating points can only store 17 decimal digits of precision. This is where my original question came in, though: if a float can only encode 17 digits of precision, why would the maximumSignificantDigits be capped at 21? It seems like the values 18–21 are all just equivalent to 17.

The other option is that the serialization code could assume that the number stored in the float is exactly the number the user intended (rather than a best approximation of some other decimal number). This is actually what toPrecision does. I.e., if you call toPrecision(21) on either of the numbers given above, you get 21 non-zero digits, matching the first 21 digits of the underlying float value: "1.34449999999999984510e+26". But, again, the limit of 21 seems odd here too. Because, if you're going to assume the float represents exactly the intended number, why not be willing to output all 27 significant digits in the decimal above? Or more than 27 digits for the decimal representation of bigger floats?

Did you try (1.34449999999999984510e+250).toLocaleString('en', { maximumSignificantDigits: 21, useGrouping: false })? In Chrome I actually got 250 digits!

My conclusion is that the internationalization API wasn't designed for "scientific" work.

It was probably created for displaying "normal" numbers, whatever that means :-)

Anders

# Isiah Meadows (5 years ago)

For all here interested, you might want to follow this Twitter conversation I started. My theory is that it's a subtle spec bug: a number was copied over instead of the formula being recalculated.

twitter.com/isiahmeadows1/status/1088517449488744448


Isiah Meadows contact at isiahmeadows.com, www.isiahmeadows.com

# Ranando King (5 years ago)

There's another possibility. C++ was the language of choice back then, and it had a type "long double", an 80-bit extended-precision type meant to match Intel's implementation of IEEE 754. This number format can be safely and reversibly converted back and forth with 21 significant digits.
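
A back-of-the-envelope check of that digit count (the usual round-trip bound for a p-bit significand is ceil(p * log10(2)) + 1 decimal digits; I'm assuming that's the figure the 21 reflects):

```js
// Decimal digits needed to round-trip a binary float with a p-bit significand.
const roundTripDigits = (p) => Math.ceil(p * Math.log10(2)) + 1;
console.log(roundTripDigits(64)); // 21 (x87 80-bit extended: 64-bit significand)
console.log(roundTripDigits(53)); // 17 (IEEE 754 double, for comparison)
```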

# Ranando King (5 years ago)

Here's a Wikipedia link, for what it's worth.

en.wikipedia.org/wiki/Extended_precision

Look at the "Working range" section.

# Waldemar Horwat (5 years ago)

There is precedent for using numbers around 20 for significant-digit cutoffs in the spec. For example, if you look at how number tokens are parsed in the spec (§11.8.3.1), the implementation has the option to ignore significant digits after the 20th. That's not a bug; we did that intentionally in the early days of ECMAScript as a compromise between always requiring exact precision and cutting off after the 17th digit, which would get a significant fraction of values wrong.

 Waldemar