Consistent decimal semantics

# Waldemar Horwat (16 years ago)

We're going round in circles on a few issues surrounding Decimal. Some of these have clear resolutions; others have a little wiggle room. Here's my summary:

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as 3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

  • Should == and === distinguish cohort members?

The only reasonable answer is no, which is consistent with the existing treatment of -0 and +0. Otherwise you'd break integer arithmetic: 1e12m - 1e12m === 0m would be false.
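Today's Number semantics already work this way for the -0/+0 cohort. A sketch runnable in any current engine (`Object.is`, used here only for contrast, postdates this thread):

```javascript
// == and === deliberately ignore the +0 / -0 distinction, so integer
// arithmetic identities keep holding.
console.log(-0 === 0);             // true: cohort members compare equal
console.log(1e12 - 1e12 === 0);    // true: exact integer arithmetic
// The distinction still exists and is observable by other means:
console.log(Object.is(-0, 0));     // false
console.log(1 / -0 === -Infinity); // true: the sign of zero survives
```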

  • What should cross-type == do on 5 == 5m?

Here there are a couple sensible choices: It could treat all decimal values as different from all Number values, or it could convert both values to a common type (decimal because it's wider) and compare.

My inclination would be the latter, which would make 5 == 5m true. Note that, unless we choose option 2b below, 5.1 == 5.1m would be false because 5.1 is equal to 5.099999999999999644728632119949907m.
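The claim about 5.1 can be checked in any engine today by printing more digits than toString's shortest form:

```javascript
// The literal 5.1 denotes the nearest binary64 value, which sits slightly
// below 5.1 on the real number line.
console.log((5.1).toString());  // "5.1" -- the shortest round-tripping form
console.log((5.1).toFixed(20)); // the longer expansion, "5.0999999999999996..."
console.log(5.1 === 5.0999999999999996447286321199499); // true -- same double
```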

  • What should cross-type === do on 5 === 5m?

These objects are of different types, so it should return false.

  • How should mixed type arithmetic work in general?

There are a few consistent design alternatives:

  1. Always disallow it. For consistency you'd have to disallow == between Number and decimal as well.
  2. Allow it, converting to the wider type (Decimal128). There are a couple design choices when doing the conversion:
     2a. Convert per the IEEE P754 spec: 5.1 turns into 5.099999999999999644728632119949907m. This is how most programming languages operate (C++, Java, Lisp, etc.) when converting among the built-in floating point values (float -> double etc.).
     2b. Convert the Number to a string and then to a Decimal: 5.1 turns into 5.1m. As a special case, -0 would turn into -0m. This might work, but I haven't thought through the implications of this one.
  3. Allow it, converting to the narrower type (double). This would break transitivity: 5.00000000000000001m != 5m, but they're both == to 5, so it's a non-starter.

  • Should trailing zeroes be printed when doing toString on decimal values?

No. If you print while distinguishing cohort members then you'll be required to print -0m as "-0" (which we don't do for Numbers), 1m/.1m as "1e1", and will get nasty results when using decimal numbers as array indices (barring introducing yet further complications into the language). Furthermore, were you to do an implementation with a "big red switch" that turns all numbers into decimal, everything would break because toString would print excess digits when printing even simple, unrounded values such as 1.00.
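The array-index point can be seen with plain Numbers today: indices are canonical property-name strings, so any toString that preserved exponents or trailing zeros would stop matching them (decimal literals like 1m remain hypothetical):

```javascript
// Array elements live under canonical string keys.
const a = [];
a[10] = "ten";
console.log(a["10"]);   // "ten" -- 10 stringifies to the canonical "10"
console.log(a["1e1"]);  // undefined -- an exponent form misses the element
console.log(a["10.0"]); // undefined -- so would a trailing-zero form
```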

  • Should + concatenate decimal numbers?

No.

  • How many double NaNs should we have?

Edition 3 says there's only one, but IEEE P754's totalOrder now distinguishes between NaN and -NaN as well as different NaNs depending on how they got created. Depending on the implementation, this is a potential compatibility problem and an undesirable way for implementations to diverge.

  • How many decimal NaNs should we have?

Presumably as many as we have double NaNs....

Waldemar
# Brendan Eich (16 years ago)

On Aug 25, 2008, at 4:41 PM, Waldemar Horwat wrote:

We're going round in circles on a few issues surrounding Decimal.

It seems to me people have reached the conclusion you already did on
===. I'm not sure anyone advocated otherwise (I've missed most 3.1
calls lately, so can't be sure).

Some of these have clear resolutions; others have a little wiggle
room. Here's my summary:

Thanks, the FAQ format is helpful. I agree with your answers, but I
have a couple of questions.

  • Should decimal values behave as objects (pure library
    implementation) or as primitives?

If they behave as objects, then we'd get into situations such as
3m != 3m in some cases and 3m == 3m in other cases. Also, -0m !=
0m would be necessary. This is clearly unworkable.

What should be the result of (typeof 3m)?

[snip]

  • What should cross-type === do on 5 === 5m?

These objects are of different types, so it should return false.

Confusion: 5 is of "number" type (a primitive type, typeof agrees),
so "These objects" should have been "These values" -- right?

Sam and I, not going in circles (much), agree that typeof 3m should
be "object", and that we add hardwired operator support for such
Decimal objects.

  • How should mixed type arithmetic work in general?

There are a few consistent design alternatives:

  1. Always disallow it. For consistency you'd have to disallow ==
    between Number and decimal as well.

This was what Igor Bukanov and I were advocating last week.

  2. Allow it, converting to the wider type (Decimal128). There are
     a couple design choices when doing the conversion:

     2a. Convert per the IEEE P754 spec: 5.1 turns into
     5.099999999999999644728632119949907m. This is how most programming
     languages operate (C++, Java, Lisp, etc.) when converting among the
     built-in floating point values (float -> double etc.).

(I'm asking anyone who thinks so, as well as asking for your
opinion:) Better than alternative 1? If so, why?

2b. Convert the Number to a string and then to a Decimal: 5.1
turns into 5.1m. As a special case, -0 would turn into -0m. This
might work, but I haven't thought through the implications of this
one.

Interesting idea.

# Waldemar Horwat (16 years ago)

Brendan Eich wrote:

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as 3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

What should be the result of (typeof 3m)?

"decimal". It should not be "object" because it doesn't behave like other objects!

  1. Always disallow it. For consistency you'd have to disallow == between Number and decimal as well.

This was what Igor Bukanov and I were advocating last week.

  2. Allow it, converting to the wider type (Decimal128). There are a couple design choices when doing the conversion:
     2a. Convert per the IEEE P754 spec: 5.1 turns into 5.099999999999999644728632119949907m. This is how most programming languages operate (C++, Java, Lisp, etc.) when converting among the built-in floating point values (float -> double etc.).

(I'm asking anyone who thinks so, as well as asking for your opinion:) Better than alternative 1? If so, why?

2b. Convert the Number to a string and then to a Decimal: 5.1 turns into 5.1m. As a special case, -0 would turn into -0m. This might work, but I haven't thought through the implications of this one.

Interesting idea.

Yes, I feel that either 2a or 2b would be better than alternative 1. Suppose you wanted to write a for-loop that works on both doubles and decimals:

This would presumably work: for (i = low; i < high; i++)

But if you wanted to go up by two, this wouldn't: for (i = low; i < high; i += 2)

Here (and in most other use cases) you're adding an integer, so rounding and such don't enter the picture. Similarly, generically checking whether

low < 0

would be annoying under option 1 if you didn't know whether low was a double or a decimal.

Not doing option 2a or 2b would make it hard to write generic methods; it would mean that you could use decimals to index arrays but not pass them to Array.prototype methods (unless we made those special as well), etc.
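Years later, BigInt took essentially alternative 1, and the friction Waldemar predicts is visible there (BigInt postdates this thread and is shown purely for comparison):

```javascript
// BigInt disallows mixed +, but allows mixed == and relational operators:
console.log(5n == 5);    // true: == converts across types
console.log(5n === 5);   // false: different types
try {
  console.log(1n + 2);   // mixing BigInt and Number in + throws
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
console.log(0n < 1);     // true: < converts, so generic loops still work
```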

I'm off to Burning Man! I'll continue this discussion next week.

Waldemar
# Brendan Eich (16 years ago)

On Aug 25, 2008, at 5:25 PM, Waldemar Horwat wrote:

Brendan Eich wrote:

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as
3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

What should be the result of (typeof 3m)?

"decimal". It should not be "object" because it doesn't behave
like other objects!

Ok, good (not "number"), but still a scary thing to contemplate if
your code switches on typeof and believes it handles all cases based
on ES1-3.

Not doing option 2a or 2b would make it hard to write generic
methods, it would mean that you could use decimals to index arrays
but not pass them to Array.prototype methods (unless we made those
special as well), etc.

Good point. Alternative 1 is really only for decimal-specific code.
Generic arithmetic wins if we can agree on 2a or 2b.

I'm off to Burning Man! I'll continue this discussion next week.

Have fun!

# Sam Ruby (16 years ago)

Waldemar Horwat wrote:

We're going round in circles on a few issues surrounding Decimal. Some of these have clear resolutions; others have a little wiggle room. Here's my summary:

I don't believe that we are going around in circles. I've got a tangible implementation that is tracking to the agreements. I published a list of results recently[1]. I've also published my code if you care to check it out[2].

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as 3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

That would indeed be unworkable -- if you can suggest a case where it would actually happen. A tangible scenario would help.

  • Should == and === distinguish cohort members?

The only reasonable answer is no, and is consistent with existing treatment of -0 and +0. Otherwise you'd break integer arithmetic: 1e12m - 1e12m === 0m would be false.

While I disagree with your reasoning, I agree with your conclusion.

Given that IEEE defines a total ordering, and === is purportedly "strict" it would neither be unreasonable nor broken to define strictly equal in such a manner.

That being said, ES "strict" equals is not strict today, and there are other usability and theoretical and compatibility criteria involved; not all of them pointing in the same direction.

I'm comfortable with a definition of === where 1e12m - 1e12m === 0m is true.

  • What should cross-type == do on 5 == 5m?

Here there are a couple sensible choices: It could treat all decimal values as different from all Number values, or it could convert both values to a common type (decimal because it's wider) and compare.

Decimal is wider, but with a caveat: every number expressible as a binary64 maps to a unique decimal128 value and will round-trip back to the same binary64 number[3], yet the decimal128 value a binary64 number maps to does not denote exactly the same point on the real number line as the original binary64 number[4].

My inclination would be the latter, which would make 5 == 5m true. Note that, unless we choose option 2b below, 5.1 == 5.1m would be false because 5.1 is equal to 5.099999999999999644728632119949907m.

Agree that 5 == 5m is true, and that 5.1 == 5.1m is false.

  • What should cross-type === do on 5 === 5m?

These objects are of different types, so it should return false.

Agreed.

  • How should mixed type arithmetic work in general?

There are a few consistent design alternatives:

  1. Always disallow it. For consistency you'd have to disallow == between Number and decimal as well.
  2. Allow it, converting to the wider type (Decimal128). There are a couple design choices when doing the conversion:
     2a. Convert per the IEEE P754 spec: 5.1 turns into 5.099999999999999644728632119949907m. This is how most programming languages operate (C++, Java, Lisp, etc.) when converting among the built-in floating point values (float -> double etc.).
     2b. Convert the Number to a string and then to a Decimal: 5.1 turns into 5.1m. As a special case, -0 would turn into -0m. This might work, but I haven't thought through the implications of this one.
  3. Allow it, converting to the narrower type (double). This would break transitivity: 5.00000000000000001m != 5m, but they're both == to 5, so it's a non-starter.

I think that 1 has negative usability characteristics that would rule it out.

2a is how I have currently implemented this function.

While 2b works for some values, it quickly degrades. What should 1.2 - 1.1 + 5m produce? Being consistent is probably better, as it will help people who care to track down cases where mixed-mode operations that they didn't intend to be mixed mode occur. Finally, having a toDecimal(n) that mirrors the existing toFixed(n) function would give people a way to contain the cascading of unnecessary precision.
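The toFixed analogy can be seen with binary doubles today; a hypothetical toDecimal(n) would play the same containment role for unwanted decimal128 digits:

```javascript
// Mixed binary arithmetic leaks representation error; toFixed lets the
// caller clamp the result to an intended scale.
console.log(1.2 - 1.1);              // 0.09999999999999987
console.log((1.2 - 1.1).toFixed(2)); // "0.10"
```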

  • Should trailing zeroes be printed when doing toString on decimal values?

No. If you print while distinguishing cohort members then you'll be required to print -0m as "-0" (which we don't do for Numbers), 1m/.1m as "1e1", and will get nasty results when using decimal numbers as array indices (barring introducing yet further complications into the language). Furthermore, were you to do an implementation with a "big red switch" that turns all numbers into decimal, everything would break because toString would print excess digits when printing even simple, unrounded values such as 1.00.

You are mixing a number of different topics.

I simply disagree with your "No" when it comes to trailing zeros.

  • Should + concatenate decimal numbers?

No.

Agreed. Note that + should continue to act as a concatenation if the other operand is a string or a non-decimal object.

Furthermore 3.1m+true should be 4.1m.

  • How many double NaNs should we have?

Edition 3 says there's only one, but IEEE P754's totalOrder now distinguishes between NaN and -NaN as well as different NaNs depending on how they got created. Depending on the implementation, this is a potential compatibility problem and an undesirable way for implementations to diverge.

There are multiple layers to this.

At the physical layer, even binary64 provides the ability to have 2**52 different positive NaNs, and an equal number of negative NaNs.

At a conceptual layer, ES is described as having only one NaN. This layer is of the least consequence, and most easily changed.

At an operational layer, the operations defined in ES4 are (largely?) unable to detect these differences. For backwards compatibility reasons, it would be undesirable to change any of these existing interfaces, unless there were a really, really, really compelling reason to do so.

Overall, as long as we don't violate the constraints presented by the physical and existing operational layers, we may be able to introduce new interfaces (such as Object.identity) that are able to distinguish things that were not previously distinguishable.
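The "physical layer" point can be made concrete with typed arrays (which postdate this thread; the byte layout below assumes the usual little-endian platform):

```javascript
// Construct two NaNs with different payload bits and observe that the
// language's operators cannot tell them apart.
const bytes = new Uint8Array([0, 0, 0, 0, 0, 0, 0xf8, 0x7f]); // a quiet NaN
const nan1 = new Float64Array(bytes.buffer)[0];
bytes[0] = 1;                                  // change a payload bit
const nan2 = new Float64Array(bytes.buffer)[0];
console.log(Number.isNaN(nan1), Number.isNaN(nan2)); // true true
console.log(nan1 !== nan1); // true: NaN never equals anything, payload aside
```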

  • How many decimal NaNs should we have?

Presumably as many as we have double NaNs....

Actually, there are about 10**46 - 2**52 more.

Waldemar

  • Sam Ruby

[1] esdiscuss/2008-August/007231 [2] code.intertwingly.net/public/hg/js-decimal [3] speleotrove.com/decimal/decifaq6.html#binapprox [4] speleotrove.com/decimal/decifaq6.html#bindigits

# Igor Bukanov (16 years ago)

2008/8/26 Brendan Eich <brendan at mozilla.org>:

Sam and I, not going in circles (much), agree that typeof 3m should be "object", and that we add hardwired operator support for such Decimal objects.

This would be very similar to QName and Namespace from E4X. typeof returns "object" for both but there is special support for value semantic in == and != operators for them so

new Namespace("url") == new Namespace("url")

gives true. Also E4X leaves it up to the implementation to decide what should be the result of new Namespace("url") === new Namespace("url").

, Igor

# P T Withington (16 years ago)

On 2008-08-25, at 19:41EDT, Waldemar Horwat wrote:

  • Should decimal values behave as objects (pure library
    implementation) or as primitives?

If they behave as objects, then we'd get into situations such as 3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m
would be necessary. This is clearly unworkable.

In Dylan this conundrum is dealt with either by interning (which could
be done in JS by returning an existing object from the constructor),
or by overriding == (which is easy in Dylan because == is an open
multi-method that can be extended by libraries; not yet in JS).

www.opendylan.org/books/dpg/db_346.html
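The interning idea can be sketched with JavaScript as it existed then: a constructor may return an existing object, so identity-based == behaves like value equality for interned values (`Interned` is a hypothetical stand-in for a library Decimal):

```javascript
// A constructor can return a cached instance, so == and === (identity on
// objects) behave like value equality for interned values.
var pool = {};
function Interned(value) {
  var key = String(value);
  if (pool.hasOwnProperty(key)) return pool[key]; // returning an object overrides `this`
  this.value = value;
  pool[key] = this;
}
var a = new Interned(3);
var b = new Interned(3);
console.log(a === b); // true: both names refer to the same interned object
```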

# Brendan Eich (16 years ago)

On Aug 25, 2008, at 5:25 PM, Waldemar Horwat wrote:

Brendan Eich wrote:

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as
3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

What should be the result of (typeof 3m)?

"decimal". It should not be "object" because it doesn't behave
like other objects!

Clearly, you are correct (but you knew that ;-).

Specifically,

  1. typeof x == "object" && !x => x == null.

  2. typeof x == typeof y => x == y <=> x === y

  3. 1.1m != 1.1 && 1.1m !== 1.1 (IEEE P754 mandates that binary floats
    be convertible to decimal floats and that the result of the
    conversion of 1.1 to decimal be 1.100000000000000088817841970012523m)

Therefore typeof 1.1m != "object" by 1, or else 0m could be mistaken
for null by existing code.

And typeof 1.1m != "number" by 3, given 2.

This leaves no choice but to add "decimal".
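Invariant (1) can be spot-checked against today's values: the only falsy value whose typeof is "object" is null (host oddities like document.all aside):

```javascript
// For every existing value, typeof x == "object" && !x implies x == null.
const samples = [null, undefined, 0, -0, NaN, "", "0", false, true, {}, [], function () {}];
for (const x of samples) {
  if (typeof x === "object" && !x) {
    console.log(x === null); // reached only for null; prints true
  }
}
```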

# Sam Ruby (16 years ago)

Brendan Eich wrote:

On Aug 25, 2008, at 5:25 PM, Waldemar Horwat wrote:

Brendan Eich wrote:

  • Should decimal values behave as objects (pure library implementation) or as primitives?

If they behave as objects, then we'd get into situations such as
3m != 3m in some cases and 3m == 3m in other cases. Also, -0m != 0m would be necessary. This is clearly unworkable.

What should be the result of (typeof 3m)?

"decimal". It should not be "object" because it doesn't behave
like other objects!

Clearly, you are correct (but you knew that ;-).

Specifically,

  1. typeof x == "object" && !x => x == null.

  2. typeof x == typeof y => x == y <=> x === y

  3. 1.1m != 1.1 && 1.1m !== 1.1 (IEEE P754 mandates that binary floats
    be convertible to decimal floats and that the result of the
    conversion of 1.1 to decimal be 1.100000000000000088817841970012523m)

Therefore typeof 1.1m != "object" by 1, or else 0m could be mistaken
for null by existing code.

And typeof 1.1m != "number" by 3, given 2.

This leaves no choice but to add "decimal".

:-)

You act as if logic applies here. (And, yes, I'm being facetious.) If that were true, then a === b => 1/a === 1/b. But that's not the case with ES3, when a=0 and b=-0.

1.1 and 1.1m, despite appearing quite similar actually identify two (marginally) different points on the real number line. This does not cause an issue with #2 above, any more than substituting the full equivalents for these constants causes an issue.

Similarly, the only apparent issue with #1 is the assumption that !(new Decimal(0)) is true. But !(new Number(0)) is false, so there is no reason that the same logic couldn't apply to Decimal.
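The Number-wrapper behavior appealed to here is easy to confirm:

```javascript
// Primitive zero is falsy, but its wrapper object is truthy -- all ordinary
// objects are truthy in ECMAScript.
console.log(!0);              // true
console.log(!new Number(0));  // false
console.log(!"");             // true
console.log(!new String("")); // false
```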


From where I sit, there are at least three potentially logically consistent systems we can pick from.

We clearly want 0 == 0m to be true. Let's get that out of the way.

Do we want 0 === 0m to be true? If so, typeof(0m) should be "number". Otherwise, it should be something else, probably "object" or "decimal".

If typeof(0m) is "number", then !0m should be true. If typeof(0m) is "object", then !0m should be false. If typeof(0m) is "decimal", then we are free to decide what !0m should be.

My preference is for typeof(0m) to be "decimal" and for !0m to be true. But that is only a preference. I could live with typeof(0m) being "object" and !0m being false. I'm somewhat less comfortable with typeof(0m) being "number", only because it implies that methods made available to Decimal need to be made available to Number, and at some point some change to an existing class, no matter how apparently innocuous, will end up breaking something important.

Yes, in theory code today could be depending on typeof returning a closed set of values. Is such an assumption warranted? Actually that question may not be as important as whether anything of value depends on this assumption.

If we are timid (or responsible, either way, it works out to be the same), then we should say no new values for typeof should ever be minted, and that means that all new data types are of type object, and none of them can ever have a value that is treated as false.

If we are bold (or foolhardy), then we should create new potential results for the typeof operator early and often.

  • Sam Ruby
# Brendan Eich (16 years ago)

On Sep 3, 2008, at 9:16 PM, Sam Ruby wrote:

You act as if logic applies here. (And, yes, I'm being
facetious). If that were true, then a === b => 1/a === 1/b. But
that's not the case with ES3, when a=0 and b=-0.

I didn't say "for all" -- the particular hard cases are 0m converting to true, in light of real code that assumes typeof x == "object" && !x => x is null; and 1.1m == 1.1 or other power-of-five problems. No universal quantification here -- particular problems, counterexamples.

1.1 and 1.1m, despite appearing quite similar actually identify two
(marginally) different points on the real number line. This does
not cause an issue with #2 above, any more than substituting the
full equivalents for these constants causes an issue.

P754 says otherwise (3). (3) in light of (2) says typeof 1.1m !=
"number". Assume otherwise:

typeof 1.1m == "number"
typeof 1.1 == "number"
1.1m == 1.1 && 1.1m === 1.1

but this contradicts (3) -- P754, the holy writ :-P.

Similarly, the only apparent issue with #1 is the assumption that !(new Decimal(0)) is true. But !(new Number(0)) is false, so there is no reason that the same logic couldn't apply to Decimal.

You are seriously proposing that !0m is false? Interesting.

If typeof(0m) is "number", then !0m should be true. If typeof(0m) is "object", then !0m should be false. If typeof(0m) is "decimal", then we are free to decide what !0m
should be.

My preference is for typeof(0m) to be "decimal" and for !0m to be
true.

That's my preference now too, but based on more than aesthetics.

Yes, in theory code today could be depending on typeof returning a
closed set of values. Is such an assumption warranted? Actually
that question may not be as important as whether anything of value
depends on this assumption.

Agreed, this is the hard case that will lose in court.

If we are timid (or responsible, either way, it works out to be the
same), then we should say no new values for typeof should ever be
minted, and that means that all new data types are of type object,
and none of them can ever have a value that is treated as false.

That's too timid, I agree.

If we are bold (or foolhardy), then we should create new potential
results for the typeof operator early and often.

I think we have to.

If !0m is false then generic code that works over falsy values will
fail on decimal, or when number-typed values are promoted or migrated
to decimal. This seems worse to me than the closed-set-of-typeof-codes violation. Not just based on real code that does generic falsy
testing via 'if (!x) ...', but because that is the right minimal way
to test falsy-ness. Ok, that was an aesthetic judgment in part, but
it's also based on minimizing terms and generalizing generic code fully.

Anyway, my logic exercise was not meant to be a proposition from
Wittgenstein, but I found it persuasive. Probably Waldemar has a
trump card to throw in favor of "decimal" as the new typeof code.

# Ingvar von Schoultz (16 years ago)

Sam Ruby skrev:

If we are timid (or responsible, either way, it works out to be the same), then we should say no new values for typeof should ever be minted, and that means that all new data types are of type object, and none of them can ever have a value that is treated as false.

If we are bold (or foolhardy), then we should create new potential results for the typeof operator early and often.

I think there's a false assumption at work here. It assumes that typeof returning "decimal" will break more programs than returning "object". The opposite is just as likely, in my view.

If my program can cope with all types, then I'll code for that /explicitly/. My switch-case will end with a default that deals with any type. If I don't write it that way, it's because my code can't cope with it that way.

In that case I might end my switch-case with a default that throws an error intended for the programmer, to notify me and help me upgrade. This is defeated if the language assumes that my program can cope with numbers claiming to be "object".

Either way, some programs will work fine with it and some will fail. When this happens I think the best approach is to ask "What is the most useful in the long run, in decades to come?"

In that perspective I find (typeof 3m) returning "object" strange and inconsistent, and "decimal" very acceptable.

# Sam Ruby (16 years ago)

Brendan Eich wrote:

On Sep 3, 2008, at 9:16 PM, Sam Ruby wrote:

If typeof(0m) is "number", then !0m should be true. If typeof(0m) is "object", then !0m should be false. If typeof(0m) is "decimal", then we are free to decide what !0m should be.

My preference is for typeof(0m) to be "decimal" and for !0m to be true.

That's my preference now too, but based on more than aesthetics.

Does that mean that the following need to be revisited?

intertwingly.net/stories/2008/08/29/estest.html#s11.4.3, intertwingly.net/stories/2008/08/29/estest.html#s11.8.6

In particular, does that imply the need for a wrapper class? First, here's existing behavior:

js> 1 instanceof Number
false
js> Number(1) instanceof Number
false
js> new Number(1) instanceof Number
true

And those results correspond to:

js> typeof 0
number
js> typeof Number(1)
number
js> typeof new Number(1)
object

So, what should the following return (where I've filled in the few cases where I think the answer is obvious):

js> 1m instanceof Decimal
???
js> Decimal(1m) instanceof Decimal
???
js> new Decimal(1m) instanceof Decimal
true

And the corresponding typeof results:

js> typeof 1m
decimal
js> typeof Decimal(1m)
decimal
js> typeof new Decimal(1)
???

Whatever the consensus is, I'll update my SpiderMonkey branch to match and then will post the updated test results.

  • Sam Ruby
# Mark S. Miller (16 years ago)

On Thu, Sep 4, 2008 at 8:30 AM, Sam Ruby <rubys at intertwingly.net> wrote:

In particular, does that imply the need for a wrapper class?

I think so. Given the new behaviors we all seem to have agreed on (typeof 1m === 'decimal' && !0m === true), decimals act like their own new primitive type. We do the least new damage to the language by having this work as parallel as possible to the existing primitive types. This means a corresponding new wrapper type. So I'll fill in my preferences at your ???s below.

First, here's existing behavior:

js> 1 instanceof Number
false
js> Number(1) instanceof Number
false
js> new Number(1) instanceof Number
true

And those results correspond to:

js> typeof 0
number
js> typeof Number(1)
number
js> typeof new Number(1)
object

So, what should the following return (where I've filled in the few cases where I think the answer is obvious):

js> 1m instanceof Decimal ???

false

js> Decimal(1m) instanceof Decimal ???

false

js> new Decimal(1m) instanceof Decimal true

And the corresponding typeof results:

js> typeof 1m
decimal
js> typeof Decimal(1m)
decimal
js> typeof new Decimal(1) ???

object

Whatever the consensus is, I'll update my SpiderMonkey branch to match and then will post the updated test results.

Thanks.

# Brendan Eich (16 years ago)

On Sep 4, 2008, at 8:30 AM, Sam Ruby wrote:

In particular, does that imply the need for a wrapper class?

Yeah, if you want !0m == true then we are back to the future:

  • There's a decimal primitive type, autowrapped when used as an
    object by Decimal.
  • The decimal primitive acts like a value type.
  • Decimal wrapper instances are mutable objects manipulated by
    reference.
  • Decimal.prototype contains methods, etc.
  • Calling Decimal(x) as a function converts a la Number.

This was an outcome Lars Hansen explored earlier this year or late
last year, when we were trying to align ES4 decimal (primitive only,
shares Number.prototype) with ES3.1 Decimal.

Sorry I steered you down the "Decimal is an object" route. As you
argued, that plan is tenable if we can live with !0m == false, but it
seems unusably confusing and fatally frustrating to generic
programming (if (!falsy) ...).

# Sam Ruby (16 years ago)

2008/9/4 Brendan Eich <brendan at mozilla.org>:

On Sep 4, 2008, at 8:30 AM, Sam Ruby wrote:

In particular, does that imply the need for a wrapper class?

Yeah, if you want !0m == true then we are back to the future:

There's a reason why my weblog is called Intertwingly :-)

  • There's a decimal primitive type, autowrapped when used as an object by Decimal.
  • The decimal primitive acts like a value type.
  • Decimal wrapper instances are mutable objects manipulated by reference.
  • Decimal.prototype contains methods, etc.
  • Calling Decimal(x) as a function converts a la Number.

This was an outcome Lars Hansen explored earlier this year or late last year, when we were trying to align ES4 decimal (primitive only, shares Number.prototype) with ES3.1 Decimal.

Sorry I steered you down the "Decimal is an object" route. As you argued, that plan is tenable if we can live with !0m == false, but it seems unusably confusing and fatally frustrating to generic programming (if (!falsy) ...).

Much good came out of the exercise. Decimal primitives won't share Number.prototype. I believe that any new type this committee may contemplate in the future will bump up against this same constraint: namely, if that type has any values which should be considered false, then it too will need to provide both a primitive and a wrapper.

The end result is internally consistent, and simple. Number and Decimal are different types, but conversions (that are both precise and safely round trip) are applied when the two are mixed.
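The "precise and safely round trip" property can be checked today for the string route through doubles, which is what makes a 2b-style conversion coherent; -0 is the one wrinkle Waldemar flagged:

```javascript
// Every finite binary64 value survives a trip through its shortest decimal
// string form and back, as judged by ===:
const xs = [5.1, 0.1, 1 / 3, Number.MAX_VALUE, 5e-324];
for (const x of xs) console.log(Number(String(x)) === x); // true each time
// -0 stringifies without its sign, which is why a 2b-style conversion
// needs -0 as an explicit special case:
console.log(String(-0));                        // "0"
console.log(Object.is(Number(String(-0)), -0)); // false -- the sign is lost
```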

The amount of changes to the following are minimal:

intertwingly.net/stories/2008/08/29/estest.html

I will, however, be adding a few more tests. Specifically, I want to ensure that Decimal mimics the existing behavior for Number with respect to the following:

js> Number(1) === Number(1)
true
js> Number(1) === new Number(1)
false

/be

  • Sam Ruby