Brendan Eich (2013-07-15T05:39:49.000Z)
Mark S. Miller wrote:
> First, I wholeheartedly agree. JS is increasingly being used as a 
> target of compilation. When I ask people doing so what their biggest 
> pain point is, the lack of support for integers is often the first 
> thing mentioned.

int64/uint64 come up faster when compiling from C/C++. I've prototyped 
these in SpiderMonkey in a patch that I'm still rebasing, but planning 
to land pretty soon:

https://bugzilla.mozilla.org/show_bug.cgi?id=749786

js> i = 0L
0L
js> u = 0UL
0UL
js> i = ~i
-1L
js> u = ~u
18446744073709551615UL
js> (i>>>1)
9223372036854775807L
js> (i>>>1)==(u>>1)
true
js> (i>>>1)===(u>>1)
false
js> i === int64(-1)
true

> That said, there are the usual open issues we'll need to settle, most 
> of which aren't even raised by this strawman.

I've of course had to address these, and posted about it in the past -- 
recap below.

> Most important is the same as one of the objections raised about 
> decimal: If we treat each individual new primitive type as a one-off 
> special case, we're digging ourselves a deeper abstraction mechanism 
> hole. We first need something like value proxies, so we understand how 
> unprivileged libraries would define new numeric types -- complex, 
> rational, matrix.

Don't forget ratplex ;-).

Seriously, we would like to support composition of rational and complex 
without having to wrap in order to add operators among the types.

I think "value proxy" is the wrong term, though. These aren't proxies. 
They must be tight, minimal-overhead objects with value semantics.

> Once we understand what the bignum API would look like were it defined 
> by an unprivileged library, we can proceed to add it as a primitive 
> anyway. We can even do so before actually accepting a value proxy 
> proposal if necessary, as long as we've settled what the use-API would be.

That was my approach in exploring int64/uint64. Everything I hacked 
under the hood is ready for other types: int32/uint32, float32, bignum, 
complex, ....

For operators, I implemented Christian Plesner Hansen's suggestion from 
2009:

/*
  * Value objects specified by
  *
  *  http://wiki.ecmascript.org/doku.php?id=strawman:value_objects
  *
  * define a subset of frozen objects distinguished by the JSCLASS_VALUE_OBJECT
  * flag and given privileges by this prototype implementation.
  *
  * Value objects include int64 and uint64 instances and support the expected
  * arithmetic operators: | ^ & == < <= << >> >>> + - * / % (boolean test) ~
  * unary - and unary +.
  *
  * != and ! are not overloadable to preserve identities including
  *
  *  X ? A : B <=>  !X ? B : A
  *  !(X && Y) <=>  !X || !Y
  *  X != Y <=>  !(X == Y)
  *
  * Similarly, > and >= are derived from < and <= as follows:
  *
  *  A > B <=>  B < A
  *  A >= B <=>  B <= A
  *
  * We provide <= as well as < rather than derive A <= B from !(B < A) in order
  * to allow the <= overloading to match == semantics.
  *
  * The strict equality operators, === and !==, cannot be overloaded, but they
  * work on frozen-by-definition value objects via a structural recursive strict
  * equality test, rather than by testing same-reference. Same-reference remains
  * a fast-path optimization.
  *
  * Ecma TC39 has tended toward proposing double dispatch to implement binary
  * operators. However, double dispatch has notable drawbacks:
  *
  *  - Left-first asymmetry.
  *  - Exhaustive type enumeration in operator method bodies.
  *  - Consequent loss of compositionality (complex and rational cannot be
  *    composed to make ratplex without modifying source code or wrapping
  *    instances in proxies).
  *
  * So we eschew double dispatch for binary operator overloading in favor of a
  * cacheable variation on multimethod dispatch that was first proposed in 2009
  * by Christian Plesner Hansen:
  *
  *  https://mail.mozilla.org/pipermail/es-discuss/2009-June/009603.html
  *
  * Translating from that mail message:
  *
  * When executing the '+' operator in 'A + B' where A and B refer to value
  * objects, do the following:
  *
  *  1. Get the value of property LOP_PLUS in A, call the result P
  *  2. If P is not a list, throw a TypeError: no '+' operator
  *  3. Get the value of property ROP_PLUS in B, call the result Q
  *  4. If Q is not a list, throw a TypeError: no '+' operator
  *  5. Intersect the lists P and Q, call the result R
  *  6. If R is empty, throw a TypeError: no '+' operator
  *  7. If R has more than one element, throw a TypeError: ambiguity
  *  8. If R[0], call it F, is not a function, throw a TypeError
  *  9. Evaluate F(A, B) and return the result
  *
  * Rather than use JS-observable identifiers to label operator handedness as
  * in Christian's proposal ('this+', '+this'), we use SpecialId variants that
  * cannot be named in-language or observed by proxies.
  *
  * To support operator overloading in-language, we need only provide an API
  * similar to the one Christian proposed:
  *
  *  function addPointAndNumber(a, b) {
  *    return new Point(a.x + b, a.y + b);
  *  }
  *
  *  Function.defineOperator('+', addPointAndNumber, Point, Number);
  *
  *  function addNumberAndPoint(a, b) {
  *    return new Point(a + b.x, a + b.y);
  *  }
  *
  *  Function.defineOperator('+', addNumberAndPoint, Number, Point);
  *
  *  function addPoints(a, b) {
  *    return new Point(a.x + b.x, a.y + b.y);
  *  }
  *
  *  Function.defineOperator('+', addPoints, Point, Point);
  */

I recall you were warm to Christian's suggestion way back then. Let me 
know if anything above is opaque, and I'll answer questions and feed 
changes into the comment in the patch.

> Is there a corresponding wrapping type? How does this wrapping type 
> relate to Number?

No wrapping object type -- those are legacy, to be avoided. See 
http://wiki.ecmascript.org/doku.php?id=strawman:value_objects. The main 
thing is value not reference semantics.

> I think the pragma is a bad idea but not a terrible idea. The problem 
> is that we have no choice that a floating point number that happens to 
> have an integral value prints as an integer. Given that, it would be 
> confusing to read simple integer literals into a different type.

Right. For int64/uint64 I followed C#'s L and UL suffixes. For bignum, 
'i' could be used. You remember the old value types idea of extensible 
suffixes that we discussed.

> Should bignums toString with the i suffix? In general I would say yes. 
> The biggest argument that they should not is that we should be able to 
> use bignums to index into arrays.

That's fine, and we'll want {,u}int64 indexes sooner rather than later 
(again for the C/C++ code that Emscripten and Mandreel compile to JS). 
For ES6 we had hoped to relax Array to use an integer-domain double 
(number) index type instead of uint32; Allen and I discussed this 
recently. It's really not clear whether real code (not testsuite code) 
depends on ToUint32. IE's old JScript had boundary bugs here for years. 
Did any portable code reliably depend on the ECMA-262 letter of the law 
here?

> Contradicting myself above, if we do support such a pragma, we should 
> also have a JSON.parse option that reads integers into bignums, and 
> one that prints only bignums as integers. This JSON option would also 
> print all floats using only float notation.

JSON is precision-agnostic, so this seems doable. One could imagine 
wanting a higher-precision float type than double for JSON decoding, too!

> Were we not introducing TypedArrays until ES7, would we have 
> typedArray.length be a bignum rather than a floating point number? If 
> so, is there anything we can do in ES6 to leave ourselves this option 
> in ES7?

That would be unwanted overkill. All typed arrays want is memory 
capacity, and 64 bits is more than enough. Even 53 bits is enough, and 
that's where ES6 is going, last I talked to Allen. People do want >4GB 
typed arrays.

/be

>
> On Sat, Jul 13, 2013 at 8:46 PM, Brendan Eich <brendan at mozilla.com> wrote:
>
>     We should just add
>
>     http://wiki.ecmascript.org/doku.php?id=strawman:bignums
>
>     to ES7.
>
>     /be
>
>
>     Jeff Walden wrote:
>
>         On 07/12/2013 04:53 PM, Mark S. Miller wrote:
>
>             I would like a better API -- both less likely to be used
>             unsafely and no harder (or not much harder) to use safely.
>             Suggestions?
>
>
>         In C++ you'd want MS's SafeInt, or WTF's CheckedInt, with
>         operator overloading and all that jazz.  Without operator
>         overloading the best I can think of are functions for every
>         operation, that have to be used, and if you use the raw
>         operators you take your life into your own hands.  Definitely
>         not as easy as just doing the math the "normal"-looking way.
>          I don't see a super-nice way to do this.  :-\
>
>         Jeff
>
>
>
>
> -- 
>     Cheers,
>     --MarkM
domenic at domenicdenicola.com (2013-07-17T19:03:12.135Z)
Mark S. Miller wrote:
> First, I wholeheartedly agree. JS is increasingly being used as a 
> target of compilation. When I ask people doing so what their biggest 
> pain point is, the lack of support for integers is often the first 
> thing mentioned.

int64/uint64 come up faster when compiling from C/C++. I've prototyped 
these in SpiderMonkey in a patch that I'm still rebasing, but planning 
to land pretty soon:

https://bugzilla.mozilla.org/show_bug.cgi?id=749786

```
js> i = 0L
0L
js> u = 0UL
0UL
js> i = ~i
-1L
js> u = ~u
18446744073709551615UL
js> (i>>>1)
9223372036854775807L
js> (i>>>1)==(u>>1)
true
js> (i>>>1)===(u>>1)
false
js> i === int64(-1)
true
```

> That said, there are the usual open issues we'll need to settle, most 
> of which aren't even raised by this strawman.

I've of course had to address these, and posted about it in the past -- recap below.

> Most important is the same as one of the objections raised about 
> decimal: If we treat each individual new primitive type as a one-off 
> special case, we're digging ourselves a deeper abstraction mechanism 
> hole. We first need something like value proxies, so we understand how 
> unprivileged libraries would define new numeric types -- complex, 
> rational, matrix.

Don't forget ratplex ;-).

Seriously, we would like to support composition of rational and complex 
without having to wrap in order to add operators among the types.
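
For concreteness, here is roughly what that composition could look like with the Function.defineOperator API sketched in the patch comment further down. This is a hypothetical sketch: the Rational and Complex constructors and the handler names are made up for illustration; the point is that the mixed-type `+` is added from outside, with neither class's source modified and nothing wrapped.

```js
// Hypothetical: assumes Rational {num, den} and Complex {re, im} classes
// written by unrelated libraries, plus the proposed Function.defineOperator.
function addRationalAndComplex(r, c) {
  // Promote the rational to a double and add it to the real part.
  return new Complex(r.num / r.den + c.re, c.im);
}
function addComplexAndRational(c, r) {
  return addRationalAndComplex(r, c);
}

// Register both operand orders so '+' works symmetrically across the types.
Function.defineOperator('+', addRationalAndComplex, Rational, Complex);
Function.defineOperator('+', addComplexAndRational, Complex, Rational);
```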

I think "value proxy" is the wrong term, though. These aren't proxies. 
They must be tight, minimal-overhead objects with value semantics.

> Once we understand what the bignum API would look like were it defined 
> by an unprivileged library, we can proceed to add it as a primitive 
> anyway. We can even do so before actually accepting a value proxy 
> proposal if necessary, as long as we've settled what the use-API would be.

That was my approach in exploring int64/uint64. Everything I hacked 
under the hood is ready for other types: int32/uint32, float32, bignum, 
complex, ....

For operators, I implemented Christian Plesner Hansen's suggestion from 
2009:

```js
/*
  * Value objects specified by
  *
  *  http://wiki.ecmascript.org/doku.php?id=strawman:value_objects
  *
  * define a subset of frozen objects distinguished by the JSCLASS_VALUE_OBJECT
  * flag and given privileges by this prototype implementation.
  *
  * Value objects include int64 and uint64 instances and support the expected
  * arithmetic operators: | ^ & == < <= << >> >>> + - * / % (boolean test) ~
  * unary - and unary +.
  *
  * != and ! are not overloadable to preserve identities including
  *
  *  X ? A : B <=>  !X ? B : A
  *  !(X && Y) <=>  !X || !Y
  *  X != Y <=>  !(X == Y)
  *
  * Similarly, > and >= are derived from < and <= as follows:
  *
  *  A > B <=>  B < A
  *  A >= B <=>  B <= A
  *
  * We provide <= as well as < rather than derive A <= B from !(B < A) in order
  * to allow the <= overloading to match == semantics.
  *
  * The strict equality operators, === and !==, cannot be overloaded, but they
  * work on frozen-by-definition value objects via a structural recursive strict
  * equality test, rather than by testing same-reference. Same-reference remains
  * a fast-path optimization.
  *
  * Ecma TC39 has tended toward proposing double dispatch to implement binary
  * operators. However, double dispatch has notable drawbacks:
  *
  *  - Left-first asymmetry.
  *  - Exhaustive type enumeration in operator method bodies.
  *  - Consequent loss of compositionality (complex and rational cannot be
  *    composed to make ratplex without modifying source code or wrapping
  *    instances in proxies).
  *
  * So we eschew double dispatch for binary operator overloading in favor of a
  * cacheable variation on multimethod dispatch that was first proposed in 2009
  * by Christian Plesner Hansen:
  *
  *  https://mail.mozilla.org/pipermail/es-discuss/2009-June/009603.html
  *
  * Translating from that mail message:
  *
  * When executing the '+' operator in 'A + B' where A and B refer to value
  * objects, do the following:
  *
  *  1. Get the value of property LOP_PLUS in A, call the result P
  *  2. If P is not a list, throw a TypeError: no '+' operator
  *  3. Get the value of property ROP_PLUS in B, call the result Q
  *  4. If Q is not a list, throw a TypeError: no '+' operator
  *  5. Intersect the lists P and Q, call the result R
  *  6. If R is empty, throw a TypeError: no '+' operator
  *  7. If R has more than one element, throw a TypeError: ambiguity
  *  8. If R[0], call it F, is not a function, throw a TypeError
  *  9. Evaluate F(A, B) and return the result
  *
  * Rather than use JS-observable identifiers to label operator handedness as
  * in Christian's proposal ('this+', '+this'), we use SpecialId variants that
  * cannot be named in-language or observed by proxies.
  *
  * To support operator overloading in-language, we need only provide an API
  * similar to the one Christian proposed:
  *
  *  function addPointAndNumber(a, b) {
  *    return new Point(a.x + b, a.y + b);
  *  }
  *
  *  Function.defineOperator('+', addPointAndNumber, Point, Number);
  *
  *  function addNumberAndPoint(a, b) {
  *    return new Point(a + b.x, a + b.y);
  *  }
  *
  *  Function.defineOperator('+', addNumberAndPoint, Number, Point);
  *
  *  function addPoints(a, b) {
  *    return new Point(a.x + b.x, a.y + b.y);
  *  }
  *
  *  Function.defineOperator('+', addPoints, Point, Point);
  */
```
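
To make steps 1-9 concrete, here is a small self-hosted model of the intersection-based dispatch in plain JS. The `LEFT`/`RIGHT` symbols, the registry shape, and the `defineOperator`/`applyPlus` names are inventions of this sketch, standing in for the patch's unobservable SpecialIds and native code; it only models `+`.

```js
var LEFT = Symbol('lop+');    // stand-ins for the unobservable SpecialIds
var RIGHT = Symbol('rop+');

function defineOperator(op, fn, leftType, rightType) {
  // Each registration appends the handler to the left type's LEFT list
  // and the right type's RIGHT list.
  if (op !== '+') throw new TypeError('sketch only models +');
  var lp = leftType.prototype, rp = rightType.prototype;
  (lp[LEFT] || (lp[LEFT] = [])).push(fn);
  (rp[RIGHT] || (rp[RIGHT] = [])).push(fn);
}

function applyPlus(a, b) {
  var P = a[LEFT];                                                      // step 1
  if (!Array.isArray(P)) throw new TypeError("no '+' operator");        // step 2
  var Q = b[RIGHT];                                                     // step 3
  if (!Array.isArray(Q)) throw new TypeError("no '+' operator");        // step 4
  var R = P.filter(function (f) { return Q.indexOf(f) !== -1; });       // step 5
  if (R.length === 0) throw new TypeError("no '+' operator");           // step 6
  if (R.length > 1) throw new TypeError("ambiguous '+'");               // step 7
  var F = R[0];
  if (typeof F !== 'function') throw new TypeError("no '+' operator");  // step 8
  return F(a, b);                                                       // step 9
}

// Usage, mirroring the Point examples in the comment above:
function Point(x, y) { this.x = x; this.y = y; }
defineOperator('+', function (a, b) { return new Point(a.x + b.x, a.y + b.y); },
               Point, Point);
applyPlus(new Point(1, 2), new Point(3, 4));   // Point { x: 4, y: 6 }
```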

I recall you were warm to Christian's suggestion way back then. Let me 
know if anything above is opaque, and I'll answer questions and feed 
changes into the comment in the patch.

> Is there a corresponding wrapping type? How does this wrapping type 
> relate to Number?

No wrapping object type -- those are legacy, to be avoided. See 
http://wiki.ecmascript.org/doku.php?id=strawman:value_objects. The main 
thing is value not reference semantics.
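
For illustration, contrast legacy wrapper semantics with the value semantics shown in the shell session above (the int64 behavior here is per the prototype patch, not shipping JS):

```js
// Legacy wrappers have reference semantics: two wrappers for the same
// number are distinct objects.
new Number(0) === new Number(0);   // false

// Value objects compare structurally, as in the shell session above:
// separately produced int64s with the same bits are ===.
int64(-1) === ~int64(0);           // true
```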

> I think the pragma is a bad idea but not a terrible idea. The problem 
> is that we have no choice that a floating point number that happens to 
> have an integral value prints as an integer. Given that, it would be 
> confusing to read simple integer literals into a different type.

Right. For int64/uint64 I followed C#'s L and UL suffixes. For bignum, 
'i' could be used. You remember the old value types idea of extensible 
suffixes that we discussed.

> Should bignums toString with the i suffix? In general I would say yes. 
> The biggest argument that they should not is that we should be able to 
> use bignums to index into arrays.

That's fine, and we'll want {,u}int64 indexes sooner rather than later 
(again for the C/C++ code that Emscripten and Mandreel compile to JS). 
For ES6 we had hoped to relax Array to use an integer-domain double 
(number) index type instead of uint32; Allen and I discussed this 
recently. It's really not clear whether real code (not testsuite code) 
depends on ToUint32. IE's old JScript had boundary bugs here for years. 
Did any portable code reliably depend on the ECMA-262 letter of the law 
here?
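
For concreteness, this is the ToUint32 boundary in today's ES5 array semantics that the integer-domain relaxation would remove:

```js
var a = [];
a[Math.pow(2, 32) - 2] = 'x';    // 4294967294, the largest valid array index
a.length;                        // 4294967295, i.e. 2^32 - 1

a[Math.pow(2, 32) - 1] = 'y';    // at the ToUint32 boundary: not an index at all
a.length;                        // still 4294967295; 'y' is a plain property

a.length = Math.pow(2, 32);      // RangeError: length is capped at 2^32 - 1
```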

> Contradicting myself above, if we do support such a pragma, we should 
> also have a JSON.parse option that reads integers into bignums, and 
> one that prints only bignums as integers. This JSON option would also 
> print all floats using only float notation.

JSON is precision-agnostic, so this seems doable. One could imagine 
wanting a higher-precision float type than double for JSON decoding, too!
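
Today JSON.parse reads every number as an IEEE double, so precision present in the JSON text is silently lost above 2^53; that is the gap such a parse option would close:

```js
// JSON's grammar puts no precision limit on numbers, but the parsed
// value is a double, so 2^53 + 1 silently rounds down:
JSON.parse('9007199254740993') === 9007199254740992;   // true

// A reviver can't recover it -- it only ever sees the already-rounded double.
JSON.parse('9007199254740993', function (k, v) { return v; });  // 9007199254740992
```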

> Were we not introducing TypedArrays until ES7, would we have 
> typedArray.length be a bignum rather than a floating point number? If 
> so, is there anything we can do in ES6 to leave ourselves this option 
> in ES7?

That would be unwanted overkill. All typed arrays want is memory 
capacity, and 64 bits is more than enough. Even 53 bits is enough, and 
that's where ES6 is going, last I talked to Allen. People do want >4GB 
typed arrays.
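
For scale: 53 bits of byte addressing already covers petabytes, while the 32-bit ceiling is the one people actually hit.

```js
Math.pow(2, 32);   // 4294967296 bytes = 4 GiB: the uint32 ceiling people do hit
Math.pow(2, 53);   // 9007199254740992 bytes = 8 PiB: far beyond any one buffer
```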