Mark S. Miller (2013-07-15T13:56:14.000Z)
On Sun, Jul 14, 2013 at 10:39 PM, Brendan Eich <brendan at mozilla.com> wrote:

> int64/uint64 come up faster when compiling from C/C++. I've prototyped
> these in SpiderMonkey in a patch that I'm still rebasing, but planning to
> land pretty soon:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=749786

I don't think we should introduce precision limited integers into JS, ever.
The only point would be to do something weird on overflow of the precision
limit. The traditional C-like weirdness is wrapping -- the only advantage
being that it avoids even needing to check for overflow.

> I recall you were warm to Christian's suggestion way back then. Let me
> know if anything above is opaque, I'll answer questions and feed changes
> into the comment in the patch.

I recall that I was generally warm but had complex reactions. I no longer
recall what they were ;). Agreed that naturally composing complex and
rational is a good test case. Will take a look.

> That would be unwanted, overkill. All typed arrays want is memory
> capacity, and 64 bits is more than enough. Even 53 is enough, and that's
> where ES6 is going, last I talked to Allen. People do want >4GB typed
> arrays.

When a bignum represents a small integer, there's no reason that it needs
to be any slower than a JS floating point number representing a small
integer. Most JS implementations already optimize the latter to store the
small integer in the "pointer" with a tag indicating that the non-tag bits
are the small integer value. Exactly the same trick has been used for
bignums in Lisps and Smalltalks for many decades. As long as the numbers
represent small integers, I think the only differences would be semantics,
not performance.
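To make the tagging trick concrete, here is a minimal sketch of the kind of
small-integer ("Smi") encoding V8 uses on 32-bit platforms; the function names
are just for illustration, not any engine's API. A value whose low bit is 0
carries a 31-bit integer in its remaining bits, and a value whose low bit is 1
is a pointer to a heap object, which is where an overflowed bignum would live:

    // Illustrative only: low bit 0 = small integer, low bit 1 = heap pointer.
    function tagSmi(n)   { return (n << 1) | 0; }  // for -2^30 <= n < 2^30
    function isSmi(v)    { return (v & 1) === 0; }
    function untagSmi(v) { return v >> 1; }
    // Adding two tagged small integers needs no untagging at all:
    //   tagSmi(a) + tagSmi(b) === tagSmi(a + b), as long as the sum still fits.

As long as a computation stays in that range, the bignum path never allocates
and costs about the same as today's tagged small integers; only on overflow
does it fall back to a heap-allocated representation.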
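And on the overflow point above: the difference between a wrapping int64/uint64
and a bignum is already visible with 32-bit wrapping via ToInt32. A rough
sketch, with a hypothetical checkedAdd32 helper (not a proposed API) standing
in for the overflow check that a bignum never needs:

    (0x7fffffff + 1) | 0;             // -2147483648: silently wraps, C-style
    function checkedAdd32(x, y) {     // hypothetical helper, for contrast only
      var r = x + y;                  // exact in doubles for 32-bit inputs
      if (r > 0x7fffffff || r < -0x80000000) {
        throw new RangeError("int32 overflow");
      }
      return r | 0;
    }

The wrapping form is cheaper precisely because it skips that range check, which
is the "advantage" I'd rather not import into JS.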