Brendan Eich (2013-07-15T16:01:25.000Z)
Mark S. Miller wrote:

> I don't think we should introduce precision limited integers into JS,
> ever. The only point would be to do something weird on overflow of the
> precision limit. The traditional C-like weirdness is wrapping -- the
> only advantage being that it avoids even needing to check for overflow.

We already have precision-limited integers in JS, both via the bitwise-logical and shift operators, and via typed arrays / binary data. We need 64-bit ints for the latter, scalar and SIMD-vector, as well as an arbitrary-length typed array element type. This ship has sailed; bignums are not equivalent or a superset. These types are in the hardware, and they need to be in JS for the "memory-safe low road" role it plays for WebGL, asm.js, etc.

> When a bignum represents a small integer, there's no reason that it
> needs to be any slower than a JS floating point number representing a
> small integer. Most JS implementations already optimize the latter to
> store the small integer in the "pointer" with a tag indicating that
> the non-tag bits are the small integer value. Exactly the same trick
> has been used for bignums in Lisps and Smalltalks for many decades.

Sure, I'm familiar -- but again, those older implementations did not face the performance constraints of the "low road" that JS does, nor were they quite so aggressively optimized. I'm not saying bignums couldn't be optimized as doubles are today -- they could. Rather, JS has number and always will, and the speculative optimizations toward int are sunk cost (good sunk cost, in a practical sense). Waiting for bignums to land before typed arrays, just to support element types bigger than any memory in the foreseeable future, and to get the same optimizations as number, does not fly. We won't wait in ES6 -- we will be lucky to get integer-domain doubles instead of uint32 lengths for typed arrays, if not arrays.

> As long as the numbers represent small integers, I think the only
> differences would be semantics, not performance.
Both, in the "short run", in practice. But we aren't even doing bignums in ES6, and we are doing typed arrays / binary data. We need to satisfy >4G memory buffer use cases now, and be future-proof. 53 bits is enough.