Big integer, Big float, and operator overloading ideas

# Fabrice Bellard (8 months ago)

I have designed a JavaScript extension which supports big integers, big floats and operator overloading. It is not 100% compatible with JavaScript because I wanted no distinction between legacy JavaScript numbers and big integers or big floats, but most of the language semantics are preserved. I guess some of the ideas may be useful for a 100% compatible design, in particular for the operator overloading.

I published a summary of the changes at numcalc.com/jsbignum.pdf . A preliminary implementation running a numerical calculator is available at numcalc.com . The source code of the calculator ( numcalc.com/jscalc.js ) shows how the bignum extensions can be used to easily manipulate fractions, complex numbers, polynomials and matrices.
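
For a feel of what the operator-overloading part buys, here is a minimal fraction type in today's standard JavaScript. The names and layout are my own illustration, not code from the proposal: every arithmetic step must go through a named method, whereas with overloading `a.add(b)` could simply be written `a + b`.

```javascript
// Minimal userland Fraction over BigInt, always kept in lowest terms.
// Illustrative only; the jsbignum extension would make `a + b` work directly.
function gcd(a, b) { return b === 0n ? (a < 0n ? -a : a) : gcd(b, a % b); }

class Fraction {
  constructor(num, den = 1n) {
    if (den < 0n) { num = -num; den = -den; }        // keep the sign in the numerator
    const g = gcd(num < 0n ? -num : num, den) || 1n; // reduce to lowest terms
    this.num = num / g;
    this.den = den / g;
  }
  add(o) { return new Fraction(this.num * o.den + o.num * this.den, this.den * o.den); }
  mul(o) { return new Fraction(this.num * o.num, this.den * o.den); }
  toString() { return `${this.num}/${this.den}`; }
}

const a = new Fraction(1n, 3n);
const b = new Fraction(1n, 6n);
console.log(a.add(b).toString()); // 1/2
```

The jscalc.js source linked above shows the same kinds of types (fractions, complex numbers, polynomials, matrices) written against the extension itself.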

Fabrice.

# David Teller (8 months ago)

That proposal is quite interesting, but I'm a bit scared about potential for breakage. Maybe Integer/Float values should be introduced only in your "use math" mode? Would that be sufficient?

# Fabrice Bellard (8 months ago)

On 03/21/2018 11:15 AM, David Teller wrote:

That proposal is quite interesting, but I'm a bit scared about potential for breakage. Maybe Integer/Float values should be introduced only in your "use math" mode? Would that be sufficient?

Yes, if 100% compatibility were needed, the Integer/Float values could be supported in a specific mode such as "use bignum" or "use math" (the current "use math" mode goes a bit further because it changes the behavior of some JavaScript operators such as "^" (power instead of xor), "/" (fraction result if both operands are integers) and "%" (Euclidean remainder)).

Best,

Fabrice.

# Fabrice Bellard (6 months ago)

A new revised version of the "BigNum extensions" is available at numcalc.com/jsbignum.pdf . This new version is 100% compatible with standard JavaScript with the addition of a "use bigint" mode. It is split into 4 proposals:

  1. Overloading of the standard operators to support new types such as complex numbers, fractions or matrices.

  2. Bigint mode where arbitrarily large integers are available by default (no "n" suffix is necessary as in the BigInt proposal at tc39.github.io/proposal-bigint ).

  3. Arbitrarily large floating point numbers in base 2 using the IEEE 754 semantics.

  4. Optional "math" mode which modifies the semantics of the division, modulo and power operators. The division and power operators return a fraction when both operands are integers, and the modulo operator is defined as the Euclidean remainder.

A complete demo is available at numcalc.com . The command "\mode [std|bigint|math]" can be used to switch between the standard JavaScript mode, the bigint mode and the math mode. In standard JavaScript mode, the complete TC39 BigInt proposal is supported. In the demo, the default floating point precision is set to 128 bits. It can be set back to the default JavaScript precision with "\p f64" or "\p 53 11".
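
As a concrete illustration of the "math" mode remainder semantics: the Euclidean remainder is always non-negative, unlike the truncated remainder of the built-in `%`. This is my own sketch of the definition in standard JavaScript, not code from the extension:

```javascript
// Euclidean remainder: result is in [0, |b|), regardless of operand signs.
// Standard JavaScript's `%` instead takes the sign of the dividend.
function emod(a, b) {
  const r = a % b;
  return r < 0 ? r + Math.abs(b) : r;
}

console.log(-7 % 3);       // -1  (truncated remainder, standard JS)
console.log(emod(-7, 3));  //  2  (Euclidean remainder: -7 = 3*(-3) + 2)
console.log(emod(7, -3));  //  1  (7 = (-2)*(-3) + 1)
```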

Fabrice.

# kai zhu (6 months ago)

what you’ve done is interesting and impressive; but an integration-level concern (if tc39 is to consider standardizing your extension, rather than keep it userland) is how would a web-project go about baton-passing these arbitrary-precision numbers between browser <-> server <-> persistent-storage via JSON? what would happen if you pass an arbitrarily large float as a “number” type to current mysql (or native-module sqlite3) driver?

playing with your live web-demo @ numcalc.com, it seems JSON.stringify has divergent behavior between math-equivalent large floats (full precision is preserved) and large integers (an error is thrown, since the literal is a bigint):

mjs > 12345678901234567890.0e0 === 12345678901234567890
true

mjs > typeof 12345678901234567890.0e0
"number"

mjs > JSON.stringify(12345678901234567890.0e0)
"12345678901234567890"

mjs > typeof 12345678901234567890
"bigint"

mjs > JSON.stringify(12345678901234567890)
TypeError: bigint are forbidden in JSON.stringify
    at to_str (stdlib.js)
    at stringify (stdlib.js)
    at <eval> (<evalScript>)
    at evalScript (native)
    at eval_and_print (repl.js)
    at setPrec (native)
    at handle_cmd (repl.js)
    at readline_handle_cmd (repl.js)
    at handle_key (repl.js)
    at handle_char (repl.js)
    at handle_byte (repl.js)

mjs > JSON.parse(JSON.stringify(1.12345678901234567890123456789e123456))
1.12345678901234567890123456789e+123456 // takes ~200ms to process

mjs > JSON.parse(JSON.stringify(1.12345678901234567890123456789e-123456))
1.12345678901234567890123456789e-123456 // takes ~200ms to process

mjs > JSON.stringify(1.1e1234567890) // unresponsive

mjs > JSON.stringify(1.1e-1234567890) // unresponsive

kai zhu kaizhu256 at gmail.com

# kai zhu (6 months ago)

oh, also i'm not a tc39 member, in case i made it sound like i was ^^;;;

kai zhu kaizhu256 at gmail.com

# Fabrice Bellard (6 months ago)

JSON.stringify is currently fully compatible with the TC39 BigInt proposal, so it throws an exception in the case of a bigint value. You can add:

BigInt.prototype.toJSON = function() { return this.toString(); }

to handle this case. It could also be possible to modify the behavior of JSON.stringify in bigint mode so that it does not throw an exception in case of a bigint value.
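
With that one-line patch in place, bigints serialize as decimal strings. A quick check, which also runs in Node.js since it implements the TC39 BigInt proposal:

```javascript
// Patch from the message above: JSON.stringify consults toJSON
// before serializing, so bigints become decimal strings.
BigInt.prototype.toJSON = function () { return this.toString(); };

console.log(JSON.stringify({ big: 12345678901234567890n, small: 42 }));
// {"big":"12345678901234567890","small":42}
```

Note that the bigint comes back from JSON.parse as a plain string; turning it into a bigint again remains the caller's responsibility.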

Regarding the unresponsive cases you noticed with large exponents in floating point literals, it is a known problem of the current code which will be corrected.

Best,

Fabrice.

# Anders Rundgren (6 months ago)

On 2018-05-28 21:05, Fabrice Bellard wrote:

Hi,

JSON.stringify is currently fully compatible with the TC39 BigInt proposal, so it throws an exception in the case of a bigint value. You can add:

I haven't looked into the BigInt proposal but if it breaks I-JSON/JSON.parse() I would vote against it.

The JSON community is unfortunately already pretty confused and divided: cyberphone/I-JSON-Number-System#existing-solutions

Microsoft's JSON/.NET guru's view on JSON.parse(): JamesNK/Newtonsoft.Json#1706

Anders

# kai zhu (6 months ago)

@anders, i'm just brainstorming whether JSON could meet most industry needs for large numbers (with minimal changes), if:

  1. JSON.stringify was enhanced to stringify fixed-precision BigInt64 and BigDecimal128 as strings (including suffix to make userland parsing easier)

  2. JSON.parse is left untouched

state = {
    "bigInt64": -9223372036854775807n,
    "bigDecimal128": -9.99999999999999999999999999999999e6144dd
}

// JSON.stringify will stringify BigInt64 and BigDecimal128 as strings
JSON.stringify(state) = '{\
    "bigInt64": "-9223372036854775807n",\
    "bigDecimal128": "-9.99999999999999999999999999999999e6144dd"\
}'

// JSON.parse will NOT un-stringify BigInt64 and BigDecimal128 (userland responsibility)
JSON.parse(JSON.stringify(state)) = {
    "bigInt64": "-9223372036854775807n",
    "bigDecimal128": "-9.99999999999999999999999999999999e6144dd"
}
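
The stringify half of this scheme, plus the "userland responsibility" parse step as an opt-in reviver, can already be approximated today. BigDecimal does not exist in current JavaScript, so the sketch below (my own code, borrowing kai's "n" suffix convention) covers only BigInt:

```javascript
// Userland approximation of the scheme above, BigInt only.
// The replacer encodes bigints as strings with an "n" suffix;
// the reviver is the opt-in userland decoding step.
function stringifyWithBigInt(obj) {
  return JSON.stringify(obj, (key, value) =>
    typeof value === "bigint" ? value.toString() + "n" : value);
}

function parseWithBigInt(text) {
  return JSON.parse(text, (key, value) =>
    typeof value === "string" && /^-?\d+n$/.test(value)
      ? BigInt(value.slice(0, -1)) // strip the "n" suffix
      : value);
}

const state = { bigInt64: -9223372036854775807n };
const wire = stringifyWithBigInt(state);
console.log(wire); // {"bigInt64":"-9223372036854775807n"}
console.log(parseWithBigInt(wire).bigInt64 === -9223372036854775807n); // true
```

The obvious caveat: a genuine user string that happens to look like "42n" would be revived into a bigint, which is exactly why the reviver has to stay an opt-in userland step rather than a change to JSON.parse.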

# Anders Rundgren (6 months ago)

On 2018-05-29 10:32, kai zhu wrote:

@anders, i'm just brainstorming whether JSON could meet most industry needs for large numbers (with minimal changes), if:

  1. JSON.stringify was enhanced to stringify fixed-precision BigInt64 and BigDecimal128 as strings (including suffix to make userland parsing easier)

  2. JSON.parse is left untouched

state = {
     "bigInt64": -9223372036854775807n,
     "bigDecimal128": -9.99999999999999999999999999999999e6144dd
}

// JSON.stringify will stringify BigInt64 and BigDecimal128 as strings
JSON.stringify(state) = '{\
     "bigInt64": "-9223372036854775807n",\
     "bigDecimal128": "-9.99999999999999999999999999999999e6144dd"\
}'

// JSON.parse will NOT un-stringify BigInt64 and BigDecimal128 (userland responsibility)
JSON.parse(JSON.stringify(state)) = {
     "bigInt64": "-9223372036854775807n",
     "bigDecimal128": "-9.99999999999999999999999999999999e6144dd"
}

This is exactly what I'm hoping for. Unfortunately there's no community dealing with big numbers in JSON. Microsoft's solution for .NET and Oracle's for Java have nothing in common, and the OpenAPI/Swagger folks run their own show as well.

It is really entirely in "userland" = do whatever works for you :-)

Anders