nits on BigInt Proposal
The Node team was one of the biggest supporters of this proposal, as it enables much better interoperability with the host environment.
My Temporal proposal is kicking around a dependency on BigInt to allow nanosecond-precision timestamps, which are common in scientific applications.
For myself, as a general-purpose web dev, I like the idea of no-frills integer math. Doing integer math with the Number type is mildly annoying sometimes.
BigDecimal is a MUST for accounting.
Main reasons:
- JS number precision is too limited (about 16 significant digits)
- Decimal numbers are not represented exactly by JS numbers, so comparisons give surprising results (0.1 + 0.2 !== 0.3).
- Lossy round-trips with SQL databases: decimals have up to 38 digits of precision in Oracle and SQL Server, and 65 (!!) in MySQL.
JSON serialization is addressed by serializing to a string, as with dates (there are no date literals in JS/JSON).
Same for SQL. In the absence of a BigDecimal type on the JS side, values are passed as strings.
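The string round-trip Bruno describes can be sketched with BigInt and the existing replacer/reviver hooks (the record shape and the `balance` field are made up for illustration):

```javascript
// Without a replacer, JSON.stringify throws a TypeError on BigInt values,
// so we serialize them to strings:
const record = { id: 'a1', balance: 12345678901234567890n };

const json = JSON.stringify(record, (key, value) =>
  typeof value === 'bigint' ? value.toString() : value
);
// json is '{"id":"a1","balance":"12345678901234567890"}'

// On the way back in, a reviver restores the known field:
const parsed = JSON.parse(json, (key, value) =>
  key === 'balance' ? BigInt(value) : value
);
console.log(parsed.balance === 12345678901234567890n); // true
```

The cost, as noted below in the thread, is that both sides must agree out of band on which fields are big integers.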
Bruno
On Fri, Aug 4, 2017 at 7:52 AM, kai zhu <kaizhu256 at gmail.com> wrote:
looking at the use-cases for this feature @ tc39/proposal-bigint#use-cases, i'm not convinced it improves everyday programming, or outweighs the benefit and simplicity of having a single number type.
my nits are:
- will this break or complicate existing/future code that does typeof checks for numbers? what are the costs of retooling nodejs mongodb / mysql / etc drivers and the apps that use them?
From what I can tell, it's not a number; it's a different type entirely, and it does not interop with existing numbers.
- how will JSON.parse and JSON.stringify deal with BigInt? the mentioned use-cases for wire-protocols, guids, timestamps, and fixed-point BigDecimal aren’t very useful if it can’t easily be serialized / deserialized across db / persistent storage
Apparently it will toString and require a reviver.
- are there actual common algorithmic use-cases in frontend programming or nodejs apps that need arithmetic on integers greater than 52-bits? should that rather be the domain of webassembly?
it's definitely NOT a webassembly thing, because it's a high level structure.
It would simplify computing large factorials... instead of manually chunking stuff to 5 decimal digits or 4 hex digits... not that it's much of a use case...
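For the record, the factorial case really is a one-liner loop with native BigInt, no manual chunking into decimal or hex limbs (a minimal sketch, assuming an engine with BigInt support):

```javascript
// Factorial via native BigInt: exact results well past Number's 2^53 range.
function factorial(n) {
  let acc = 1n;
  for (let i = 2n; i <= n; i++) acc *= i;
  return acc;
}

console.log(factorial(21n).toString()); // 51090942171709440000
```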
But there's already a library for this: www.npmjs.com/package/bigint. Why would this be something to add to the language any more than extending JSON?
Regarding currency manipulation; I don't see that as something that is as useful on the client side as it is on a server side... so it doesn't really need to be in every javascript implementation.
On Fri, Aug 4, 2017 at 9:10 AM, J Decker <d3ck0r at gmail.com> wrote:
And, it seems more like a way to get around no operator overloading, by saying 'this specific case warrants it' but not vectors or complex numbers.
On Fri, Aug 4, 2017 at 10:16 AM, Sebastian Malton <sebastian at malton.name> wrote:
I remember there was a proposal for operator overloading. Was it decided against? I think that packages could solve this and many other problems if there were overloading.
I looked through the outstanding proposals and didn't see any regarding operator overloading. I'm personally not a fan of operator overloading, although one can do clever things with abstract types like neurons and neural meshes (for building a merged input, N1 = N2 + N3; though it's actually better to just use N1 = N2.Add(N3), which gives the ability to add other parameters into the mix anyway). I'm not even a fan of C++ function overloading; when porting backward from C++ to C, using well-named functions for appropriate inputs turned out to be much better for clarity and maintenance.
With such vehement opposition to transparent things like extending JSON to support a wider range of valid inputs, or even adding an additional namespace for a separate version that does, I don't see how this proposal has made it so far, since it adds an entirely new type that is sort of like Number but really nothing at all like Number.
Sebastian
> I remember there was a proposal for operator overloading. Was it decided against? I think that packages could solve this and many other problems if there was overloading.
IMO Operator overloading is better than another built-in number type. It solves a wider range of problems: complex numbers, vectors, etc.
Even with decimals or BigInts, overloading leaves more options open. Some people need unlimited precision (computing zillions of decimals of math constants). Others prefer a more compact, more efficient decimal type with only 38 decimal digits, and may get picky about rounding rules. There may not be a "one size fits all", and it would be great to have the flexibility to package new types and their operator overloads so that they can be imported.
Operator overloading alone is not sufficient. With numbers, it is also nice to have a syntax for literals (like 12.75m in C#).
What is the most advanced proposal on this? There were some hints on "value class syntax" and "literal suffix support" in a slide deck from Brendan (www.slideshare.net/BrendanEich/int64 - slides 12 and 13). I found that very interesting but I don't know if it got any traction.
Bruno
Inline
On Fri, Aug 4, 2017 at 10:52 AM kai zhu <kaizhu256 at gmail.com> wrote:
looking at the use-cases for this feature @ tc39/proposal-bigint#use-cases, i'm not convinced it improves everyday programming, or outweighs the benefit and simplicity of having a single number type.
...
- are there actual common algorithmic use-cases in frontend programming or nodejs apps that need arithmetic on integers greater than 52-bits? should that rather be the domain of webassembly?
The Bosch BMP280 barometric pressure sensor is capable of producing a high accuracy, compensated pressure value that requires 64 bits, which JavaScript is not presently capable of representing. This is only one of many similar examples in this domain.
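To make the gap concrete, here is a small sketch with a made-up 64-bit value (the reading itself is hypothetical, not an actual BMP280 output):

```javascript
// A hypothetical 64-bit sensor reading just above 2^62:
const reading = (1n << 62n) + 1n;
console.log(reading.toString());          // '4611686018427387905', exact
// Doubles near 2^62 are spaced 1024 apart, so converting to Number
// silently drops the low bits:
console.log(Number(reading) === 2 ** 62); // true -- the +1 vanished
```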
FYI, you could create a BigFloat class using two BigInts: one for the value, the other for the decimal point position. You could go from there to model arbitrary-precision values, using a bit of math to ensure the two fields remain consistent.
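A minimal sketch of that two-BigInt idea (all names here are hypothetical; this ignores sign handling and rounding for division):

```javascript
// A value is digits * 10^(-scale): digits is a BigInt holding every
// significant digit, scale is the number of implied decimal places.
class BigFloat {
  constructor(digits, scale) {
    this.digits = digits; // BigInt
    this.scale = scale;   // plain number
  }
  static parse(s) {
    const [int, frac = ''] = s.split('.');
    return new BigFloat(BigInt(int + frac), frac.length);
  }
  mul(other) {
    // Exact: digits multiply, scales add, no precision loss.
    return new BigFloat(this.digits * other.digits, this.scale + other.scale);
  }
  toString() {
    if (this.scale === 0) return this.digits.toString();
    const str = this.digits.toString().padStart(this.scale + 1, '0');
    return str.slice(0, -this.scale) + '.' + str.slice(-this.scale);
  }
}

console.log(BigFloat.parse('0.1').mul(BigFloat.parse('0.2')).toString()); // 0.02
```

Note that 0.1 * 0.2 comes out exactly, which is the whole point of a decimal representation.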
Isiah Meadows me at isiahmeadows.com
Looking for web consulting? Or a new website? Send me an email and we can get started. www.isiahmeadows.com
no, just use one BigInt padded with a bunch of zeroes (e.g. instead of $12.34, use 1234000000, and round to the nearest integer for division). this is what i meant by “fixed-point” decimal. for current javascript double-precision numbers, you can do exact integer arithmetic up to 2^53 = 9,007,199,254,740,992, which at my previous company was good enough to use for currency conversions.
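That fixed-point scheme can be sketched with today's numbers (the scale factor and names below are assumptions for illustration):

```javascript
// Fixed-point currency with plain numbers: 6 implied decimal places.
const SCALE = 1e6;
const toFixed = (x) => Math.round(x * SCALE); // dollars -> scaled integer

const price = toFixed(12.34); // 12340000
const rate  = toFixed(0.85);  //   850000
// Multiplying two scaled values doubles the scale, so divide once and round:
const converted = Math.round(price * rate / SCALE); // 10489000
console.log(converted / SCALE); // 10.489
// Caveat: intermediate products must stay below Number.MAX_SAFE_INTEGER
// (2^53 - 1), which is the limit kai mentions above.
```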
Which is theoretically equivalent. Either way, we can implement a BigDecimal (or similar) in terms of a BigInt, which was my point.
Isiah Meadows me at isiahmeadows.com
@bruno, i'm wondering if having a DecimalFloat128Array (based on the ieee 754 standard) is good enough for accounting (with 34 decimal digits of precision)? like existing database drivers, you could only get/set strings, but infix/in-place operators wouldn’t have that restriction.
e.g.:
aa = new DecimalFloat128Array(['9823749.82742']);
// aa[0] can only be exposed as the string '9823749.82742'
console.assert(typeof aa[0] === 'string' && aa[0] === '9823749.82742');
// aa[0] can only be set using a string as well
aa[0] = '87834398978.798';

aa = new DecimalFloat128Array(['.1']);
bb = new DecimalFloat128Array(['3']);
// cc is assigned the string '0.3',
// but engines should be able to easily optimize hotspots that use
// infix and in-place operators with native types,
// if implicit coercion is disallowed between DecimalFloat128Array and other types.
cc = aa[0] * bb[0];
aa[0] *= bb[0];

// guidance for database drivers would be to implement string get/set as well
aa = new DecimalFloat128Array(['97324927.8934723']);
mysqlDriver.execute(
    'INSERT INTO mydecimaltable (?,?,?);',
    ['id1234', aa[0], 'foo'],
    function (error) {
        mysqlDriver.execute(
            'SELECT decimalValue,foo FROM mydecimaltable WHERE id=id1234;',
            function (error, valueList) {
                // db-driver exposes valueList[0] as the string '97324927.8934723'
                console.assert(typeof valueList[0] === 'string' && valueList[0] === '97324927.8934723');
            }
        );
    }
);
pros:
- requires no new language-syntax
- avoids introducing new typeof results to the javascript language, which avoids compatibility risks with existing database drivers (strings are used to get/set values)
cons:
- arithmetic for scalars is weird: aa[0] + bb[0] (instead of aa + bb)
- does not support arbitrary precision (but are there common javascript use-cases requiring arbitrary precision?)
Kai, mind commenting about this in the proposal's repo (filing a new issue there), where you'll more likely get feedback?
On Feb 10, 2018, at 12:26 AM, Isiah Meadows <isiahmeadows at gmail.com> wrote:
Kai, mind commenting about this in the proposal's repo (filing a new issue there), where you'll more likely get feedback?
yea, and got feedback. turns out there’s a serious footgun [1] with a string-only interface :(
@kai Yes, DecimalFloat128 is sufficient for financial applications.
If a DecimalFloat128Array type is added to the language, doesn't this imply that a DecimalFloat128 type also exists?
If, as you wrote, typeof aa[0] === 'string', then aa[0] + bb[0] would be string concatenation, which would be utterly confusing. typeof aa[0] must be 'DecimalFloat128'.
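The footgun Bruno describes is easy to demonstrate with plain strings today:

```javascript
// If decimal values were exposed as strings, `+` would concatenate
// while the other operators silently coerce to Number:
const a = '0.1', b = '3';
console.log(a + b); // '0.13' -- concatenation, not an arithmetic sum
console.log(a * b); // 0.30000000000000004 -- coerced to binary floats
```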
Bruno