hexadecimal literals for floating-point
Note that some ES implementations use NaN-based tagging to distinguish Number values from internal object pointers. Allowing generation of arbitrary bit-pattern FP values would open the door to pointer spoofing. At the very least, an implementation would have to reject any such hex FP literal that it would interpret as a pointer.

Allen

On Mar 18, 2012, at 6:27 PM, Roger Andrews wrote:
> [original proposal trimmed; quoted in full at the end of this thread]
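To illustrate the NaN-boxing concern: such engines hide non-Number values inside the quiet-NaN bit-pattern space of a 64-bit double. A sketch using typed arrays, with a made-up tag layout (no real engine's encoding is implied):

```javascript
// Illustrative only: build a 64-bit pattern of the sort a NaN-boxing
// engine might reserve for a tagged pointer -- quiet-NaN exponent bits
// plus a "pointer" payload.  The tag layout here is invented.
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setUint32(0, 0xFFF80000); // sign/exponent bits all ones, quiet bit set
view.setUint32(4, 0x00001234); // fake pointer payload
var d = view.getFloat64(0);
console.log(isNaN(d)); // true -- every such pattern decodes as NaN
```

Since every exponent-all-ones pattern with a nonzero significand reads back as NaN, a literal syntax that could spell such patterns directly would collide with the engine's reserved pointer space.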
Perhaps I spoke a little too loosely there. The hexadecimal exponential format allows control over the bits of a finite Number, not arbitrary bit patterns. Just like the decimal exponential format.
The hex format does not allow the programmer to mess with IEEE Infinities, NaNs, signaling NaNs, NaN payloads. (Except that 0x1p1024 overflows to Infinity, just like 1e310.)
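The overflow behaviour is easy to check with today's syntax, assuming 0x1p1024 would mean Math.pow(2, 1024):

```javascript
// 2^1023 is the largest representable power of two; 2^1024 overflows,
// exactly as an over-range decimal literal does.
console.log(Math.pow(2, 1023)); // 8.98846567431158e+307 (finite)
console.log(Math.pow(2, 1024)); // Infinity
console.log(1e310);             // Infinity
```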
PS: My two examples should be 0x1p-52 and 0x1p1023 (I forgot the "0x" prefix for hexadecimal).
This isn't a huge "Ask", to use your term, but it's also not a big win. We can talk about it off-agenda at next week's meeting and try to get a feel for the odds of adding it. I'll bring it up. Thanks,

/be
As a C-like language, JavaScript has hexadecimal integer literals like 0x123ABC.
C99 introduced hexadecimal floating-point literals, see:
http://publib.boulder.ibm.com/infocenter/zos/v1r12/topic/com.ibm.zos.r12.cbclx01/lit_fltpt.htm#lit_fltpt__hex_float_constants
of the general form:
    [sign] "0x" [hexdigits] ["." hexdigits] ["p" [sign] decdigits]
(with the proviso that at least one significant hexdigit must appear).
The letter 'p' means "times 2 to the power of"; and the exponent field is signed decimal. Case is ignored.
(In C the exponent field must appear in order to disambiguate a trailing 'f' = float flag, but this would not be necessary in JavaScript.)
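Since current JavaScript has no such literal, the grammar above can be prototyped as a small parser. The parseHexFloat helper below is a hypothetical sketch, not a proposed API; a real implementation would also need correctly rounded conversion for long digit strings:

```javascript
// Hypothetical sketch of the C99-style grammar:
//   [sign] "0x" [hexdigits] ["." hexdigits] ["p" [sign] decdigits]
function parseHexFloat(s) {
  var m = /^([+-]?)0x([0-9a-f]*)(?:\.([0-9a-f]*))?(?:p([+-]?\d+))?$/i.exec(s);
  if (!m) return NaN;
  var intPart = m[2] || "", fracPart = m[3] || "";
  if (intPart.length + fracPart.length === 0) return NaN; // need >= 1 hexdigit
  var value = parseInt(intPart || "0", 16);
  for (var i = 0; i < fracPart.length; i++) {
    // each fraction digit is worth 16^-(i+1); may double-round for long inputs
    value += parseInt(fracPart.charAt(i), 16) * Math.pow(16, -(i + 1));
  }
  value *= Math.pow(2, m[4] ? parseInt(m[4], 10) : 0); // 'p' = times 2^exp
  return m[1] === "-" ? -value : value;
}

console.log(parseHexFloat("0x1p-52"));  // 2.220446049250313e-16
console.log(parseHexFloat("0x1.8p1")); // 3
```

Note that with no "p" exponent the function reduces to today's hexadecimal integer literal, so the proposal is a strict extension of the existing syntax.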
Would it be a Big Ask to have hexadecimal floating-point literals in ES6? It extends the existing hexadecimal integer literal syntax slightly to encompass all the finite Numbers.
This gives the programmer the ability to precisely specify every bit of a Number in a well-understood form. And it is a natural match with the binary64 sign-significand-exponent fields. For example:
    var EPSILON = 1p-52,
        MAX_POWTWO = 1p1023;
instead of
    var EPSILON = 1 / 0x10000000000000,
        MAX_POWTWO = 1 / (Number.MIN_VALUE * 0x8000000000000);
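For what it's worth, the workaround expressions really do hit the intended powers of two exactly: 0x10000000000000 is 2^52, and since Number.MIN_VALUE is 2^-1074 and 0x8000000000000 is 2^51, their product is 2^-1023. Checked against Math.pow:

```javascript
// Today's indirect spellings of 2^-52 and 2^1023, verified exactly.
var EPSILON = 1 / 0x10000000000000;                        // = 2^-52
var MAX_POWTWO = 1 / (Number.MIN_VALUE * 0x8000000000000); // = 2^1023

console.log(EPSILON === Math.pow(2, -52));     // true
console.log(MAX_POWTWO === Math.pow(2, 1023)); // true
```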
JavaScript Numbers can be integer or fractional, and decimal literals can express both — so why do hexadecimal literals discriminate against fractions?