# Float16Array

I think this is another missed train/opportunity for the long-ago-discussed ArrayType, which would have given us the ability to define any sort of typed collection together with StructType (somehow forgotten too) and new primitives such as Type.int16, Type.uint16, Type.int32, and all the other possibilities.

You can find the current types here: harmony:typed_objects. Unfortunately float16 is missing, as are int64 and uint64 ... I hope these three will be introduced too.

The numeric value types are:

- uint8, uint8Clamped : 8-bit unsigned integers
- uint16 : 16-bit unsigned integers
- uint32 : 32-bit unsigned integers
- int8 : 8-bit signed integers
- int16 : 16-bit signed integers
- int32 : 32-bit signed integers
- float32 : 32-bit IEEE754 floating-point numbers
- float64 : 64-bit IEEE754 floating-point numbers

Each numeric value type’s coercion is the standard coercion, except for uint8Clamped, which performs a saturating coercion.
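To make the distinction concrete, here is a minimal sketch (runnable in any JS engine today) contrasting the standard coercion of `Uint8Array` with the saturating coercion of `Uint8ClampedArray`:

```javascript
const plain = new Uint8Array(1);
const clamped = new Uint8ClampedArray(1);

plain[0] = 300;   // standard coercion wraps modulo 2^8: stores 44
clamped[0] = 300; // saturating coercion clamps to the maximum: stores 255
console.log(plain[0], clamped[0]); // 44 255

plain[0] = 2.6;   // standard coercion truncates the fraction: stores 2
clamped[0] = 2.6; // clamped coercion rounds to nearest: stores 3
console.log(plain[0], clamped[0]); // 2 3
```

The saturating coercion also rounds rather than truncates, which matters for image data.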

The other value types are:

- boolean : ECMAScript primitive boolean
- string : ECMAScript primitive string

The coercions associated with these types are the standard ECMAScript algorithms [[ToBoolean]] and [[ToString]], respectively.

IIRC, OpenGL ES 3.0 (and therefore WebGL 2.0) requires full float texture support and full float32 operations in GLSL and hardware. Are there significant gains to using half floats on modern hardware, or do they effectively end up on the same path as float32s?

My question becomes: is this a problem that will go away anyway, or will it be long-lived?

If there are performance gains to be had even for the most modern hardware, then I think we should be able to push this through.

In case it's not obvious, faster DMA and larger buffer/texture capacity vs. float32. Many applications benefit hugely from having floating point data but certainly do not need float32's range and precision - for those, half/float16 is a great choice.

On Wed, Jul 29, 2015 at 4:05 PM, Alexander Jones <alex at weej.com> wrote:

In case it's not obvious, faster DMA and larger buffer/texture capacity vs. float32. Many applications benefit hugely from having floating point data but certainly do not need float32's range and precision - for those, half/float16 is a great choice.

It is not obvious exactly what part you're targeting. E.g. Is it texture transfer over the network that is the biggest saving? Is it transfer/conversion cost to GPU memory? Is it computational complexity in the shader? Is it the fact that you can fit more data into GPU memory, i.e. you run out of space later?

Just because it is in the OpenGL spec doesn't mean that it is actually useful on modern hardware since implementations theoretically are free to expand it to full float. Maybe they don't, I don't know. That's my question.

There's also the open question of whether this is used in practice. E.g. float64 for SIMD was deemed not important.

So, no, it is not obvious to some of us that are not continuously in this world, and my question hasn't been answered. I think it is plausible and I would like to help out, but we need to clarify exactly where this is a benefit and why that benefit isn't going away.

Everything you mentioned, apart from computational savings AFAIK.

(Half is used extensively in the visual effects industry.)

Float16 isn't a "legacy" format relative to Float32, in much the same way that unsigned int16 isn't a "legacy" format relative to unsigned int32.

Float16 is a mandatorily supported format in OpenGL ES 3.0 textures and vertex attributes.

If more precision than a byte is required, but less than float32 precision is acceptable, the benefits include all of:

- Using half as much network bandwidth
- Using half as much RAM
- Getting half as many GPU cache misses
- Using half as much GPU upload bandwidth
- Using half as much GPU download bandwidth
- Using half as much VRAM
- Using half as much GPU vertex streaming bandwidth
- Using half as much texel lookup bandwidth
- Using half as much fillrate
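As a back-of-the-envelope illustration of those halvings, consider the storage for a single 1024×1024 RGBA texture (the `Uint16Array` here is just stand-in storage for raw float16 bits, since JS has no half type):

```javascript
const texels = 1024 * 1024 * 4; // components of one RGBA 1024x1024 texture

const asHalf = new Uint16Array(texels);   // float16 bits stored as uint16 today
const asSingle = new Float32Array(texels);

console.log(asHalf.byteLength);   // 8388608  (8 MiB)
console.log(asSingle.byteLength); // 16777216 (16 MiB)
```

Every stage that touches those bytes (network, RAM, upload, VRAM, sampling) moves half as much data.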

WebGL 2.0 supports half-precision float (16-bit float) by default.

But for now, we must use the following dirty hack with `Uint16Array`:

```
// ref: http://stackoverflow.com/questions/32633585/how-do-you-convert-to-half-floats-in-javascript
var toHalf = (function() {
  var floatView = new Float32Array(1);
  var int32View = new Int32Array(floatView.buffer);

  /* This method is faster than the OpenEXR implementation (very often
   * used, e.g. in Ogre), with the additional benefit of rounding, inspired
   * by James Tursa's half-precision code. */
  return function toHalf(val) {
    floatView[0] = val;
    var x = int32View[0];

    var bits = (x >> 16) & 0x8000; /* Get the sign */
    var m = (x >> 12) & 0x07ff; /* Keep one extra bit for rounding */
    var e = (x >> 23) & 0xff; /* Using int is faster here */

    /* If zero, or denormal, or exponent underflows too much for a denormal
     * half, return signed zero. */
    if (e < 103) {
      return bits;
    }

    /* If NaN, return NaN. If Inf or exponent overflow, return Inf. */
    if (e > 142) {
      bits |= 0x7c00;
      /* If exponent was 0xff and one mantissa bit was set, it means NaN,
       * not Inf, so make sure we set one mantissa bit too. */
      bits |= (e === 255 && (x & 0x007fffff)) ? 1 : 0;
      return bits;
    }

    /* If exponent underflows but not too much, return a denormal */
    if (e < 113) {
      m |= 0x0800;
      /* Extra rounding may overflow and set mantissa to 0 and exponent
       * to 1, which is OK. */
      bits |= (m >> (114 - e)) + ((m >> (113 - e)) & 1);
      return bits;
    }

    bits |= ((e - 112) << 10) | (m >> 1);
    /* Extra rounding. An overflow will set mantissa to 0 and increment
     * the exponent, which is OK. */
    bits += m & 1;
    return bits;
  };
}());

var tex = new Uint16Array(4);
tex[0] = toHalf(0.5);
tex[1] = toHalf(1);
tex[2] = toHalf(123);
tex[3] = toHalf(-13);
```

The time has come.

I created a float16 package: www.npmjs.com/package/@petamoriken/float16

It exports `Float16Array`, the `DataView` methods (`getFloat16`, `setFloat16`), and `hfround`, which is like `Math.fround`.

If there is a chance, please consider it for ECMAScript Stage 0.

Why?

Adding just because it's cool is not enough to make it through the staging process.

The current spec is based on IEEE 754 and matches the floating point specs. What would Float16 be based on?

There's a better chance for a Float128, but even that one would need more time and investigation.

On Wed, May 17, 2017 at 1:05 PM, Leo Balter <leonardo.balter at gmail.com>

wrote:

The current spec is based on IEEE 754 and matches the floating point specs. What would Float16 be based on?

There's a binary16 format specified by IEEE 754-2008.

But I agree with the "why" comment. 16-bit (aka "half precision") binary floating point is not very useful here in 2017 except for interop (森建 - is interop your motivation?)...

I'd be much more interested to see support for the "new" (nine years ago) decimal formats.

-- T.J. Crowder

On Wed, May 17, 2017 at 2:05 PM, Leo Balter <leonardo.balter at gmail.com>

wrote:

Why?

Adding just because it's cool is not enough to make it through the staging

process.

Because Float16 is a widely used format, present on nearly all GPUs sold in the last 10 years. It's widely used to improve performance and reduce memory impact, which is of particular importance on mobile. The format crosses into host space during vertex data upload, texture data upload, uniform data upload, transform & feedback readback, and texture data readback.

The current spec is based on IEEE 754 and matches the floating point specs. What would Float16 be based on?

IEEE 754-2008 binary16: en.wikipedia.org/wiki/IEEE_floating_point#IEEE_754-2008 (which is what GPUs universally implement).

I'd missed that this was the continuation of a thread. Florian Bösch started it with interop in mind, in fact. So that makes sense.

-- T.J. Crowder

Also keep in mind that Float16 is the only viable format for some operations. Mobile GPUs in particular may not support any one, or some combination, of:

- Basic (nearest, clamped, non mipmapped) Float32 textures
- Linearly interpolating Float32 textures
- Rendering to Float32 textures
- Reading back Float32 textures
- Repeating (ST) Float32 textures
- Mipmapping Float32 textures
- Not clamping Float32 textures (numerically, to the 0..1 range)

If your use case requires any of these capabilities and you need to either upload or read back the data, then there is no primitive you can use to perform computations on that data in JS. The present-day workaround is to upload/read back Uint16 and then run conversion routines that turn those values into floating-point numbers you can work with on the CPU, or convert them back for use by the GPU.
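A minimal sketch of the readback half of that workaround, assuming a hypothetical `fromHalf` helper (the name is illustrative), i.e. the inverse of the `toHalf` hack quoted earlier:

```javascript
/* Hypothetical decoder: expand a 16-bit half-float bit pattern
 * (e.g. read back from the GPU into a Uint16Array) into a JS number. */
function fromHalf(h) {
  var sign = (h & 0x8000) ? -1 : 1;
  var exp = (h >> 10) & 0x1f; /* 5 exponent bits */
  var frac = h & 0x03ff;      /* 10 mantissa bits */

  if (exp === 0) {
    /* Zero or subnormal: no implicit leading 1, fixed exponent -14 */
    return sign * Math.pow(2, -14) * (frac / 1024);
  }
  if (exp === 0x1f) {
    /* All-ones exponent: Infinity if mantissa is zero, else NaN */
    return frac ? NaN : sign * Infinity;
  }
  /* Normal number: implicit leading 1, biased exponent */
  return sign * Math.pow(2, exp - 15) * (1 + frac / 1024);
}

console.log(fromHalf(0x3800)); // 0.5
console.log(fromHalf(0x3c00)); // 1
console.log(fromHalf(0xca80)); // -13
```

Every exact half-float value is also exactly representable as a JS double, so decoding is lossless; it's the encoding direction that has to worry about rounding.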

@moriken awesome work -- looks like a solid implementation.

I'm torn about whether this should be subsumed into ECMAScript. Doing so implies that all ES engines are going to have to incorporate a significant amount of code to convert to and from half-floats. Also, the tests will have to be a lot more thorough than they might otherwise be. If compiled pure ES code can achieve most of the efficiency, I'd advocate for leaving this as a library. In C/C++, developers use libraries for this purpose.

Sorry for my previous quick - through the phone - answer.

I wanted to provoke and find a reason why to add this as a proposal. Of course I missed all the context from 2 years ago on this email thread, my bad.

I'll support this and bring it to the next meeting. I'll keep you posted and updated with the results.

It's in the agenda now. tc39/agendas/blob/master/2017/05.md

Any updates on this topic since the September 2017 TC39 meeting?

rwaldron/tc39-notes/blob/637ecb13af98e480cbb0415b9d724365038d4cc0/es8/2017-09/sep-28.md#float16-typed-arrays

The proposal for canvas to use float16 in CanvasPixelFormat is being discussed; IMHO `ImageData.data` should return a `Float16Array` in this case.

WICG/canvas-color-space/blob/master/CanvasColorSpaceProposal.md

I hope for progress. Thanks.

I haven't got much in the way of public updates so far. I've been trying to get some traction for the feature with implementors, but it's been hard to get everyone positive about this being a native implementation rather than a non-native custom API.

Hmm. I tried to implement `Float16Array` in SpiderMonkey, but it seemed hard. If that is the source of implementors' concern, it might be better to split this proposal into the `DataView` methods and `Float16Array`.

Typed arrays today are specified with support for Float32Array and Float64Array.

A useful additional data type would be Float16Array. It's useful because GPUs (particularly mobiles) have support for it, and because VRAM and bandwidth limits are a legitimate concern (particularly for mobiles).

GPU support for it appears in the following forms:

These types are defined per IEEE 754-2008.

It is possible today to service half-float data types through JS by performing the conversion in JS. The code for this is, however, rather complex, and it's difficult to implement all of IEEE 754-2008 correctly (my version doesn't support rounding quite correctly: codeflow.org/experiment/half-float/main.js). It may also be unnecessarily slow. Convenience is also an issue, because JS can't override the array access [] operator to do the job.

I'd like to propose adding Float16Array to the typed array specification. When JS accesses arrays like these, the appropriate conversion is applied (as is the case for any other non-JS-native numerical type).