ES3.1 Proposal Working Draft

# Maciej Stachowiak (18 years ago)

(Moving from another mailing list.)

On Feb 20, 2008, at 10:54 AM, Kris Zyp wrote:

"No new syntax" actually does create a meaningful benefit, which is
the ability to do graceful degradation in browsers that don't support
the new language. New built-in types, properties and methods can be
tested for from script before using them, but new syntax can't, since
its presence alone will cause a syntax error at parse time. So it is
less arbitrary than some other possible rules.

True in the immediate future, but there will be a reverse effect
down the road. At some point (hopefully) ES3.1 and higher will be
pervasive enough that devs just use it without detection, and only
some browsers will support ES4. At that point, having syntax already
in ES3.1 means the syntax can be used, and the methods/properties
that we deferred to ES4 can be detected and used optionally. At that
point in the future, syntax that we don't include won't be useful
for the reason you mention, but properties/methods we don't include
can be detected and optionally used.

I'm not sure if the concrete benefit of "no new syntax" outweighs the
benefits of possible pieces of new syntax. I just wanted to explain
why I don't think it is totally arbitrary. If not this rule, then I am
not sure what rule we would set. If we start adding selected new
syntax to ES3.1, then it just turns into "the parts of ES4 that
Microsoft is willing to implement". While I can see how this would be
of practical interest to scripters, I do not think it is a principled
way to design a specification.

ECMAScript in general has an issue with addition of syntax that the
other major web standards don't. CSS and HTML both have a simple
surface syntax and a well-defined way to handle unknown terms. I think
we should consider adding features to ES4 and ES3.1 to allow future
extension of the syntax in a way that degrades gracefully.
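The asymmetry Maciej describes -- detectable methods versus fatal syntax -- can be sketched in ES3-era script. (The helper names below are illustrative only, not from the correspondence; `Array.prototype.map` stands in for some then-new built-in method.)

```javascript
// Feature detection works for new built-ins: this file parses in any
// ES3 engine, and the new method is used only where it exists.
function mapValues(arr, fn) {
  if (typeof Array.prototype.map === "function") {
    return arr.map(fn);                    // native path
  }
  var out = [];                            // graceful fallback
  for (var i = 0; i < arr.length; i++) out.push(fn(arr[i]));
  return out;
}

// New *syntax* can only be probed indirectly, e.g. via eval in a
// try/catch; writing it inline would abort parsing of the whole file
// in engines that lack it.
function supportsGetterSyntax() {
  try {
    eval("({ get x() { return 1; } })");
    return true;
  } catch (e) {
    return false;
  }
}
```

An engine without getter syntax throws a SyntaxError from the eval, so the probe returns false instead of killing the script at parse time.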

- Maciej

# Mark S. Miller (18 years ago)

[Maciej's latest message is a continuation of the following thread. I have removed email addresses from the correspondence below to avoid helping spammers. This conversation took place on e-TC39 -at- ecma-international.org ]

Forwarded conversation Subject: ES3.1 Proposal Working Draft

From: Kris Zyp Date: Feb 20, 2008 9:22 AM

I have been going through the ES3.1 working draft and making some comments and additions. However, one of the issues is rather broad. From what I understand (and hope), ES3.1 is supposed to be forward compatible with ES4, but it appears there are a large number of violations of this principle in the ES3.1 Proposal Working Draft. These include:

  • Secure Eval - (I added more specific comments on the page)
  • Targeted additions - (I added more specific comments on the page)
  • typeOf - Doesn't exist in ES4
  • richer reflection - Doesn't exist in ES4 (that I am aware of)
  • arguments as Array - Simply needs to be corrected per Lars's comments.
  • Deprecation section - These don't cite any ES4 deprecations, so I am not sure if these are really deprecated in ES4 (which is a requirement for deprecation in ES3.1)

I also added Getters and Setters and Destructuring Assignment sections as we believe these are very high priority additions for ES3.1. Thanks, Kris


From: Mark S. Miller Date: Feb 20, 2008 9:30 AM

I'm confused. Doesn't this violate the "no new syntax in ES3.1" design rule?

-- Cheers, --MarkM

From: Kris Zyp Date: Feb 20, 2008 9:36 AM

Indeed you are right. Is this really a core value of 3.1 to be preserved? It is quite possible for non-syntactical changes to have a larger impact than syntactical changes. Syntax seems like a very arbitrary rule for deciding on inclusion of additions. Kris

From: Mark S. Miller Date: Feb 20, 2008 10:18 AM

It is arbitrary. I would be happy to replace it with another non- or less-arbitrary rule if we can quickly agree on one. But if we rely only on our taste for minimalism, then how do we guard against the following dynamic that I call "The Tragedy of the Common Lisp":

Each of us has some pet addition we think would be a great addition to the language. "const", "decimal", getters and setters, destructuring assignment -- all these have come up just this morning! Each of these makes the language larger and more complex, imposing a general diffuse cost on everyone. When arguing about any one of these by itself in the absence of a rule, each of us individually cares more about seeing our own pet feature adopted than about preventing someone else's pet feature from being adopted. This is one of the reasons design by committee often goes bad.

Only by adopting some rule do we raise the stakes. We all know that agreeing to a feature that violates the rule would set a bad precedent and open the floodgates of featuritis.

Language design should be more like writing a sonnet and less like writing a phone book.

Again, shouldn't we be having this discussion on es4-discuss?

-- Cheers, --MarkM

From: Kris Zyp Date: Feb 20, 2008 10:35 AM

I understand, although I think it is difficult to come up with a reasonable, succinct rule that can be applied effectively, when each feature addition is really an ROI decision. We could probably come up with a myriad of useless features that satisfy any given rule. We may just need to be very stingy in our ROI evaluation.

On the other hand, one rule that I think may be very valuable is limiting additions to those with prior implementation. Prior implementation precedence does provide a very finite, limited set of possible features to choose from, and these features are of extra value since they improve cross-browser interoperability and therefore have accelerated adoption opportunity. They have also benefited from real-world testing. I am not insisting on a strict following of this rule, but I do think it could be a very useful rule and would definitely keep the features to a small set. There are also a number of features in the current working draft that could be omitted on the basis of this rule (typeOf, reflection, tail recursion, etc). I will let someone else make that call -- definitely one for a much larger mailing list :).

Kris


From: Maciej Stachowiak Date: Feb 20, 2008 10:41 AM

"No new syntax" actually does create a meaningful benefit, which is the ability to do graceful degradation in browsers that don't support the new language. New built-in types, properties and methods can be tested for from script before using them, but new syntax can't, since its presence alone will cause a syntax error at parse time. So it is less arbitrary than some other possible rules.

- Maciej


From: Mark S. Miller Date: Feb 20, 2008 10:54 AM

Ok, would anyone here mind if I forward the conversation so far to es4-discuss?

-- Cheers, --MarkM

From: Kris Zyp Date: Feb 20, 2008 10:54 AM

True in the immediate future, but there will be a reverse effect down the road. At some point (hopefully) ES3.1 and higher will be pervasive enough that devs just use it without detection, and only some browsers will support ES4. At that point, having syntax already in ES3.1 means the syntax can be used, and the methods/properties that we deferred to ES4 can be detected and used optionally. At that point in the future, syntax that we don't include won't be useful for the reason you mention, but properties/methods we don't include can be detected and optionally used. Kris


From: Maciej Stachowiak Date: Feb 20, 2008 11:53 AM

I am all for moving the discussion to es4-discuss.

- Maciej

From: Maciej Stachowiak Date: Feb 20, 2008 11:54 AM

For information of those who might not be subscribed there yet, I'll reply on es4-discuss.

# Adam Peller (18 years ago)

Each of us has some pet addition we think would be a great addition to the language. "const", "decimal", getters and setters, destructuring assignment -- all these have come up just this morning! Each of these makes the language larger and more complex, imposing a general diffuse cost on everyone.

Mark, as I recall, the discussion at the March meeting in Newton involved implementing decimal arithmetic in ES3.1 to replace the floating point implementation in ES3, thus "no new syntax". Yes, this would have unexpected results for those who actually have code logic which expects a value of "46.19" pounds, in Mike's example (see Numbers thread) but the benefits here seemed to far outweigh this discrepancy. I can't speak to the technical level of detail that Mike can, but at a high level it's seen as a bug by the vast majority of users, and for all practical purposes, that's what it is.

# Mark S. Miller (18 years ago)

2008/2/20 Adam Peller <apeller at us.ibm.com>:

Mark, as I recall, the discussion at the March meeting in Newton involved implementing decimal arithmetic in ES3.1 to replace the floating point implementation in ES3, thus "no new syntax". Yes, this would have unexpected results for those who actually have code logic which expects a value of "46.19" pounds, in Mike's example (see Numbers thread) but the benefits here seemed to far outweigh this discrepancy. I can't speak to the technical level of detail that Mike can, but at a high level it's seen as a bug by the vast majority of users, and for all practical purposes, that's what it is.

I was not at the March meeting. If decimal is sufficiently compatible with binary double precision floating point to keep old programs working, I might be willing to consider replacing double with decimal. How compatible are they? What numbers are representable as double but not decimal? Does decimal have NaN, Infinity, -Infinity, and -0.0? (Btw, I never liked -0.0. And I especially dislike ES3's behavior that 0.0 === -0.0. However, I would argue against making incompatible changes to this.)

The other design constraint is that ES4 be a compatible superset of ES3.1. In light of your message, I checked the ES4 wiki pages. doku.php?id=proposals:numbers&s=decimal

seems to imply that use of ES3 syntax for numbers is to be interpreted (approximately) according to ES3 rules. Are the ES4 folks willing to replace binary floating point with decimal and drop the decimal literal syntax?

If not, then I don't see how we could allow decimal into ES3.1.

# Maciej Stachowiak (18 years ago)

On Feb 20, 2008, at 1:00 PM, Adam Peller wrote:

Each of us has some pet addition we think would be a great addition to the language. "const", "decimal", getters and setters, destructuring assignment -- all these have come up just this morning! Each of these makes the language larger and more complex, imposing a general diffuse cost on everyone.

Mark, as I recall, the discussion at the March meeting in Newton
involved implementing decimal arithmetic in ES3.1 to replace the
floating point implementation in ES3, thus "no new syntax". Yes,
this would have unexpected results for those who actually have code
logic which expects a value of "46.19" pounds, in Mike's example
(see Numbers thread) but the benefits here seemed to far outweigh
this discrepancy. I can't speak to the technical level of detail
that Mike can, but at a high level it's seen as a bug by the vast
majority of users, and for all practical purposes, that's what it is.

Besides compatibility issues, this would be a significant performance
regression for math-heavy code. I would consider this a showstopper to
implementing such a change.

I also agree with Mark's comment that arbitrary-precision integers and
arbitrary-precision rationals seem like more generally useful types
than decimal floating point, if any numeric types are to be added, but
that seems like an issue more for ES4 than 3.1.

# Brendan Eich (18 years ago)

On Feb 20, 2008, at 1:00 PM, Adam Peller wrote:

Each of us has some pet addition we think would be a great addition to the language. "const", "decimal", getters and setters, destructuring assignment -- all these have come up just this morning! Each of these makes the language larger and more complex, imposing a general diffuse cost on everyone.

Mark, as I recall, the discussion at the March meeting in Newton
involved implementing decimal arithmetic in ES3.1 to replace the
floating point implementation in ES3, thus "no new syntax". Yes,
this would have unexpected results for those who actually have code
logic which expects a value of "46.19" pounds, in Mike's example
(see Numbers thread).

Hi Adam, Mike:

I'm not sure what was to blame for that Manchester car-park example --
IEEE double can multiply 4.2 by 11 and round properly:

js> 11*4.2
46.2
js> (11*4.2).toFixed(2)
46.20

Cc'ing Mike in case he knows the full story (it's a fun example and
useful real-world evidence of something, I bet).

but the benefits here seemed to far outweigh this discrepancy.

No, sorry -- too much real-world code, not to mention synthetic
benchmarks, depend on hardware-implemented floating point. There are
also enough numerical and semi-numerical JS apps around that count on
IEEE-754 double precision quirks that we cannot change the number
type without opt-in versioning.
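A standard illustration of the quirks in question (not Mike's original car-park figures, which aren't reproduced in the thread): binary doubles cannot represent most decimal fractions exactly, and display rounding usually hides the error until code compares or accumulates the values.

```javascript
// 11*4.2 happens to round to the double that prints as 46.2, but the
// classic case does not:
var sum = 0.1 + 0.2;   // 0.30000000000000004
sum === 0.3;           // false -- the root of the much-duplicated bug
sum.toFixed(2);        // "0.30" -- rounding for display hides the error
```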

Anyway, the idea of ES3.1 as I understand it (and at least Mark
Miller agrees) is not to promulgate a new, incompatible version with
a distinct MIME type (including version= parameter). ES3.1, everyone
involved at the last Ecma TC39 meeting seemed to agree, could be done
as an Ecma TR (Technical Report). I'm against it becoming the next
(4th) edition and making its way to ISO, especially if it has few
changes from ES3, and I believe others involved in TC39 are also
opposed to that.

Being vague about 3.1 possibly including ES4 features is a sure way
to delay both any useful 3.1 TR and the full ES4, which is now
entering a multiple-interoperating-implementations phase of
standardization. If we have to keep monitoring and arguing about
what's in 3.1 that might not be exactly the same in 4, to preserve
the 3.1 < 4 subset relation, we all lose (by my definition of "lose").

I can't speak to the technical level of detail that Mike can, but
at a high level it's seen as a bug by the vast majority of users,
and for all practical purposes, that's what it is.

Yes, I keep reciting its status as the most duplicated JavaScript
Engine bug on file at bugzilla.mozilla.org (to wit, https://bugzilla.mozilla.org/show_bug.cgi?id=5856). But that does not mean it
can be fixed with an incompatible change. The thinking for ES4 was to
support a 'use decimal' pragma, for block- or wider-scoped explicit
"opt in". This proposal,

proposals:decimal

with this discussion page

discussion:decimal

stood for a while, but was superseded by

bugs.ecmascript.org/ticket/222

And I believe there was an email conversation or two in which Mike
was included. At this point, I would find it helpful to summarize the
thinking on usable alternatives for decimal in ES4, and try to reach a
consensus on this list. But again, I do not believe we can change the
number type incompatibly -- that ship sailed in 1995. :-(

# Brendan Eich (18 years ago)

On Feb 20, 2008, at 1:25 PM, Mark S. Miller wrote:

What numbers are representable as double but not decimal?

Mike Cowlishaw's page at www2.hursley.ibm.com/decimal is
extremely informative, especially www2.hursley.ibm.com/decimal/decifaq.html; see also the link to grouper.ieee.org/groups/754.

See www2.hursley.ibm.com/decimal/decifaq6.html#bindigits and
www2.hursley.ibm.com/decimal/decifaq6.html#binapprox for
double to decimal conversion answers. As doku.php?id=proposals:decimal summarizes:

  • A very small amount of precision which may be present in a double-precision binary fp number is lost during a double → decimal promotion, but:
    ◦ No precision is lost in an integral → decimal promotion.
    ◦ Reading a numeric lexeme as a decimal preserves more precision than reading it as a double.

Does decimal have NaN, Infinity, -Infinity,

Yes -- from the ES4 RI:

-1.0m/0.0m   -Infinity
1.0m/0.0m    Infinity
0.0m/0.0m    NaN

and -0.0?

Yes:

1.0m/-0.0m -Infinity

(Btw, I never liked -0.0. And I especially dislike ES3's behavior that 0.0 === -0.0. However, I would argue against making incompatible changes to this.)

Guy Steele edited Edition 1 of ECMA-262 and argued for both of these
parts of the standard, based on precedent in related programming
languages, as well as advice in IEEE-754 itself. He pointed out
something important to numerical programmers: you can walk around the
four quadrants using signed zeros with atan2:

js> Math.atan2(0,0)
0
js> Math.atan2(-0,0)
0
js> Math.atan2(-0,-0)
-3.141592653589793
js> Math.atan2(0,-0)
3.141592653589793
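As an aside on the signed zero Mark dislikes: ES3's === cannot tell the two zeros apart, but script can still observe the sign, because dividing by zero yields an infinity whose sign follows the zero's. A minimal sketch:

```javascript
// === erases the distinction, division exposes it:
var negZero = -0;
0 === negZero;    // true -- the ES3 behavior Mark objects to
1 / 0;            // Infinity
1 / negZero;      // -Infinity
```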

# Mike Cowlishaw (18 years ago)

Maciej wrote on Wed Feb 20 14:28:33 PST 2008:

Besides compatibility issues, this would be a significant performance regression for math-heavy code. I would consider this a showstopper to implementing such a change.

I'm inclined to agree that it is (unfortunately) probably not a good idea to simply replace the default binary arithmetic with decimal128 -- even though this would give better precision for math applications, as well as all the other benefits of decimal arithmetic.

But I don't buy the performance argument -- decimal math packages are respectably fast nowadays. See, for example, the measurements at www2.hursley.ibm.com/decimal/dnperf.html -- a decDouble add is a couple of hundred cycles in software. That's roughly the same speed on current processors as the hardware binary floating-point available when ECMAScript was first written.

In today's (unpipelined) decimal FP hardware it is much faster than those software measurements, of course, and there's no reason why future implementations should not be within 10%-15% of binary FP.

I also agree with Mark's comment that arbitrary-precision integers and arbitrary-precision rationals seem like more generally useful types than decimal floating point, if any numeric types are to be added, but that seems like an issue more for ES4 than 3.1.

I really do not understand that comment. Almost every numerate human being on the planet uses decimal arithmetic every day; very few need or use arbitrary-precision integers or rationals of more than a few (decimal) digits. And almost every commercial website and server deals with currency, prices, and measurements.

It's true that many websites use encryption -- research for which uses BigNums extensively -- but websites don't need or use a general-purpose integer package for that.

Mike


Mike Cowlishaw, IBM Fellow IBM UK (MP8), PO Box 31, Birmingham Road, Warwick, CV34 5JL mailto:mfc at uk.ibm.com -- www2.hursley.ibm.com/mfcsumm.html

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

# Maciej Stachowiak (18 years ago)

On Feb 21, 2008, at 2:46 AM, Mike Cowlishaw wrote:

Maciej wrote on Wed Feb 20 14:28:33 PST 2008:

Besides compatibility issues, this would be a significant performance regression for math-heavy code. I would consider this a showstopper
to implementing such a change.

I'm inclined to agree that it is (unfortunately) probably not a good
idea to simply replace the default binary arithmetic with decimal128 --
even though this would give better precision for math applications, as
well as all the other benefits of decimal arithmetic.

But I don't buy the performance argument -- decimal math packages are respectably fast nowadays. See, for example, the measurements at www2.hursley.ibm.com/decimal/dnperf.html -- a decDouble add
is a couple of hundred cycles in software.

That benchmark isn't very useful because it doesn't compare to
hardware binary floating point, and also because they are
microbenchmarks so it's hard to tell how much impact there would be on
a real app. However, hundreds of cycles even for simple operations
like add sounds to me like it would be hundreds of times slower than
hardware floating point.

That's roughly the same speed on current processors as the hardware binary floating-point available
when ECMAScript was first written.

That's not really a relevant comparison. When ECMAScript was first
written, people weren't using it to write complex web apps. Nowadays
it would be unacceptable even for a high-end phone to deliver
ECMAScript performance as slow as that of consumer desktops from that era.

In today's (unpipelined) decimal FP hardware it is much faster than
those software measurements, of course, and there's no reason why future implementations should not be within 10%-15% of binary FP.

I do all my browsing on a MacBook Pro and an iPhone. As far as I know,
neither of these has any kind of decimal FP hardware, nor do I expect
their successors to support it any time soon (though I don't have
inside knowledge on this). These systems are towards the high end of
what is available to consumers.

I also agree with Mark's comment that arbitrary-precision integers
and arbitrary-precision rationals seem like more generally useful types than decimal floating point, if any numeric types are to be added,
but that seems like an issue more for ES4 than 3.1.

I really do not understand that comment. Almost every numerate human being on the planet uses decimal arithmetic every day; very few need
or use arbitrary-precision integers or rationals of more than a few
(decimal) digits. And almost every commercial website and server deals with currency, prices, and measurements.

I don't think currency calculations are the only interesting kind of
math. So if we need to add a software-implemented more accurate math
type, why not go all the way? At least that is my first impression.

This is not directly related to my main point, which is about
performance and which I think still stands.

- Maciej

# Mike Cowlishaw (18 years ago)

Maciej Stachowiak <mjs at apple.com> wrote

# Nathan de Vries (18 years ago)

On Mon, 2008-02-25 at 10:15 +0000, Mike Cowlishaw wrote:

Currency calculations are not very interesting at all :-). But (outside HPC and specialized processors such as graphics cards) they are by far the most common.

Surely the Adobe fellows have something to say about this :). Flash tweening and 3D Flash libraries like Papervision3D make extensive use of floating point math. In an environment where tweaks like multiplying fixed points instead of dividing yields substantial performance increases, the "currency calculations" you speak of fade into insignificance.

Cheers,

-- Nathan de Vries

# Maciej Stachowiak (18 years ago)

On Feb 25, 2008, at 2:15 AM, Mike Cowlishaw wrote:

Pentium basic arithmetic operations take from 1 cycle (pipelined add, rarely achieved in practice) up to 39 cycles (divide). The figures at the URL above for decimal FP software are worst-cases (for example, for Add, a full-length subtraction that requires pre-alignment and post-rounding). A simple x=x+1 is much faster.

Then I will ignore the details of the chart and assume "lots slower"
unless you have better data.

That's roughly the same speed on current processors as the hardware binary floating-point available when ECMAScript was first written.

That's not really a relevant comparison. When ECMAScript was first written, people weren't using it to write complex web apps. Nowadays it would be unacceptable even for a high-end phone to deliver ECMAScript performance as slow as that of consumer desktops from that era.

That's a fair comment (phones). However, the path length for
rendering (say) a web page is huge compared to the time spent in arithmetic.
(I did a search for 'math-heavy' examples of several programming languages 3 years ago and didn't find any ECMAScript examples.)

There can be many factors affecting page loading and web application
performance. There are many cases where JavaScript execution time is a
significant component.

But if arithmetic performance really is an issue, one could provide
an option or attribute to request binary arithmetic, perhaps.

No, shipping a huge performance regression with an opt-out switch is
not an acceptable option.

In today's (unpipelined) decimal FP hardware it is much faster than those software measurements, of course, and there's no reason why future implementations should not be within 10%-15% of binary FP.

I do all my browsing on a MacBook Pro and an iPhone. As far as I
know, neither of these has any kind of decimal FP hardware, nor do I expect their successors to support it any time soon (though I don't have inside knowledge on this). These systems are towards the high end of what is available to consumers.

Intel are studying decimal FP hardware, but have not announced any
plans. Of course, PowerPC (as of POWER6) has a decimal FPU...

Apple completed the Intel switch some time ago; since then PowerPC has
not really been relevant to the devices on which people browse the
web. My point remains: decimal FP hardware is not relevant for any
current performance evaluations and will not be for some time.

This is not directly related to my main point, which is about performance and which I think still stands.

In summary: software floating point (binary or decimal) is between
one and two orders of magnitude slower than hardware for individual
instructions. If (say) 5% of the instructions in an application are floating-point arithmetic (a high estimate for applications such as parsers and
browsers, I suspect), that means the application would be about twice as slow
using software FP arithmetic. That's not really a 'showstopper' (but might justify a 'do it the old way' switch).

If you don't think imposing a 2x slowdown on web apps is a showstopper
then clearly we have very different views on performance. (Note, using
your high estimate of two orders of magnitude it would be a 6x
slowdown if 5% of an application's time [not instructions] is spent in
floating point arithmetic.)

From my point of view, this would be a massive regression and
conclusively rules out the idea of replacing binary floating point
with decimal floating point in ECMAScript.

- Maciej

# Mike Cowlishaw (18 years ago)

If you don't think imposing a 2x slowdown on web apps is a showstopper then clearly we have very different views on performance. (Note, using your high estimate of two orders of magnitude it would be a 6x slowdown if 5% of an application's time [not instructions] is spent in floating point arithmetic.)

And if it were 1% in FP arithmetic and one order of magnitude, it would be a 1.1x (10%) slowdown.
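The weighting both estimates rely on can be made explicit: if a fraction f of runtime is arithmetic that becomes k times slower, the whole application slows by (1 - f) + f·k. A small sketch (the function name is ours, not from the thread):

```javascript
// Amdahl-style estimate of whole-application slowdown when a fraction
// f of runtime becomes k times slower.
function slowdown(f, k) {
  return (1 - f) + f * k;
}

slowdown(0.05, 100);  // ~5.95 -- Maciej's "6x" figure (5% time, 100x)
slowdown(0.01, 10);   // ~1.09 -- the "1.1x (10%)" figure above
```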

From my point of view, this would be a massive regression and conclusively rules out the idea of replacing binary floating point with decimal floating point in ECMAScript.

I too would like to see high performance and correct results in calculations. But I would rather have correct results with slower performance than fast but incorrect results. As a result of the latter, many apps are forced to go back to the server for 'business logic', which has a disastrous effect on response times for the user (but at least, on the server, apps now have the option of decimal FP hardware).

But as you are more concerned with client-side performance, I think, can you show us a real script that spends anything like 5% of its time in floating-point arithmetic? I failed to create an artificial one that used as much as that (but that was a few years ago; today's ECMAScript engines might have different characteristics).

Mike
