types

# Peter Michaux (17 years ago)

Reading the recent news about ES4 removing more of its old features and morphing into Harmony, it seems that the related ideas of classes, types and type checking are the surviving major new features. Personally I've never quite understood why classes, types and type-checking have been such a fundamental part of the proposal. I've kept my fingers still about this for several reasons (mostly: does my opinion even matter?) but all of this type business seems more trouble than it is worth to me (i.e., a daily JavaScript programmer). Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago? Is the community really asking for them now, with the surge of functional programming?


Looking in the new wiki page on types

harmony:types

I have a question about the use in browsers where multiple windows/frames are involved. If a Date object, "d", is created in one window and passed to a second window, will "Date.contains(d)" in that second window be true or false? I would want it to be true in this case. The same question applies to non-built-in classes where the same class name may be used in two windows with different class definitions. I would want this to be false. Because of the second example, it seems the result would necessarily be false. This makes the test somewhat useless and goes back to duck typing, where the code using an object needs to inspect the object for the methods it will call. The problem with duck typing arises when different classes provide identically named methods, which is the reason that in ES3 one way to test if an object is an array is to look for an oddly named method like Array.prototype.splice. I suppose what I'm asking is, given ECMAScript is mostly in the browser and multiple windows/frames are part of that environment, is this "contains" type check really good enough?
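The ES3-era workaround alluded to here can be sketched as a duck-typing predicate. A minimal sketch; `isArrayLike` is an illustrative name, not any particular library's function:

```javascript
// Duck-typing check: treat any object carrying an oddly named Array
// method as "array-like". This is exactly the fragile test described
// above -- any other class can define its own splice and pass.
function isArrayLike(obj) {
  return obj != null &&
         typeof obj.splice === 'function' &&
         typeof obj.length === 'number';
}

var realArray = [1, 2, 3];
var impostor = { length: 0, splice: function () {} };

console.log(isArrayLike(realArray)); // true
console.log(isArrayLike(impostor));  // also true -- the weakness
```

The second result is the point: the shape test cannot distinguish a genuine Array from any object that happens to expose the same method names.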

Peter

# Neil Mix (17 years ago)

On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago? Is the community really asking for them now, with the surge of functional programming?

One hacker's opinion: Yes, absolutely. The big pain point for me has
been scaling an app out from a team of 1 with <10K LOC to a team of
many with 100K+ LOC. Even with an incredibly rigorous testing
infrastructure (comparable in code size to the product itself),
development on the application just hasn't scaled well. Many JS
hackers don't build apps at a large scale, but JS-based apps will only
get bigger in scope over time, so this pain point will get more acute.

It's all about time spent. A large app is impossible for a new-hire
programmer to comprehend end-to-end. We mitigate this by using unit
tests as a safety barrier, but over time our tests have grown to take
about 10 minutes or so to run. That's way too long to wait for a
pesky misspelling bug. To the extent that there's static type
checking available to catch errors early on, it's a huge productivity
gain. (Although not necessarily one that's easily noticed -- one tends
not to see the problems that grow and kill slowly.)

Your statement above implies that types and functional programming are
mutually exclusive. Are they? I don't see it that way.

# Brendan Eich (17 years ago)

On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

Reading the recent news about ES4 removing more of its old features and morphing into Harmony, it seems that the related ideas of classes, types and type checking are the surviving major new features.

It would be more accurate to say that members of the committee see
value in some or all of:

  1. Classes as higher-integrity factories than constructor functions
     that use closures for private variables;
     1(a). possibly related by single or multiple inheritance, but in
     the simplest case with zero inheritance;
     1(b). with this-bound (self-bound) methods, non-extensible
     instances, and other possible differences from function
     constructors.

  2. Like patterns, which are syntactic sugar for "shape tests",
    assertions about the structural type at the present moment of a
    given object.

  3. Structural types, related by record width subtyping.

  4. Optional type annotations, which provide both
     4(a). a write barrier on a variable to prevent it from referring
     to an object that does not satisfy the type's constraints;
     4(b). a guarantee that the object denoted by the annotated
     variable will not mutate away from the type's constraints.
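The contrast in point 1 can be illustrated with the closure-based factory pattern ES3 programmers use today (a minimal sketch; the names are illustrative):

```javascript
// ES3-style "higher-integrity" factory: balance is captured in a
// closure, so it is genuinely private -- but the instance is still an
// ordinary extensible object, and the methods are re-created on every
// call to makeAccount. Classes per point 1 would provide the same
// integrity declaratively.
function makeAccount(initial) {
  var balance = initial; // private via closure
  return {
    deposit: function (n) { balance += n; return balance; },
    getBalance: function () { return balance; }
  };
}

var acct = makeAccount(100);
acct.deposit(50);
console.log(acct.getBalance()); // 150
console.log(acct.balance);      // undefined -- not reachable from outside
```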

Opinions vary. I think it's fair to say some favor 1 (think
Smalltalk) and possibly 3 (structural types do not require shared
type definitions that are related by name; this can be winning in
large systems involving multiple decoupled programmers), but are
skeptical of 4.

Some (overlapping people here) point out the benefits of nominal
types for security: nominal type annotations can act as auditors or
guards that can't be bypassed from outside the lexical scope in which
the type name is hidden; private nominal types bound the set of
implementations that have to be inspected; generative nominal types
can act as keys/nonces; nominal types make feasible information flow
analyses that otherwise can't bound effects along all flows, so must
leave the program counter tainted.

Most seem in favor of like patterns, from what I heard in Oslo. These
are not types anyway, but convenient syntax for the kinds of shape tests that are open-coded (sometimes with coverage gaps, sometimes
skipped altogether) in modern Ajax libraries (isArrayLike, etc. --
you call this "duck typing" below).

Like patterns and structural types could be thought of as JSON
schemas. If you only need a spot-check that an object tree matches a
structural pattern, use a like test or annotation. If you need the
kind of monotonic guarantees listed in 4(a-b), then use structural
types.
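The "spot-check against a structural pattern" idea can be approximated today with a hand-written shape test (a sketch only; `looksLike` is a hypothetical helper, not proposed syntax):

```javascript
// Hand-rolled "like" test: does obj structurally match the pattern at
// this moment? There is no guarantee it stays that way afterwards --
// that monotonic guarantee is what 4(a-b) adds and this cannot.
function looksLike(obj, pattern) {
  for (var key in pattern) {
    if (typeof obj[key] !== pattern[key]) return false;
  }
  return true;
}

var point = { x: 1, y: 2, label: 'origin' };
console.log(looksLike(point, { x: 'number', y: 'number' })); // true

point.x = 'oops'; // mutation after the spot-check
console.log(looksLike(point, { x: 'number', y: 'number' })); // false
```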

Personally I've never quite understood why classes, types and type-checking have been such a fundamental part of the proposal. I've kept my fingers still about this for several reasons (mostly: does my opinion even matter?) but all of this type business seems more trouble than it is worth to me (i.e., a daily JavaScript programmer).

How large are the JS programs you write? What other programming
languages do you use, and at what scale?

JS programmers write type checks and code with latent type systems in
mind already. You do it. Doing this by hand, and often implicitly,
can be pleasant at smaller scale, with no need for shorthands such as
like patterns, or always/everywhere guarantees such as type
annotations provide. Closures and constructors are enough for (1).

Scaling up across people, space, and time (even with only one
programmer, time is a problem; one forgets latent types that the
program relies on) leads many people to want more automation. YMMV.

Automation of type checks or shape tests does not mean static typing.
Runtime checks and more advanced systems such as PLT-Scheme's
Contracts can go a long way. But writing all the checks by hand is a
tedious waste of programmer time, and it's costly enough that
programmers tend not to do it. This seems good when prototyping, but
it backfires at scale.
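A hand-rolled runtime check of the kind being described, loosely inspired by (not an implementation of) PLT Scheme's contracts, might look like this; all names here are hypothetical:

```javascript
// Wrap a function with argument/result predicates so the checks live
// in one place, instead of being open-coded and forgotten at every
// call site -- the "tedious waste" this automates away.
function contract(pre, post, fn) {
  return function (x) {
    if (!pre(x)) throw new TypeError('precondition failed');
    var result = fn(x);
    if (!post(result)) throw new TypeError('postcondition failed');
    return result;
  };
}

var isNum = function (v) { return typeof v === 'number'; };
var double = contract(isNum, isNum, function (x) { return x * 2; });

console.log(double(21)); // 42
// double('x') would throw TypeError('precondition failed')
```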

Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago?

Some are, you've heard from Neil already.

But you're dismissing the other things than classes and types that I
mentioned, which could be grouped into two categories:

  1. Better modularity and integrity through name hiding. This includes
    the generative Name object idea, but also let as the new var for
    block scope (const and function in block too).

  2. Conveniences: destructuring, spread-operator/rest-params,
    iterators/generators, expression closures.

Is the community really asking for them now, with the surge of functional programming?

False dichotomy alert. Please look into functional programming more,
including SML and OCaml; also TypedScheme and prior optionally-typed
Scheme systems.

Not only is "functional programming" not antithetical to "type
system" (even static type system), it covers a range of languages
from Lisp to Dylan to JS to Scala. You use it as if it means one
thing, "not typed", but that's false on both counts ("one thing" and
"not typed").

Looking in the new wiki page on types

harmony:types

First, let me say that this is just a sketch, not yet reviewed and
not ready for prime time. I think Cormac would be happy to answer
questions about it, although it may be better to wait a bit before
diving in.

# Neil Mix (17 years ago)

(n.b. Peter responded to me privately. I'm assuming he meant to cc
the list in his reply to me. Accordingly I have not removed any part
of his response, and replied inline below.)

On Aug 13, 2008, at 9:57 PM, Peter Michaux wrote:

On Wed, Aug 13, 2008 at 8:45 PM, Neil Mix <nmix at pandora.com> wrote:

On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago? Is the community
really asking for them now, with the surge of functional programming?

One hacker's opinion: Yes, absolutely. The big pain point for me
has been scaling an app out from a team of 1 with <10K LOC to a team of many
with 100K+ LOC. Even with an incredibly rigorous testing infrastructure (comparable in code size to the product itself), development on the application just hasn't scaled well. Many JS hackers don't build
apps at a large scale, but JS-based apps will only get bigger in scope over
time, so this pain point will get more acute.

I do deal with a large code base (somewhere around 100K of JavaScript) but never 100K lines in a single web page. Even for the heavy pages the call stacks are usually quite shallow. I actually have never found it very difficult to debug a type error. I've found my other mistakes far more painful. :-)

Pandora uses OpenLaszlo, ES3 compiled to Flash, thus our application
is substantially bigger than what you'd find on a typical page. What
actually runs in-page isn't 100K LOC, but when you add in offline
supporting functions such as test infrastructure (i.e. end-to-end
pieces that need to have knowledge of each other's parts) we're on the
order of 100K.

I agree that debugging such errors isn't difficult. But my
observations are about productivity rather than difficulty. The time
consumed in debugging such errors is costly, if not mind-busting.
Especially relative to static type checking.

I do get the idea that a consumer of an API is helped with some good error messages. Sometimes I have public API functions of a library that start something like

```js
function (a, b) {
  /* begin debug */
  isInt(a); // throws
  isStr(b); // throws
  /* end debug */
  ...
```

The debugging code is stripped when deploying for production for better download and runtime performance.

It's all about time spent. A large app is impossible for a new-hire programmer to comprehend end-to-end. We mitigate this by using
unit tests as a safety barrier, but over time our tests have grown to take
about 10 minutes or so to run. That's way too long to wait for a pesky
misspelling bug.

Does type checking catch misspelling bugs?

To the extent that there's static type checking available to catch errors early on, it's a huge productivity gain.

Is Harmony type checking even going to be static?

These are great questions that I don't know the answer to. Clearly
this needs time to sort out in committee. But I'll weigh in with an
early opinion that the option to do static type checking in JavaScript
would, IMO, be a very good thing.

[snip]

Your statement above implies that types and functional programming
are mutually exclusive. Are they? I don't see it that way.

No they aren't and I didn't mean to imply that. I intended to compare the interest in class-based OOP that was all the rage many years ago when ES4 first started with the present, where there is more emphasis on a functional style which can, and often does, still have types.

I see. To which I say, why not both? First class functions may be
all the rage with the kids these days, but I don't take that to mean
OO is dead in the water. It's just not new and exciting -- you simply
depend on it without thinking about it. ;)

I've been deep into Objective-C lately. I sorely miss the closures,
but the "bendable" static typing sure works out elegantly. I've
already gotten through some pretty intense refactorings that "just
worked" once they compiled without warning. That never happens to
me in JavaScript. Do that a few times, suddenly I'm gaining
confidence in my ability to refactor, and before I know it the code
doesn't feel so big and complicated anymore. I'm not fearing the dark
corners. It's a nice feeling.

# Peter Michaux (17 years ago)

On Wed, Aug 13, 2008 at 10:48 PM, Brendan Eich <brendan at mozilla.org> wrote:

On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

Reading the recent news about ES4 removing more of its old features and morphing into Harmony, it seems that the related ideas of classes, types and type checking are the surviving major new features.

It would be more accurate to say that members of the committee see value in some or all of:

  1. Classes as higher-integrity factories than constructor functions that use closures for private variables;
     1(a). possibly related by single or multiple inheritance, but in the simplest case with zero inheritance;
     1(b). with this-bound (self-bound) methods, non-extensible instances, and other possible differences from function constructors.

  2. Like patterns, which are syntactic sugar for "shape tests", assertions about the structural type at the present moment of a given object.

  3. Structural types, related by record width subtyping.

  4. Optional type annotations, which provide both
     4(a). a write barrier on a variable to prevent it from referring to an object that does not satisfy the type's constraints;
     4(b). a guarantee that the object denoted by the annotated variable will not mutate away from the type's constraints.

[snip]

How large are the JS programs you write? What other programming languages do you use, and at what scale?

I'm not sure if these two questions are rhetorical or not but anyway...

A single web page can be somewhere up to 10K lines, which is a reasonably hefty page given JavaScript is not very verbose.

Just out of curiosity, are there many folks on the committee that write JavaScript applications? I ask that question with zero snark factor. I have the impression the majority of members are language implementers and that may be a completely unfounded impression.

I don't use any other language at a larger scale partly because JavaScript is the language in the browser and that is where I currently work. Some spare time is spent in Lisp/Scheme.

I often wonder if it is necessary that ECMAScript is a language which "covers all bases" and is good at all scales. We have plenty of languages that already try to do that (and don't.)

JS programmers write type checks and code with latent type systems in mind already. You do it. Doing this by hand, and often implicitly, can be pleasant at smaller scale, with no need for shorthands such as like patterns, or always/everywhere guarantees such as type annotations provide. Closures and constructors are enough for (1).

Scaling up across people, space, and time (even with only one programmer, time is a problem; one forgets latent types that the program relies on) leads many people to want more automation. YMMV.

Automation of type checks or shape tests does not mean static typing. Runtime checks and more advanced systems such as PLT-Scheme's Contracts can go a long way. But writing all the checks by hand is a tedious waste of programmer time, and it's costly enough that programmers tend not to do it.

I'm not arguing against type annotations with this comment but I don't see much difference in terms of remembering to have type checks between writing

function(a) {isInt(a); ...

and

function(a:int) { ...

I believe that a:int actually is intended to assign a type to the variable a, whereas isInt only checks the contents of a at the moment. I'd rather the second be sugar for the first, and the variable not be typed at all.

This seems good when prototyping, but it backfires at scale.

Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago?

Some are, you've heard from Neil already.

But you're dismissing the other things than classes and types that I mentioned, which could be grouped into two categories:

  1. Better modularity and integrity through name hiding. This includes the generative Name object idea, but also let as the new var for block scope (const and function in block too).

  2. Conveniences: destructuring, spread-operator/rest-params, iterators/generators, expression closures.

Yes these generally seem like good features.

Is the community really asking for them now, with the surge of functional programming?

False dichotomy alert.

[snip]

As I mentioned in my reply to Neil, my statements were misleading. Having types in a functional language is clearly possible and even common. I don't have anything against objects having types.

Your points 1-4 above and the differences of opinion and preferences seem to be a big obstacle to overcome in a committee by consensus.

Peter

# Brendan Eich (17 years ago)

On Aug 13, 2008, at 11:24 PM, Neil Mix wrote:

I do get the idea that a consumer of an API is helped with some good error messages. Sometimes I have public API functions of a library that start something like

```js
function (a, b) {
  /* begin debug */
  isInt(a); // throws
  isStr(b); // throws
  /* end debug */
  ...
```

The debugging code is stripped when deploying for production for better download and runtime performance.

Dating myself here: I had to deal with Fortran, long ago. Kernighan
and Plauger created a structured programming language based on
Fortran, which translated to Fortran: Ratfor ("Rational Fortran").

Not saying JS is Fortran, but really, unless it floats your boat, why
not improve the language directly so it can support, declaratively if
possible, things you have to do with hand tools and magic comments
and offline sanitizers and so on?

Those extra things may be necessary now, but not forever. And they're
not a good use of programmer time, even if the initial development
cost is sunk (see sunk cost fallacy).

Does type checking catch misspelling bugs?

It can, yes. Without some notion of an object type, var o = foo();
o.mispelled can't be caught (suppose foo returns {misspelled: 42}).
AS3 does this, even for unqualified references if you are within a
package. But see below.
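The silent failure mode being described is easy to reproduce in today's JS, where no object type exists for a checker to consult:

```javascript
// Without a static notion of o's type, a misspelled property read is
// not an error -- it silently yields undefined and fails somewhere
// later, which is what static checking (as in AS3) would catch early.
function foo() { return { misspelled: 42 }; }

var o = foo();
console.log(o.misspelled); // 42
console.log(o.mispelled);  // undefined, with no warning at all
```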

To the extent that there's static type checking available to catch errors early on, it's a huge productivity gain.

Is Harmony type checking even going to be static?

These are great questions that I don't know the answer to.

It's ironic that ES4 already dropped its optional static type
checker. The 'use strict' proposal that was circulated, which was the
subject of efforts at subsetting and unification with 3.1 folks, was
like Perl's "use strict" -- use good taste and sanity. It was not
AS3's static type + sanity checker.

I think the fear and hate around any whiff of static typing tainted
ES4 -- I'm glad to get past it, with Harmony if not with our earlier,
unnoticed cuts.

Clearly this needs time to sort out in committee. But I'll weigh in with an early opinion that the option to do static type checking in JavaScript would, IMO, be a very good thing.

Although ES4 dropped the optional static type checker, ActionScript
was going to keep its checker, hooked up via the toolchain IIRC.
There's nothing preventing such extensions, but I don't foresee the
committee specifying an optional checker. The runtime type system,
however, needs to be fully specified, and if it's done right, then
any static type checker will be a conservative partial evaluation of
some sort.

I've written that interoperation problems could result if different
browsers used different degrees of conservatism and partial
evaluation, but I don't expect static checker extensions in most
browsers, and I'm also not too worried about people testing in the
more lenient one where the stricter one has enough market share for
the interop failure to be a problem. The issue will be what works in
browsers with their dynamic type checking (whatever it ends up being).

Before we cut the type checker from ES4, we were wrestling with when
a browser might try to check. Code is loaded often; windows and
frames can reference one another; documents and script tags come and
go. Cormac suggested just checking when things seem stable or
quiescent, repeatedly, and issuing warnings. Not your "must go to
editor, fix stupid error, and recompile" experience, but arguably a
better one than Java heads have.

# Peter Michaux (17 years ago)

On Wed, Aug 13, 2008 at 11:24 PM, Neil Mix <nmix at pandora.com> wrote:

[snip]

Your statement above implies that types and functional programming are mutually exclusive. Are they? I don't see it that way.

No they aren't and I didn't mean to imply that. I intended to compare the interest in class-based OOP that was all the rage many years ago when ES4 first started with the present, where there is more emphasis on a functional style which can, and often does, still have types.

I see. To which I say, why not both?

First-class, lexical functions are a relatively easy thing upon which to agree and were added to the language before such a large process was put in place for evolving the language.

A committee agreeing by consensus on an OOP system seems like a nightmare that may not happen. The single-inheritance, multiple-inheritance, mixins, interfaces, public, private, protected mess seems intractable since no one seems to think any language does it all perfectly. A CLOS-type system might be best. Who really knows? It is a huge problem and perhaps requires dictator-style leadership (and that may not work either, as multiple implementations must get on board.)

First class functions may be all the rage with the kids these days, but I don't take that to mean OO is dead in the water. It's just not new and exciting -- you simply depend on it without thinking about it. ;)

I use OOP frequently in JavaScript but it isn't usually the style in ES3 or class-based like proposed ES4. It's the style I think is appropriate to the situation. That may draw a slight performance penalty but the code is certainly more robust than the ES3 style.

I've been deep into Objective-C lately. I sorely miss the closures, but the "bendable" static typing sure works out elegantly. I've already gotten through some pretty intense refactorings that "just worked" once they compiled without warning. That never happens to me in JavaScript. Do that a few times, suddenly I'm gaining confidence in my ability to refactor, and before I know it the code doesn't feel so big and complicated anymore. I'm not fearing the dark corners. It's a nice feeling.

Different strokes :)

Peter

# Brendan Eich (17 years ago)

On Aug 13, 2008, at 11:37 PM, Peter Michaux wrote:

How large are the JS programs you write? What other programming
languages do you use, and at what scale?

I'm not sure if these two questions are rhetorical or not but
anyway...

No, I was curious. It's helpful to get "scale data" samples from
different folks in the community.

Just out of curiosity, are there many folks on the committee that write JavaScript applications? I ask that question with zero snark factor. I have the impression the majority of members are language implementers and that may be a completely unfounded impression.

That's fair, and Doug made the point at the first meeting he
attended. We have included folks like Alex Russell and now Kris Zyp
from Dojo; also Adam Peller of IBM, a Dojo committer.

The committee members are not just language implementors or experts,
but representatives for their organizations. Apple is all about web
as platform. The Mozilla community's JS hackers let me know what they
think and set me straight often. Opera has a web apps / widgets team.
Etc.

But like many other programming language standards bodies, selection
for language implementors is inevitable. I've also invited experts
from academia, specifically Dave Herman and Cormac Flanagan, to help
with specification and provide balance. Without growing too large, I
favor a mix from several disciplines on the committee.

Majority rules is not the committee's way, rather consensus (general
agreement). The majority of JS hackers may not agree on much, from
what I can tell.

I'll say this: I've seen an effect with some JS hackers, possibly a
Stockholm Syndrome variant, where because the language was frozen for
nine years, and because one has to struggle to get things to work
cross-browser, necessity becomes a virtue, and change is viewed as a
threat. "Don't turn it into Java" or "I like billing hours working
around current deficiencies" are two kinds of comments (paraphrased
slightly, or somewhat pointedly in the second case) that I hear.

This effect seems entirely malign to me. I'm not talking about you
here, note well! My point is that the current JS users have been
abused: stuck with something that didn't grow at all after a
premature standardization cycle in the '90s. Many are worth hearing
from, via this email list, in blogs, at sites like ajaxian.com; but
some (sore abused and loving it ;-) are not the best guide on how to
get "unstuck".

I often wonder if it is necessary that ECMAScript is a language which "covers all bases" and is good at all scales. We have plenty of languages that already try to do that (and don't.)

No language should try to be everything.

My position is that JS needs to evolve to meet users' needs, not be
frozen or stunted, or have its users reeducated (in camps? that won't
work) to use it as is, or in subset form. If one has only Fortran,
then a subset if not a pre-processor (Ratfor!) may be a win. But if
JS can evolve in an open-standards fashion, then it should, and we
can put away some of the hand-tools and braces.

Hand-tools as a metaphor should not be taken to disrespect doing
things by hand or using simple tools where possible. Students should
definitely do that, and JS users do it well when prototyping or even
deploying small-ish codebases. Some do it well at large scale.
But it's not for everyone and all programs.

I'm not arguing against type annotations with this comment but I don't see much difference in terms of remembering to have type checks between writing

function(a) {isInt(a); ...

and

function(a:int) { ...

There are big differences. With the first, a might be reassigned
later in the function and isInt wouldn't know. Also, for a mutable
object type, 'a' might remain the same reference, but the object it
refers to might mutate to violate the type constraint. Mutation is
not your friend.
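The reassignment half of this point can be made concrete (using the thread's isInt as a sketch of an entry-time check):

```javascript
// An entry-time check validates a exactly once. Later reassignment is
// invisible to it, whereas a real a:int annotation would act as a
// write barrier and reject the assignment below.
function isInt(v) {
  if (typeof v !== 'number' || v % 1 !== 0) throw new TypeError('not an int');
}

function halve(a) {
  isInt(a);     // passes at entry
  a = a / 2;    // a is now 1.5 for odd input -- no longer an int,
  return a;     // and isInt never finds out
}

console.log(halve(3)); // 1.5
```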

You're thinking more of 'like', and indeed, we proposed

function (a) { if (!(a like T)) throw new TypeError; ... }

But that's obviously too much boilerplate, so the short-hand

function (a like T) { ... }

has been proposed too, in ES4 and (by me, to some favorable comment
depending on how T is defined -- specifically that it can be a value
expression, an object or array containing types for example -- no
distinct type expression grammar, binding rules, or evaluation model)
in Oslo.

I believe that a:int actually is intended to assign a type to the variable a and the isInt only checks the contents of a at the moment.

Right.

I'd rather the second be sugar for the former and the variable is not typed at all.

That's not a good meaning to assign to an 'a:int' annotation, based
on precedent in many languages. It's also not the right "minimal
syntax" default meaning when mutable object types are in play.

Another difference is the ability for some system (not necessarily
part of the standard) to do ahead-of-time type checking. You can't do
that efficiently or at all without the "always" and "monotonic"
meanings of type annotations and subtype relations, as compared to
'like' annotations (which are sugar for shape tests, more like your
isInt call on function entry).

Another difference, for the future but worth future-proofing against:
having the syntax gives us further hooks for contacts, with their
accurate blame arrows when runtime tests fail. This is separate from
the static checkability of a:int due to the always/monotonic meaning.

Your points 1-4 above and the differences of opinion and preferences seem to be a big obstacle to overcome in a committee by consensus.

We shall see. The committee now has the chance to reason together, to
understand one another's positions and state them fairly. This
happened in Oslo more than in any past meeting, IMHO. This is grounds
for greater hope than I had before Harmony.

It's possible we'll get just (1) and (2), but we discussed legitimate
use-cases for all of (1-4) in Oslo. Time preference for a successor
spec will help defer the less useful ideas, if we work together well.
Implementation before standardization, and user-testing, will help.

There's no silver bullet, but what's the alternative? Rejectionist
clinging to ES3-till-2018? Again the main thing for me is to let the
language grow in principled fashion to address pressing use-cases and
usability problems -- not to grow artificially to any particular
large size, but to grow beyond the artificial last-written-standard
state, plus or minus de-facto standards/bugs, that it has been stuck at.

# Brendan Eich (17 years ago)

On Aug 13, 2008, at 11:52 PM, Peter Michaux wrote:

I see. To which I say, why not both?

First-class, lexical functions are a relatively easy thing upon which to agree and were added to the language before such a large process was put in place for evolving the language.

Lambda can do anything, we know this. But that doesn't mean it should
be the only tool in the belt.

Macros won't be ready for the next major edition. The research on
well-defined, sound hygiene is just being done now. Then there's the
lovely C-based syntax. Macros may eventually make it into some future
edition (my gut says). Recasting sugar added before they arrive, once
they're there, should be possible.

In the mean time, we have extensions from the not-large-process
Mozilla JS1.x world. We have concrete proposals and sketches from a
few people on the committee acting more as champions than dictators,
but not subjecting everything to design-by-committee. And we have a
pretty high-quality committee.

Nevertheless, I think you are right on to warn:

A committee agreeing by consensus on an OOP system seems like a nightmare that may not happen. The single-inheritance, multiple-inheritance, mixins, interfaces, public, private, protected mess seems intractable since no one seems to think any language does it all perfectly. A CLOS-type system might be best. Who really knows? It is a huge problem and perhaps requires dictator-style leadership (and that may not work either, as multiple implementations must get on board.)

We won't get the benevolent dictator, and we won't try for the big
CLOS-style system (this latter point is part of Harmony). We can't
iterate the standard too quickly because of intrinsic overhead and
latency, and the tendency to enshrine mistakes and get painted into
corners by iterating too quickly (ignoring overhead/latency).

I'm a realist, but I've pushed dialectically against the tendency of
languages to stagnate and go into decline at high switching cost
(dying only years later, if ever). To the extent that JS1.x in
Mozilla, AS3 in Flash/Flex, and ES4 as a love-it-or-hate-it proposal
have moved the needle away from stagnation, great. Without Firefox
and Safari growing market share, we probably wouldn't even be talking
right now about JS futures. Who knew four years ago that any of this
was possible?

So cheer up! ;-) Things are looking brighter, in spite of the
inevitable standards-committee setting.

I use OOP frequently in JavaScript, but usually not in the plain ES3 style or the class-based style proposed for ES4. I use the style I think is appropriate to the situation. That may draw a slight performance penalty, but the code is certainly more robust than the ES3 style.

Sorry, curious again: what do you mean by ES3 style?

# Brendan Eich (17 years ago)

On Aug 14, 2008, at 12:33 AM, Brendan Eich wrote:

Another difference, for the future but worth future-proofing against: having the syntax gives us further hooks for contacts

Grr, when spellcheckers fail: "contracts".

# Nicolas Cannasse (17 years ago)

A committee agreeing by consensus on an OOP system seems like a nightmare, and may simply never happen.

Please allow me to jump in the discussion at this point.

I totally agree with what has been said here.

Getting many different people to agree on a possible ideal OOP language seems very unlikely. It will definitely require political choices ("I drop this feature if you accept this one"), which are maybe the last thing you want to do when designing a sound programming language. And it seems like this is what is happening right now...

Maybe I will repeat myself here, but I think that the next JS should focus on being a good runtime, easily targeted by 3rd-party programming languages such as haXe, GWT or AS3, and be able to run JS-3.1 on top of it as well.

This is the only way to ensure diversity among programming languages, because I'm convinced that there is not a single answer to "what is the best language". I think that anybody should put aside their own ideas about what is good/bad in programming and let users choose what is best for them in the end. Every attempt to enforce a given PL would be a huge step backward for the web openness.

ECMA has a huge responsibility, which is to ensure that the platform used to add interactivity to web pages gets better and more suitable for various developers: from the beginner to the advanced programmer. A failure to do that would really hurt web developers as a whole.

# Peter Michaux (17 years ago)

On Thu, Aug 14, 2008 at 12:33 AM, Brendan Eich <brendan at mozilla.org> wrote:

On Aug 13, 2008, at 11:37 PM, Peter Michaux wrote:

[snip]

Your points 1-4 above and the differences of opinion and preferences seem to be a big obstacle to overcome in a committee by consensus.

We shall see. The committee now has the chance to reason together, to understand one another's positions and state them fairly. This happened in Oslo more than in any past meeting, IMHO. This is grounds for greater hope than I had before Harmony.

It's possible we'll get just (1) and (2), but we discussed legitimate use-cases for all of (1-4) in Oslo. Time preference for a successor spec will help defer the less useful ideas, if we work together well. Implementation before standardization, and user-testing, will help.

There's no silver bullet, but what's the alternative? Rejectionist clinging to ES3-till-2018?

I want the language to move forward also. It seems as though the ES4 of a year ago bit off too much, and various pieces have been dropping slowly over time. The recent news makes it seem like the group has decided to take a fresh look at a much smaller set of goals for Harmony. If getting a new major revision of ECMAScript out in a timely fashion is important, then perhaps the whole types-and-classes part doesn't belong in Harmony? The Harmony version, which I suppose will be numbered "4", already stands to be very different from ES3:

  • New standard format with improved clarity
  • Reference implementation
  • Programming-in-the-small changes (destructuring, let, rest arguments, ||=, etc.)
  • New object properties (e.g. Array extras)

These additions to the language directly address things that programmers are working around daily. If a standard could be completed sooner, and have Microsoft's agreement and implementation, that may be more valuable than waiting to get consensus on the types/classes issue, which would also take longer to implement.

It seems to me that the parts of JavaScript which have kept it relevant over the years are the parts it shares with Scheme: garbage collection, lambdas, untyped variables. If it hadn't been for these features, I think projects like GWT would have sprung up earlier to abstract away what would have been a very painful experience programming directly in JavaScript. It seems natural to me to build on that success of borrowing from Scheme with the Array extras and let, for example. Proper tail calls were another feature generally agreed to be in ES4 for a long time, but they were dropped in favor of type checks; I hope tail calls make a return, as you know.

With regard to the types/classes issue, has anyone proposed a solution to the multiple-window problem in my original post, where an object is created in one window and its type is tested or compared with similar objects created in another window? If that problem cannot be solved elegantly, isn't the whole types/classes issue a bit of a dud anyway? The whole point of types is to be able to identify an object by its type, and if that cannot be done across windows, it seems like a big problem to me.

[snip]

Peter

# Neil Mix (17 years ago)

On Aug 13, 2008, at 11:44 PM, Brendan Eich wrote:

I've written that interoperation problems could result if different
browsers used different degrees of conservatism and partial
evaluation, but I don't expect static checker extensions in most
browsers, and I'm also not too worried about people testing in the
more lenient one where the stricter one has enough market share for
the interop failure to be a problem. The issue will be what works in
browsers with their dynamic type checking (whatever it ends up being).

I think my lack of PLT background is seeping through here, so I'll
clarify a bit. It sounds like static type checking infers a certain
amount of "hard failure," i.e. you can't run this until you fix your
code. That's not really what I'm voting for. I just want it to be
possible, somehow, to catch simple type errors very early in the
development process, and then to run the same type annotated code
unchanged in the browser. To that end an offline lint-like tool would
suffice. If type-annotated ES-Harmony is capable of supporting that,
I'll be satisfied. I'm not a member of the "our language knows what's
best for you" camp. I just want another tool for my tool belt, to use
judiciously for my own purposes without impacting others.

# Peter Michaux (17 years ago)

On Thu, Aug 14, 2008 at 12:49 AM, Brendan Eich <brendan at mozilla.org> wrote:

On Aug 13, 2008, at 11:52 PM, Peter Michaux wrote:

[snip]

I use OOP frequently in JavaScript, but usually not in the plain ES3 style or the class-based style proposed for ES4. I use the style I think is appropriate to the situation. That may draw a slight performance penalty, but the code is certainly more robust than the ES3 style.

Sorry, curious again: what do you mean by ES3 style?

The direct use of the constructor/prototype system

function MyClass() {}
MyClass.prototype.foo = function () {};


Most of the time, I'm using "hashes of closures" as Paul Graham calls them, or "durable objects" as Douglas Crockford calls them. One very useful part of this system is that the function-valued properties of the returned object can be moved to another object or otherwise passed around. "this" doesn't matter or need to be fixed if one of these properties is used as a callback, because there is no use of "this"; it is not a message-passing OOP system.
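A minimal sketch of that pattern (the names here are hypothetical):

```javascript
// A "durable object": all state lives in the closure; no `this` anywhere,
// so methods can be detached and passed around as callbacks safely.
function makeCounter() {
  var count = 0;
  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
}

var counter = makeCounter();
var inc = counter.increment; // detached -- this would break a `this`-based method
inc();
inc();
console.log(counter.value()); // 2
```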

Another option is returning a single function from a constructor, where the returned function is a dispatcher that takes message strings. This manually built message-passing system can provide getters, setters, catch-alls, multiple inheritance, mixins, private, public, etc. without the language needing to support any of them. If JavaScript didn't have prototype-based inheritance, it could be built this way.
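A sketch of such a dispatch function (hypothetical names, in the style of the classic message-passing bank account):

```javascript
// Message-passing OOP built by hand: the "object" is a dispatch function
// that maps message strings to closures over private state.
function makeAccount(balance) {
  return function dispatch(message) {
    switch (message) {
      case 'deposit': return function (amount) { balance += amount; return balance; };
      case 'balance': return function () { return balance; };
      default: throw new Error('unknown message: ' + message);
    }
  };
}

var account = makeAccount(100);
account('deposit')(50);
console.log(account('balance')()); // 150 -- `balance` is fully private
```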

A multi-method system could be built quite easily.
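For instance, a toy multi-method (dispatching on the runtime types of all arguments) might be built like this (a sketch; the helper names are invented):

```javascript
// Multi-method: picks an implementation by the runtime types of every argument.
function makeMultiMethod() {
  var methods = {};
  function typeName(v) {
    return Object.prototype.toString.call(v).slice(8, -1); // "Number", "String", ...
  }
  function dispatch() {
    var key = Array.prototype.map.call(arguments, typeName).join(',');
    if (!methods[key]) throw new Error('no method for (' + key + ')');
    return methods[key].apply(null, arguments);
  }
  dispatch.define = function (key, impl) { methods[key] = impl; };
  return dispatch;
}

var combine = makeMultiMethod();
combine.define('Number,Number', function (a, b) { return a + b; });
combine.define('String,String', function (a, b) { return a + '/' + b; });
console.log(combine(1, 2));     // 3
console.log(combine('a', 'b')); // "a/b"
```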

There are so many ways to do OOP and so many opinions on how it should be done, it is great the language allows people to build the system appropriate to their situation.

I can understand the desire to have some form of OOP support built into the language, and that already exists with prototypes. It seems that now the idea for Harmony is adding sugar, with Object.freeze on top, to give the appearance of classes, partly so that type checking is possible. I suppose people can use this built-in sugar when it is appropriate and continue to build their own OOP systems as needed (perhaps less frequently). Getting agreement on what the class sugar should provide, if that is going to be the approach, will be the hard part. Could this goal delay Harmony too long and cause another Oslo a year from now? By that time, if types/classes are out of Harmony, it may be possible to have some of the valuable "programming in the small" features scheduled for inclusion in IE9. Actually seeing any Harmony features in IE9 will be a sign of major success, I think.
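The freeze-based sugar alluded to here could be approximated in userland roughly like this (a sketch using ES5's Object.freeze and Object.create; the defineClass helper is invented, not a committee proposal):

```javascript
// Class-like sugar: frozen constructor, frozen prototype, frozen instances.
function defineClass(init, methods) {
  function Ctor() {
    init.apply(this, arguments);
    Object.freeze(this); // instances can't be extended or mutated afterwards
  }
  Ctor.prototype = Object.freeze(Object.create(methods));
  return Object.freeze(Ctor);
}

var Point = defineClass(
  function (x, y) { this.x = x; this.y = y; },
  { norm: function () { return Math.sqrt(this.x * this.x + this.y * this.y); } }
);

var p = new Point(3, 4);
console.log(p.norm()); // 5
try { p.x = 99; } catch (e) { /* strict mode throws */ }
console.log(p.x);      // still 3
```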

Peter

# Steven Johnson (17 years ago)

On 8/14/08 6:25 AM, "Neil Mix" <nmix at pandora.com> wrote:

It sounds like static type checking infers a certain amount of "hard failure," i.e. you can't run this until you fix your code. That's not really what I'm voting for. I just want it to be possible, somehow, to catch simple type errors very early in the development process, and then to run the same type annotated code unchanged in the browser.

Depends on what you mean by "hard failure". If we have a function like

// n must be a numeric type
function sqrt(n) { ... }

then the ability to say

function sqrt(n:Number) { ... }

only requires that n be convertible-to-Number at runtime. (If not, ES4/AS3 required that a TypeError be thrown.)

This does make it possible to do some static type checking at compile-time, of course (and the AS3 compiler can optionally do this).

Personally, I like catching stupid mistakes of this sort as early as possible, so I tend to use type annotations everywhere I can when I code in AS3. Your mileage may vary.

# P T Withington (17 years ago)

I'd like to relate an interesting data point from Dylan. We found that
with generic functions, where you have to specify parameter type for
dispatch, and a decent type inferencer, you seldom needed to specify
types anywhere else. When people specified types on locals, often the
compiler would ignore the declaration because it already knew, or it
would complain that it would have to insert a runtime check -- often
because the programmer had underestimated the actual type.

# Brendan Eich (17 years ago)

On Aug 14, 2008, at 10:40 AM, Steven Johnson wrote:

On 8/14/08 6:25 AM, "Neil Mix" <nmix at pandora.com> wrote:

It sounds like static type checking infers a certain amount of "hard failure," i.e. you can't run this until you fix your code. That's not really what I'm voting for. I just want it to be possible, somehow, to catch simple type errors very early in the development process, and then to run the same type annotated code unchanged in the browser.

Depends on what you mean by "hard failure". If we have a function like

// n must be a numeric type
function sqrt(n) { ... }

then the ability to say

function sqrt(n:Number) { ... }

only requires that n be convertible-to-Number at runtime. (If not, ES4/AS3 required that a TypeError be thrown.)

That allows sqrt("foo") to compute the square root of a NaN, which is
a NaN.

For this reason, ES4 last year moved to avoid implicit conversions
except among number types.
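The difference between the two semantics can be sketched in plain JS (hypothetical functions illustrating the rules, not any real checker):

```javascript
// Convert-freely semantics: the annotation acts as a coercion, and the
// mistake vanishes silently into NaN.
function sqrtConverting(n) {
  n = Number(n); // implicit convert-to-Number
  return Math.sqrt(n);
}
console.log(sqrtConverting("foo")); // NaN -- the error propagates silently

// The stricter rule ES4 moved to: reject non-numbers outright.
function sqrtStrict(n) {
  if (typeof n !== 'number') throw new TypeError('number required');
  return Math.sqrt(n);
}
console.log(sqrtStrict(9)); // 3
```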

This does make it possible to do some static type checking at
compile-time, of course (and the AS3 compiler can optionally do this).

IIRC that's because AS3's strict mode (a static type checker) changes
the rules to avoid the implicit conversion (from the standard or
dynamic mode, which converts freely as you say). May be water under
the bridge, but at least for ES4 we chose to get rid of the strict
vs. standard implicit conversion difference, and require a number in
both.

Personally, I like catching stupid mistakes of this sort as early as possible, so I tend to use type annotations everywhere I can when I
code in AS3. Your mileage may vary.

There are pluses and minuses. We've all been saved by simple types in
C, compared to assembly or pre-historic C which allowed int and
pointers to convert, and treated struct member names as offset macros
usable with any int or pointer. But JS is a higher level language,
and one doesn't need to annotate types to avoid memory safety bugs.
For numeric code, there can even be a de-optimizing effect if one
uses int or uint in AS3, I'm told, because evaluation still promotes
everything to Number (double). Is this right?

I took Neil's point to favor not only a separate lint-like tool
(which some find painful to procure and remember to run), but
possibly something like Cormac's idea I mentioned a few messages ago:
a type checker that runs when the code loaded in a web app seems
complete, and gives warnings to some console -- but which does not
prevent code from running.

Neil, how'd that strike you? It could be built into some developer extension for the browser, so you wouldn't have to remember to run jslint.

# Neil Mix (17 years ago)

On Aug 14, 2008, at 12:30 PM, Brendan Eich wrote:

I took Neil's point to favor not only a separate lint-like tool
(which some find painful to procure and remember to run), but
possibly something like Cormac's idea I mentioned a few messages
ago: a type checker that runs when the code loaded in a web app
seems complete, and gives warnings to some console -- but which does
not prevent code from running.

Neil, how'd that strike you? It could be built into some developer extension for the browser, so you wouldn't have to remember to run jslint.

Logistically I prefer to run the checks before it gets to the browser
(i.e. via Makefile as the first step of build-package-deploy in a
complicated (read: mature) in-house development environment) simply
because it shaves precious time off the edit-build-test-debug cycle.
But that's personal preference; work styles vary. The concept is
spot-on as far as I'm concerned, and I see no reason why the same
underlying tool couldn't be built to suit both environments.

(Thanks for asking.)

# Peter Michaux (17 years ago)

On Thu, Aug 14, 2008 at 12:58 PM, Neil Mix <nmix at pandora.com> wrote:

On Aug 14, 2008, at 12:30 PM, Brendan Eich wrote:

I took Neil's point to favor not only a separate lint-like tool (which some find painful to procure and remember to run), but possibly something like Cormac's idea I mentioned a few messages ago: a type checker that runs when the code loaded in a web app seems complete, and gives warnings to some console -- but which does not prevent code from running.

Neil, how'd that strike you? It could be built into some developer extension for the browser, so you wouldn't have to remember to run jslint.

Logistically I prefer to run the checks before it gets to the browser (i.e. via Makefile as the first step of build-package-deploy in a complicated (read: mature) in-house development environment)

Each time you make a minor change to your code, you need to go through build-package-deploy in order to test? I wouldn't stand for that kind of overhead! ;-) Avoiding this overhead is one of the advantages of using a language like JavaScript.


If you are interested in a lint program to do your checking, then the following could be made to work now with ES3:

function (a /* : int */, b /* : string */)

[snip]

Peter

# Steven Johnson (17 years ago)

On 8/14/08 12:30 PM, "Brendan Eich" <brendan at mozilla.org> wrote:

That allows sqrt("foo") to compute the square root of a NaN, which is a NaN.

Doh. Yeah. (Though, interestingly, the AS3 compiler will reject this in strict mode as implicit coercion of unrelated types, although no runtime error is generated.) A better example would be

// a must be Array
function foo(a:Array) {}

foo("uh-oh")

For this reason, ES4 last year moved to avoid implicit conversions except among number types.

Good call.

IIRC that's because AS3's strict mode (a static type checker) changes the rules to avoid the implicit conversion (from the standard or dynamic mode, which converts freely as you say). May be water under the bridge, but at least for ES4 we chose to get rid of the strict vs. standard implicit conversion difference, and require a number in both.

Yep. Consistency between the two is a better choice. But the concept is still nice (statically determining that the code would always produce a runtime error).

There are pluses and minuses. We've all been saved by simple types in C, compared to assembly or pre-historic C which allowed int and pointers to convert, and treated struct member names as offset macros usable with any int or pointer. But JS is a higher level language, and one doesn't need to annotate types to avoid memory safety bugs.

True, but that's not the only reason to use type annotations, at least in the context of AS3/ES4 -- they open up possibilities for early binding, efficient storage decisions, etc that would be hard(er) to deduce otherwise.

For numeric code, there can even be a de-optimizing effect if one uses int or uint in AS3, I'm told, because evaluation still promotes everything to Number (double). Is this right?

In the current Tamarin-Central VM this is the case, since we maintain ES3 numeric semantics for int and uint. However, there's some evidence that this could be substantially improved by using integer operations and checking for overflow (and reverting to double only in that case)... this was experimented with in Tamarin-Tracing but I'm not sure if it's live or not.
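The overflow-checking idea described here happens inside the VM in machine code, but it can be sketched conceptually at the JS level (the function name is invented for illustration):

```javascript
// Try 32-bit integer addition; fall back to double arithmetic on overflow.
function addInt32(a, b) {
  var sum = (a | 0) + (b | 0);            // treat both operands as int32
  if (sum > 0x7fffffff || sum < -0x80000000) {
    return a + b;                          // overflowed int32: revert to Number
  }
  return sum | 0;                          // still fits: keep the fast int path
}

console.log(addInt32(1, 2));              // 3
console.log(addInt32(0x7fffffff, 1));     // 2147483648 (via the double path)
```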

# Peter Michaux (17 years ago)

On Thu, Aug 14, 2008 at 12:33 AM, Brendan Eich <brendan at mozilla.org> wrote:

That's fair, and Doug made the point at the first meeting he attended. We have included folks like Alex Russell and now Kris Zyp from Dojo; also Adam Peller of IBM, a Dojo committer.

How are members of the committee chosen? Given that there must be consensus, it seems that adding just a single member could make a large impact on the process.

Peter

# Brendan Eich (17 years ago)

On Aug 16, 2008, at 8:58 PM, Peter Michaux wrote:

On Thu, Aug 14, 2008 at 12:33 AM, Brendan Eich
<brendan at mozilla.org> wrote:

That's fair, and Doug made the point at the first meeting he
attended. We have included folks like Alex Russell and now Kris Zyp from Dojo;
also Adam Peller of IBM, a Dojo committer.

How are members of the committee chosen? Given that there must be consensus, it seems that adding just a single member could make a large impact on the process.

That's true. The members invited as experts generally come from not-for-profits and universities, which Ecma invites to join in a lightweight fashion (so far, all Ecma required was a letter on the foundation or university's letterhead; I understand that this may change for NFPs but not for universities).

Standards bodies are not democracies. They suffer from at least two kinds of bias: 1. vested interests who can pay to play, indeed pay to have sometimes undue influence; 2. small players and marginal or even kooky factions can gain undue influence, either by paying or (in the case of the NFP-for-free Ecma deal in place till this year) by forming a not-for-profit.

Of course, big players support standards bodies precisely because of
item 1. I'm thinking of a five letter acronym starting with OO and
ending with L here.

This is not a perfect world, and you could argue that it needs reform
(I do, but in separate forums).

Given the nature of standards bodies (Ecma is not uniquely challenged
in this regard), the process of making standards is like making
sausage. Don't look too closely, but do insist on hygiene and good
ingredients.

That means design by committee, inevitable to some degree, should be a matter of hammering out compromises on secondary issues: bike-shedding the best name for a new method, e.g. The primary design elements should come from actual implementations, generally created by individual design/implementation leads, and already proven to some extent with non-trivial user communities.

There's no natural law that says all invention must come from startups or big, monied interests, however. It's perfectly plausible that a university research team, an open source project, a lone super-hacker hobbyist, etc., can invent something great. So long as it has enough users who are not in a silo cut off from all other users of the proposed or evolved standard, it is fair game. ("Enough users" has no absolute numeric bound, but one generally knows it when one sees it.)

That's why to the extent possible, I've pushed for inclusion of
heretofore-uninvited folks from the Ajax world and from academia. But
it's impossible to include everyone not sufficiently "invested" or
otherwise advantaged. De-jure standardization is not a wide-open field.

# Brendan Eich (17 years ago)

On Aug 17, 2008, at 10:44 AM, Brendan Eich wrote:

Given the nature of standards bodies (Ecma is not uniquely challenged in this regard), the process of making standards is like making sausage. Don't look too closely, but do insist on hygiene and good ingredients.

That means design by committee, inevitable to some degree, should be a matter of hammering out compromises on secondary issues: bike-shedding the best name for a new method, e.g. The primary design elements should come from actual implementations, generally created by individual design/implementation leads, and already proven to some extent with non-trivial user communities.

Insisting on hygiene and good ingredients isn't enough -- you (the
interested public in general) must be able to inspect the committee's
sausage-making. Hence our desire for transparency via a public wiki
with detailed meeting minutes.

This reminds me: the redaction (if any) and publication of meeting
minutes has fallen behind lately. I'll work with TC39 on this.

# Garrett Smith (17 years ago)

On Wed, Aug 13, 2008 at 8:45 PM, Neil Mix <nmix at pandora.com> wrote:

On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

It's all about time spent. A large app is impossible for a new-hire programmer to comprehend end-to-end. We mitigate this by using unit tests as a safety barrier, but over time our tests have grown to take about 10 minutes or so to run.

A test suite with a lot of asynchronous testing could potentially take 10 minutes to run. The test runner could probably be modified to allow it to continue when async callbacks come back sooner (e.g. Selenium's "waitForCondition").

A test case should normally run in under one second.

A new feature/change shouldn't have many dependencies; it should be simple to unit test.

That's way too long to wait for a pesky misspelling bug.

I could see how autocomplete could use typed programs to provide warnings (squiggly red underline).

To the extent that there's static type checking available to catch errors early on, it's a huge productivity gain. (Although not necessarily one that's easily noticed -- one tends not to see the problems that grow and kill slowly.)

The compiler is not a substitute for unit tests.

Garrett

# Robert Sayre (17 years ago)

On Wed, Aug 13, 2008 at 11:02 PM, Peter Michaux <petermichaux at gmail.com> wrote:

Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago?

No need to take a poll here. It's better to look at existing programs and examine whether a proposal for static type syntax would make them shorter at the function level and as a whole. Sufficiently shorter programs justify the implementation and standardization burden, because of the number of bugs that would go unwritten.

I've seen and written enough allegedly functionally-enlightened JavaScript to know that programs written in that style usually spend a high proportion of lines doing type inspection at run time. It's usually what prevents such programs from being incredibly concise. Of course, it's certainly possible that any effective syntax for static types in ES could make programs longer and more verbose, so I don't think it's worth debating the notion in the abstract.

Also, I think comparisons with Java miss the mark, since Java doesn't have lexical closures, first-class functions, convenient property access, or convenient object creation syntax, and on and on. Its type system is just one wart among many, and it's hard to isolate from the language's other flaws.

# Neil Mix (17 years ago)

It's all about time spent. A large app is impossible for a new-hire programmer to comprehend end-to-end. We mitigate this by using unit tests as a safety barrier, but over time our tests have grown to take about 10 minutes or so to run.

A test suite with a lot of asynchronous testing could potentially take 10 minutes to run. The test runner could probably be modified to allow it to continue when async callbacks come back sooner (e.g. Selenium's "waitForCondition").

There's a few assumptions inherent in this observation that don't hold
true for our test suite. But that's beside the point. Even if we
shave off an order-of-magnitude through tricky optimizations, the test
runs are still time I'd like to spend elsewhere. As I said, work
styles vary, mine likes to use types as a tool to speed up engineering
tasks.

A test case should normally run in under one second.

A new feature/change shouldn't have many dependencies; it should be simple to unit test.

Should, yes. Does...well... I'm not always a perfect programmer and
architect. (Rarely, actually.) I need a little help sometimes. ;)

There's a mild undertone to your statement that I'm wary of, which is
the idea that "you can work around problem X by being a better
engineer." While true, it presumes that I'm capable of being a better
engineer than I am now. But what if that's not true of me? What if
I'm already performing to my utmost? If I have trouble working around
a problem, and a language feature can help extend my capabilities,
well you can see why I might be in favor of that feature.

That's way too long to wait for a pesky misspelling bug.

I could see how autocomplete could use typed programs to provide warnings (squiggly red underline).

Yeah, that's always nice to have.

To the extent that there's static type checking available to catch errors early on, it's a huge productivity gain. (Although not necessarily one that's easily noticed -- one
tends not to see the problems that grow and kill slowly.)

The compiler is not a substitute for unit tests.

No argument here, and I was very careful to avoid implying that. When I said "grow and kill slowly," I was referring to the length of edit-compile-test-debug cycles.

# Zexx (11 years ago)

Hello,

Isn't optional typing (similar to TypeScript) a good thing?

Let's say I want to do some image processing. And images can be really huge nowadays. If I allocate an array of dynamic variables, that array may be 10x larger than if it was an array of bytes. And I need bytes to store and process R,G,B,A values separately. So, the array itself could have a descriptor of the type, and each member of the array can then contain only the value.

As things are now, a 32 megapixel image in ES would allocate 32,000,000 x 4 = 128 million dynamic variables. If each variant takes up 8 bytes, that's 1 GB per image. Let's say I have two source images, one mask and one output. That's 4 gigabytes just in images, not counting the application, libraries, web browser, other running apps, OS, drivers, etc.

That's a waste of resources. A native app would need only about 0.5 GB for all four images, so it would run on a much wider variety of devices than the ES version. And thus the author would sell more copies and earn more money, instead of getting comments like "Why did you make this crappy, slow app? Learn to program. The competition is much faster."

If ES is to become a proper, truly versatile language, shouldn't the programmer be allowed to use their knowledge of the app's design to optimize resources?

Best,

Zex

# Domenic Denicola (11 years ago)

Use typed arrays.
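A quick sketch of the storage difference, using the 32-megapixel figures from Zex's post:

```javascript
// RGBA channel count for a 32 MP image, per the numbers in the post.
var channels = 32000000 * 4;      // 128 million channel values
console.log(channels * 8 / 1e9);  // ~1.024 GB if each value is a boxed double
console.log(channels * 1 / 1e9);  // ~0.128 GB as raw bytes

// Typed arrays give exactly the byte-per-channel storage native code uses.
var pixel = new Uint8ClampedArray(4); // one RGBA pixel
pixel[0] = 300;                       // out-of-range writes clamp, as pixel data should
console.log(pixel[0]);                // 255
console.log(pixel.byteLength);        // 4
```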

# Allen Wirfs-Brock (11 years ago)
# Axel Rauschmayer (11 years ago)

TypeScript is about static type checking (and IDE support); what you want is dynamic types that are better fits for problem domains. This is a step in that direction (in addition to typed arrays in ES6):

dslomov-chromium/typed-objects-es7