IDE support?
Languages don't need direct support for IDEs to have good IDEs (most of the good Java IDEs worked through reflection, manual parsing of the source files, and parsing of de facto standard comment formatting).
I mostly agree. However, I have yet to find an IDE that makes source code navigation and refactoring as fun as Eclipse (as long as you don’t install plugins…). Note that language-wise, I much prefer JS to Java. The problem seems to be that every JS framework invents its own inheritance system. Something that will hopefully slowly disappear with class literals (and proxies, because then you don’t need clumsy work-arounds to do data binding for GUIs).
For Java, static typing helps as well, as does a standard way of organizing packages. For JavaScript, type guards would be a nice complement to type inference. And modules will be wonderful, anyway.
Type Inference can do a lot. See doctorjs.org.
We do not want people adding guards all over their code, IMHO. Anyway, we don't know have guards ES6.
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference. That said, I think the "I want to refactor automatically" use-case is sort of over-played.
We do not want people adding guards all over their code, IMHO.
In many cases, adding type information to my code helps me to think more clearly about it (e.g.: signature of a function-valued parameter and arrays whose elements all have the same type). In other cases, not having to do it is a blessing. There are things that bother me about Java’s rigid type system, but declaring parameter types was not one of them (excluding some of the trickier parts of generics).
Supposedly, any function/method one defines will be called somewhere (especially with test-driven development). Then an IDE can infer types and display them next to the function. That should be a really cool experience.
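For instance (a sketch with invented names), a single test call can give an inference engine everything it needs:

```javascript
// The function body alone says little about its types...
function scale(point, factor) {
  return { x: point.x * factor, y: point.y * factor };
}

// ...but this one test call lets a tool infer
//   scale : ({x: number, y: number}, number) -> {x: number, y: number}
var result = scale({ x: 2, y: 3 }, 10);
console.log(result.x, result.y); // 20 30
```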
Anyway, we don't know have guards ES6.
Guards definitely won’t be in ES6? Or will be?
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference.
Yes. Can’t wait.
That said, I think the "I want to refactor automatically" use-case is sort of over-played.
Indeed. The type-system related features I use most often in Eclipse are:
- Safely renaming a method or function. I do this frequently to keep the vocabulary used in methods etc. fresh. All JS IDEs that I have tried are not much fun in this regard.
- Jump to the definition of a method/function.
- Find all invocations of a method/function. Often, I just change the name of a method (e.g. by appending an "x") and go through all the errors that that produces.
Anything else (extracting or inlining a method, etc.) is nice to have, but not as crucial.
I love that JS’s more dynamic nature often lets me keep inheritance hierarchies flatter.
On Sep 11, 2011, at 5:35 PM, Axel Rauschmayer wrote:
Anyway, we don't know have guards ES6.
Guards definitely won’t be in ES6? Or will be?
Sorry, that was garbled -- but you can always consult the strawman:strawman and harmony:proposals pages to confirm that guards did not make ES6.
On Mon, Sep 12, 2011 at 1:04 AM, Brendan Eich <brendan at mozilla.com> wrote:
Type Inference can do a lot. See doctorjs.org.
We do not want people adding guards all over their code, IMHO. Anyway, we don't know have guards ES6.
While true, I've found that you can't really overcome the dynamic property access unless you can somehow infer that you're operating on an array of some fixed type or something like that. And function calls and result types are actually far more difficult (though not completely impossible) to do in ES than in a strongly typed language. I don't really see any clear way of getting around this without some kind of annotation (be it in the language or in comments).
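A minimal illustration of the dynamic-access problem (names invented): the property name only exists at runtime, so no static tool can say which field is touched.

```javascript
// No analysis can decide which property `get` reads:
function get(obj, key) {
  return obj[key];                   // key could be anything
}

var settings = { debug: true, level: 3 };
var name = ["de", "bug"].join("");   // computed name, opaque to analysis
console.log(get(settings, name));    // true
```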
Sorry, that was garbled -- but you can always consult the strawman:strawman and harmony:proposals pages to confirm that guards did not make ES6.
Right. Those tell me that guards are not currently scheduled for ES6. Have all decisions in this regard already been made?
On Sep 12, 2011, at 12:51 AM, Peter van der Zee wrote:
On Mon, Sep 12, 2011 at 1:04 AM, Brendan Eich <brendan at mozilla.com> wrote:
Type Inference can do a lot. See doctorjs.org.
We do not want people adding guards all over their code, IMHO. Anyway, we don't know have guards ES6.
While true, I've found that you can't really overcome the dynamic property access unless you can somehow infer that you're operating on an array of some fixed type or something like that.
Have you tried pasting code into DoctorJS? It infers monomorphic array types, parameter types, return types. It's a global analysis.
For example, on SunSpider's crypto-aes.js, DoctorJS figures out that
13: Cipher : function(Array[number], Array[Array[number]]) → Array[number]
given
function Cipher(input, w) { // main Cipher function [§5.1] ... }
Neat, eh?
Yes, eval and other dynamic code loading facilities require re-analysis. So a tool will have to use hybrid or JIT techniques. Still doable.
And function calls and result types are actually far more difficult (though not completely impossible)
The difficulty is a challenge to the IDE implementor, not to all JS hackers to annotate their function signatures.
With Flash and AS3, developers over-rotated and annotated everything. Of course AS3 lacks inference of even a local or inside-function-only kind. But really, the IDE developers can take one for the team, IMHO.
On Sep 12, 2011, at 12:58 AM, Axel Rauschmayer wrote:
Sorry, that was garbled -- but you can always consult the strawman:strawman and harmony:proposals pages to confirm that guards did not make ES6.
Right. Those tell me that guards are not currently scheduled for ES6. Have all decisions in this regard already been made?
Pretty much -- the guards and trademarking proposals came in at the May meeting, kind of last minute. We could start worrying about Dart and try to revive them. Would that be good for ES6? Would it actually "work" in some sense to keep up with Dart?
I think it would be classic mission creep or "jump", and a disorderly way to manage the pipeline of stuff flowing into JS. Better to get some prototypes going based on the strawman proposals, user-test them, and come back to TC39 better informed.
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference.
Improved, statically checkable, types would also help mitigate Javascript's silent failures (silently breaking code when trying to refactor, fixing bugs, or adding features). Unless the type system is fairly advanced, type safety only expresses a thin veneer of invariants, but coverage is total, and automatic. Together with static scoping, that tends to avoid a good deal of accidental changes that might go unnoticed when coding in current Javascript.
As the Erlang folks discovered several times, retrofitting a useful type system onto a language whose code patterns evolved in a weak type system is hard. It isn't too difficult to come up with a type system that gives some info, but my personal impression was that coding styles have to adapt before the benefits of type inference can be reaped.
There are some half dozen or more papers on Javascript type inference or static analysis (hmm, is there a central wiki or bibliography where we could record and collect such JS-related references? should I post here what I've found so far?).
On the practical side, I seem to recall that Netbeans tried using type inference to help JS completion. Probably from this post:
blogs.oracle.com/tor/entry/javascript_type_inference
DoctorJS's mozilla/doctorjs analysis has also been used to provide code completion for some projects. That seems to have been put on the backburner during the recent cloud editor convergence.
That said, I think the "I want to refactor automatically" use-case is sort of over-played.
That depends. Automatic "refactoring", in the strict sense of changing code structure without changing code behaviour, is nearly impossible for Javascript, as there are hardly any valid code equivalences (none if you count reflection).
That still leaves the wide area of automated code transformations: just as we like to have libraries to query and transform the DOM, many of us like to have support for querying and transforming source (or at least ASTs). This kind of language-aware grep/sed would be automated, but not automatic (the results would need to be verified).
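The "language-aware grep" idea can be sketched over a toy AST (built by hand here; a real tool would get it from a parser such as Narcissus, and the node shape is an assumption):

```javascript
// Collect every node of a requested type from a tree of {type, children} nodes.
function query(node, type, hits) {
  hits = hits || [];
  if (node.type === type) hits.push(node);
  (node.children || []).forEach(function (child) {
    query(child, type, hits);
  });
  return hits;
}

var ast = {
  type: "Program",
  children: [
    { type: "FunctionDecl", name: "init", children: [] },
    { type: "VarDecl", name: "count", children: [
      { type: "FunctionExpr", name: null, children: [] }
    ] }
  ]
};

var decls = query(ast, "FunctionDecl");
console.log(decls.map(function (d) { return d.name; })); // [ 'init' ]
```

The results of such a query would still need human verification before any transformation, which is the "automated, not automatic" point above.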
That said, part of defining refactorings is to be explicit about what aspects of code behaviour one wants to preserve. Behaviour preservation is rarely absolute, and if one is happy with limited guarantees, automatic refactorings become possible. For instance, see
Tool-supported Refactoring for JavaScript
http://cs.au.dk/~amoeller/papers/jsrefactor/
for a discussion of some of the challenges involved in "simple" refactorings.
Indeed. The type-system related features I use most often in Eclipse are:
- Safely renaming a method or function. I do this frequently to keep the vocabulary used in methods etc. fresh. All JS IDEs that I have tried are not much fun in this regard.
Surprisingly difficult, see reference above. I believe jetbrains have some JS refactoring support:
www.jetbrains.com/editors/javascript_editor.jsp?ide=idea#js
- Jump to the definition of a method/function.
Not too difficult for functions (modulo complex, statically undecidable import/export patterns), though you do need a parser and proper scope tracking - try DoctorJS (uses Narcissus) for tags file generation.
It used to track object properties better (which was used for navigation and completion), but apparently that got broken a while ago. This is the part that would improve completion and navigation to methods or module exports.
I'm in the process of adding scoped tags support to DoctorJS, so that in Vim you can also navigate to local definitions and parameters (in principle, that would also give access to any local information produced by DoctorJS's analysis, but I'm not sure whether that information is recorded anywhere). Generation and navigation already work; I just have to push to GitHub, figure out pull requests and submodules, and write a blog post to explain the whole thing.
- Find all invocations of a method/function. Often, I just change the name of a method (e.g. by appending an "x") and go through all the errors that that produces.
Again, that is undecidable in general, due to Javascript's flexibility, though approximations will be useful.
Claus (who'd love to work on JS-in-JS tooling;-) clausreinke.github.com
On Sep 12, 2011, at 8:38 AM, Claus Reinke wrote:
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference.
Improved, statically checkable, types would also help mitigate Javascript's silent failures (silently breaking code when trying to refactor, fixing bugs, or adding features). Unless the type system is fairly advanced, type safety only expresses a thin veneer of invariants, but coverage is total, and automatic.
I'm sorry, but as much as I want a type system, the idea of a "thin veneer" that imposes an ongoing tax at all declaration and usage sites is really distasteful. JS should do better than Java and its ilk have managed.
Together with static scoping, that tends to avoid a good deal of accidental changes that might go unnoticed when coding in current Javascript.
As the Erlang folks discovered several times, retrofitting a useful type system onto a language whose code patterns evolved in a weak type system is hard. It isn't too difficult to come up with a type system that gives some info, but my personal impression was that coding styles have to adapt before the benefits of type inference can be reaped.
That's correct, and it's not "wrong". The trick here is to make sure that the folks who want to pay the tax can, while leaving others free to avoid it.
There are some half dozen or more papers on Javascript type inference or static analysis (hmm, is there a central wiki or bibliography where we could record and collect such JS-related references? should I post here what I've found so far?).
On the practical side, I seem to recall that Netbeans tried using type inference to help JS completion. Probably from this post:
blogs.oracle.com/tor/entry/javascript_type_inference
DoctorJS's mozilla/doctorjs analysis has also been used to provide code completion for some projects. That seems to have been put on the backburner during the recent cloud editor convergence.
That said, I think the "I want to refactor automatically" use-case is sort of over-played.
That depends. Automatic "refactoring", in the strict sense of changing code structure without changing code behaviour, is nearly impossible for Javascript, as there are hardly any valid code equivalences (none if you count reflection).
That still leaves the wide area of automated code transformations: just as we like to have libraries to query and transform the DOM, many of us like to have support for querying and transforming source (or at least ASTs). This kind of language-aware grep/sed would be automated, but not automatic (the results would need to be verified).
That said, part of defining refactorings is to be explicit about what aspects of code behaviour one wants to preserve. Behaviour preservation is rarely absolute, and if one is happy with limited guarantees, automatic refactorings become possible. For instance, see
Tool-supported Refactoring for JavaScript cs.au.dk/~amoeller/papers/jsrefactor
for a discussion of some of the challenges involved in "simple" refactorings.
Indeed. The type-system related features I use most often in Eclipse are:
- Safely renaming a method or function. I do this frequently to keep the vocabulary used in methods etc. fresh. All JS IDEs that I have tried are not much fun in this regard.
Surprisingly difficult, see reference above. I believe jetbrains have some JS refactoring support:
www.jetbrains.com/editors/javascript_editor.jsp?ide=idea#js
- Jump to the definition of a method/function.
Not too difficult for functions (modulo complex, statically undecidable import/export patterns), though you do need a parser and proper scope tracking - try DoctorJS (uses Narcissus) for tags file generation.
It used to track object properties better (which was used for navigation and completion), but apparently that got broken a while ago. This is the part that would improve completion and navigation to methods or module exports.
I'm in the process of adding scoped tags support to DoctorJS, so that in Vim you can also navigate to local definitions and parameters (in principle, that would also give access to any local information produced by DoctorJS's analysis, but I'm not sure whether that information is recorded anywhere). Generation and navigation already work; I just have to push to GitHub, figure out pull requests and submodules, and write a blog post to explain the whole thing.
- Find all invocations of a method/function. Often, I just change the name of a method (e.g. by appending an "x") and go through all the errors that that produces.
Again, that is undecidable in general, due to Javascript's flexibility, though approximations will be useful.
Claus (who'd love to work on JS-in-JS tooling;-) clausreinke.github.com
-- Alex Russell slightlyoff at google.com slightlyoff at chromium.org alex at dojotoolkit.org BE03 E88D EABB 2116 CC49 8259 CF78 E242 59C3 9723
There are some half dozen or more papers on Javascript type inference or static analysis (hmm, is there a central wiki or bibliography where we could record and collect such JS-related references? should I post here what I've found so far?).
For as far as you haven't already, I'd love to see more of them.
That said, I think the "I want to refactor automatically" use-case is sort of over-played.
That depends. Automatic "refactoring", in the strict sense of changing code structure without changing code behaviour, is nearly impossible for Javascript, as there are hardly any valid code equivalences (none if you count reflection).
"Extract method" is OK, if you don't count context or eval. (Selecting a snippet of code to be turned into a function, and adding boilerplate code to make sure new/changed variables are reflected at the original code position. You would have to make sure that the calling context is correct, as that's pretty much impossible to infer. Of course, safe "eval" cases are out as well.) Granted, that's still not absolute.
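A before/after sketch of a safe extraction (names invented; safe here because the extracted code touches no `this`, no eval, and no enclosing `with`):

```javascript
// Before: the summing loop is inlined.
function report(prices) {
  var total = 0;
  for (var i = 0; i < prices.length; i++) total += prices[i];
  return "Total: " + total;
}

// After "extract method": the loop becomes its own function.
function sum(prices) {
  var total = 0;
  for (var i = 0; i < prices.length; i++) total += prices[i];
  return total;
}
function reportExtracted(prices) {
  return "Total: " + sum(prices);
}

console.log(report([1, 2, 3]));          // Total: 6
console.log(reportExtracted([1, 2, 3])); // Total: 6
```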
There are more transformations, but they are on a smaller scale, like those in UglifyJS and the Google Closure Compiler.
Indeed. The type-system related features I use most often in Eclipse are:
- Safely renaming a method or function. I do this frequently to keep the vocabulary used in methods etc. fresh. All JS IDEs that I have tried are not much fun in this regard.
Surprisingly difficult, see reference above. I believe jetbrains have some
Obviously, renaming variables is very safe and relatively easy to do outside a with or eval context. The with case becomes dreadfully more difficult, but there are some options there. Eval is completely out.
It used to track object properties better (which was used for navigation and completion), but apparently that got broken a while ago. This is the part that would improve completion and navigation to methods or module exports.
- Find all invocations of a method/function. Often, I just change the name of a method (e.g. by appending an "x") and go through all the errors that that produces.
Again, that is undecidable in general, due to Javascript's flexibility, though approximations will be useful.
What I do in Zeon is keep a "scope centric" tracking object for each variable, and a "tracking object centric" tracking object for the properties of a variable. Ditto for properties on a property. This of course stops at dynamic access (although numbers and other literals can be worked around, and you could also work around it with annotations). But when you take this central tracking object and store in it every reference you encounter, including marking the point of declaration(s), you end up with a very easy navigation structure. If you infer that tracking object to be a function, you have your declaration, usage and invocations.
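A rough sketch of what such a tracking object might look like (this is a reconstruction from the description above, not Zeon's actual code):

```javascript
// One record per variable: declarations, references, and nested property records.
function Tracker(name) {
  this.name = name;
  this.declarations = [];
  this.references = [];
  this.properties = {};   // property name -> Tracker
}
Tracker.prototype.property = function (name) {
  if (!this.properties[name]) this.properties[name] = new Tracker(name);
  return this.properties[name];
};

// Recording `foo` declared on line 1 and `foo.bar` seen at two positions:
var foo = new Tracker("foo");
foo.declarations.push({ line: 1 });
foo.property("bar").references.push({ line: 5 });
foo.property("bar").references.push({ line: 9 });

console.log(foo.property("bar").references.length); // 2
```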
More generically, I've come up with a kind of XPath for JS (jspath?) where you can refer to specific elements in the code. (It might actually not be new; I haven't done an exhaustive search. I'd be surprised if it were.) Since you work with scopes and each variable in a scope is unique, you can create reference points to variables in scopes. You can also create reference points to specific literals like functions, objects and arrays. If foo is a global function, /foo/bar would refer to the variable bar in the function foo. More complex patterns are possible of course, with absolute and relative paths; this is just an example. I'll publish an article on this soon. Anyway, you can use this system to have a central tracking method for typing, annotation and navigation.
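A toy resolver for such a path scheme, assuming a nested scope tree ("/foo/bar" = the variable bar inside the global function foo; the tree shape is an invention for illustration):

```javascript
// Walk the scope tree one path segment at a time.
function resolve(scope, path) {
  return path.split("/").filter(Boolean).reduce(function (s, name) {
    return s && s.bindings[name];
  }, scope);
}

var globalScope = {
  bindings: {
    foo: {
      kind: "function",
      bindings: { bar: { kind: "var", bindings: {} } }
    }
  }
};

console.log(resolve(globalScope, "/foo/bar").kind); // var
```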
What I'm trying to say is that while there is definitely room for improvement, it's not entirely hopeless. Although for a lot of cases it is, as you say, not very "absolute". I've found the worst cases for JS (with respect to static analysis) to be context and dynamic property access. I'm not counting eval because I don't think eval is relevant in static analysis (as in: if eval, warn for the worst). Maybe some kind of annotation could fix the eval black hole... But I'd love some more tools to help here.
On Mon, Sep 12, 2011 at 11:38 AM, Claus Reinke <claus.reinke at talk21.com>wrote:
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference.
Improved, statically checkable, types would also help mitigate Javascript's silent failures (silently breaking code when trying to refactor, fixing bugs, or adding features).
Unit tests with comprehensive and thorough code coverage do this today.
On Mon, Sep 12, 2011 at 12:00 PM, <es-discuss-request at mozilla.org> wrote:
Some of the discussion on this thread amounts to "IDEs work great for typed languages so let's make JS typed". What if we started with "What would be great for JavaScript developers"? Then we would not waste a lot of time talking about static analysis. It's the wrong tool.
jjb
I was super confused as to why good tools and IDEs would require typing.
A bunch of typed languages do have great IDEs but I feel like that's mainly because programming in a typed language can be considerably more difficult which puts more emphasis on creating great tools to help deal with it.
This reminds me of the ES4 discussion around typing being a requirement of making the language more performant, that clearly wasn't true.
I'm going to throw out a crazy idea. Maybe the language is fine in this regard, and creating better tools is done by creating better tools, not by changing the language. Maybe changing the language has a negative effect on creating better tools, because it increases the surface area and complexity those tools have to deal with.
I'm not saying changing the language is universally bad, only that some problems are better solved outside of ECMA and that there is a big community out here that is working to solve this problem and they don't need anything from ECMA to do it.
Right. Those tell me that guards are not currently scheduled for ES6. Have all decisions in this regard already been made?
Pretty much -- the guards and trademarking proposals came in at the May meeting, kind of last minute. We could start worrying about Dart and try to revive them. Would that be good for ES6? Would it actually "work" in some sense to keep up with Dart?
I think it would be classic mission creep or "jump", and a disorderly way to manage the pipeline of stuff flowing into JS. Better to get some prototypes going based on the strawman proposals, user-test them, and come back to TC39 better informed. If Dart also provides insights, great.
If type guards are not ready yet, then the decision is easy – don’t do it. Otherwise, getting it in seems worth the hassle (more than for any other strawman proposal). This decision also depends on how long ES7 is going to take.
Regarding tool support, adding type annotations in comments (JSDoc-style) should be enough, mid-term.
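For instance, JSDoc-style annotations of the kind the Closure Compiler understands; a tool can check them without any change to the language itself:

```javascript
/**
 * @param {Array.<number>} xs
 * @param {function(number): number} f
 * @return {Array.<number>}
 */
function map(xs, f) {
  var out = [];
  for (var i = 0; i < xs.length; i++) out.push(f(xs[i]));
  return out;
}

console.log(map([1, 2, 3], function (x) { return x * 2; })); // [ 2, 4, 6 ]
```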
For IDEs, it would really help if there weren’t so many competing standards for even the most basic things in the JS world. For example:
(1) Doc comments (2) Unit tests (3) Type annotations [ Will become language features: inheritance API, module API ]
Is there really no way to standardize these? Some kind of best practice, recommended by TC39? The current state of affairs really hurts JavaScript. If Java can standardize (1) and have the quasi-standard JUnit for (2), why can't JavaScript?
Types are not only "desirable" to borrow concepts from current IDEs. We know from DotNET that a language running in a VM can be pretty fast, close to a native language. But it has to be simple for the computer. JavaScript is simple for the developer, but sometimes its flexibility makes it impossible to "optimize" code properly. And it makes JavaScript slower.
Or, to be exact, it doesn't allow us to make 95% of our code faster, because doing so would break the other 5%. The more a compiler understands what you're doing, the more confident it will be that optimizing is safe. Types may be part of the data a compiler uses. They don't need to be the only part, but they can be a very important one.
At a time when optimization, fluidity and speed are on everyone's lips, it may be worth considering the addition of types to ECMAScript. DotNET has shown that you can mix strongly and dynamically typed code in the same program and that it can work seamlessly.
In ES6, "with" and indirect eval will be prohibited, and direct eval will not be able to create new variables in the current scope. That's huge. We're losing features, but it will allow a range of optimizations that were impossible before. Adding types to ECMAScript could do the same. If you don't want optimized code, you don't have to opt in. But giving developers the possibility to choose a faster solution should be important to this discussion group.
On 12 September 2011 16:31, François REMY <fremycompany_pub at yahoo.fr> wrote:
JavaScript is simple for the developer, but sometimes its flexibility makes it impossible to "optimize" code properly. And it makes JavaScript slower.
I made this graph earlier this year, coincident with the Firefox 4 release: www.page.ca/~wes/SpiderMonkey/Perf/sunspider_history.png
"Look at how much faster JS is now than it was just a few years ago! It's practically a logarithmic curve!"
...and yet, Google's Crankshaft (Dec 2010) was still faster than Firefox 4's SpiderMonkey, showing that there is still room for improvement in the space!
Let's not write the ES5 perf-increase obituary just yet.
From: "François REMY" <fremycompany_pub at yahoo.fr> To: <es-discuss at mozilla.org> Date: Mon, 12 Sep 2011 22:31:17 +0200 Subject: Re: IDE support? Types are not only "desirable" to borrow concepts from current IDEs. We know from DotNET that a language running in a VM can be pretty fast, close to a native language. But it has to be simple for the computer. JavaScript is simple for the developer, but sometimes its flexibility makes it impossible to "optimize" code properly. And it makes JavaScript slower.
Or, to be exact, it doesn't allow us to make 95% of our code faster, because doing so would break the other 5%. The more a compiler understands what you're doing, the more confident it will be that optimizing is safe. Types may be part of the data a compiler uses. They don't need to be the only part, but they can be a very important one.
Ah, but let's be exact then: what proportion of most Web applications will benefit from type-based performance improvements?
The reason I ask is that the diagrams that Wes posted show two things. One is obvious: the dramatic improvement in JS. Yay! Congrats to all involved!
The other is less obvious: further improvements in JS performance are less and less important. Not "unimportant" and not "insignificant". Just less important, because total performance depends on many factors, of which JS -- for most pages most of the time -- may no longer be the critical one. And even when JS is important, type-based improvements will only be a small factor. In the big picture, such a fundamental change may not be very valuable.
Of course I am guessing, and having real performance analysis numbers would be excellent information.
jjb
On Mon, Sep 12, 2011 at 5:17 PM, Axel Rauschmayer <axel at rauschma.de> wrote:
Regarding tool support, adding type annotations in comments (JSDoc-style) should be enough, mid-term.
For IDEs, it would really help if there weren’t so many competing standards for even the most basic things in the JS world. For example:
(1) Doc comments (2) Unit tests (3) Type annotations [ Will become language features: inheritance API, module API ]
Is there really no way to standardize these? Some kind of best practice, recommended by TC39? The current state of affairs really hurts JavaScript. If Java can standardize (1) and have the quasi-standard JUnit for (2), why can't JavaScript?
-- Dr. Axel Rauschmayer
CommonJS is a great place to start. www.commonjs.org
They're already standardizing Promises and Asynchronous Module Loaders. They have also started with Unit Tests. And the great part is that the community is actually listening and taking it into account. Dojo 2 will use CommonJS modules and I wouldn't be surprised if jQueryUI went that way (I already have a draft implementation for it).
Juan
When you see Microsoft and Apple both spending a lot of time just to get "scrolling a list" smooth and at 60fps, I assume there will always be a need for more performance. That not all applications will require it, I can understand. But, again, leave the developer free to decide whether they need types or not.
Let me explain something (maybe you all know it, but you just didn't notice how important it is).
function(x) {
x.data++;
}
This code is slow by nature. Because you don't know what kind of data "x" will be, you need to call "ToObject" before using it. Then you need to search through a Dictionary(Of String -> PropertyDescriptor) to get the relevant property. When you've found your property in the dictionary, you can increment its value. Even with a BTREE and optimized objects, you'll end up performing far more operations than just incrementing an INTEGER variable as in C++.
However, if you had written
function(x : MyType) {
x.data++;
}
the compiler would know that the "data" property of a "MyType" instance is stored 4 bytes after where the instance pointer is located in memory. The compiler knows it's an integer property and it can issue very few operation statements in native code:
LOAD *x; LOAD 4; ADD; INCREMENT-REF;
The difference between those two snippets is minimal for the developer, but it can improve speed a lot. Optimizing RegExps, Dates, and even number operations is all fine for getting a better score on the SunSpider test. But it will never make ECMAScript a "fast" language, just a less slow one.
François
On Sep 12, 2011, at 12:22 PM, John J Barton wrote:
On Mon, Sep 12, 2011 at 12:00 PM, <es-discuss-request at mozilla.org> wrote:
Some of the discussion on this thread amounts to "IDEs work great for typed languages so let's make JS typed". What if we started with "What would be great for JavaScript developers"? Then we would not waste a lot of time talking about static analysis. It's the wrong tool.
Why are you assuming that conclusion already? Why not answer your own question "What would be great for JavaScript developers?" and if the answer includes type inference, great?
On Sep 12, 2011, at 12:35 PM, Mikeal Rogers wrote:
I was super confused as to why good tools and IDEs would require typing.
A bunch of typed languages do have great IDEs but I feel like that's mainly because programming in a typed language can be considerably more difficult which puts more emphasis on creating great tools to help deal with it.
Could be, although a lot of the IntelliJ fans talk not about types per se, but about knowing what member names mean, e.g., which concrete method implementations might obj.foo() actually call? That uses type declarations, for sure, but it need not.
This reminds me of the ES4 discussion around typing being a requirement of making the language more performant, that clearly wasn't true.
That was an AS3 claim, not an ES4 discussion. I explicitly disavowed it during ES4 days here:
brendaneich.com/2007/11/my-media-ajax-keynote
I'm going to throw out a crazy idea. Maybe the language is fine in this regard, and creating better tools is done by creating better tools, not by changing the language. Maybe changing the language has a negative effect on creating better tools, because it increases the surface area and complexity those tools have to deal with.
+1
I'm not saying changing the language is universally bad, only that some problems are better solved outside of ECMA and that there is a big community out here that is working to solve this problem and they don't need anything from ECMA to do it.
Tools can do all sorts of amazing things with JS source: semi-static analysis, dynamic inspection of the live object graph and active code, combinations and variations. Tools don't have to run super-fast to avoid regressing benchmarks. We're early in JS tool evolution, we should not hack up the language prematurely -- your point is on target.
On Sep 12, 2011, at 11:58 PM, François REMY wrote:
Let me explain something (maybe you all know it, but you just didn't notice how important it is).
function(x) { x.data++; }
This code is slow by nature. Because you don't know what kind of data "x" will be, you need to call "ToObject" before using it. Then you need to seek through a Dictionary(Of String -> PropertyDescriptor) to get the relevant property. When you've found your property in the dictionary, you can increment its value. Even with a BTREE and optimized objects, you'll end up performing far more operations than just incrementing an INTEGER variable as in C++.
However, if you had written
function(x : MyType) { x.data++; }
the compiler would know that the "data" property of a "MyType" instance is stored 4 bytes after where the instance pointer is located in memory. The compiler knows it's an integer property and it can issue very few operation statements in native code:
LOAD *x; LOAD 4; ADD; INCREMENT-REF;
The difference between those two snippets is minimal for the developer, but the second can improve the speed a lot. Optimizing RegExps, Dates, and even number operations, all of that is fine to get a better score in the SunSpider test. But it will never make ECMAScript a "fast" language. Just a less slow one.
You are simply way out of date on JS optimizing VMs, which (based on work done with Self and Smalltalk) all now use "hidden classes" aka shapes and polymorphic inline caching to optimize to exactly the pseudo-assembly you show, prefixed by a short (cheap if mispredicted) branch.
What's more, SpiderMonkey bleeding edge does semi-static type inference, which can eliminate the guard branch.
Please don't keep repeating out of date information about having to "seek through a dictionary". It simply isn't true.
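A sketch of how hidden classes play out at the JS level (illustrative only; actual heuristics vary by engine):

```javascript
// Objects built with the same property layout share a hidden class (shape),
// so an optimizing JIT can turn o.x into a shape check plus a fixed-offset
// load -- no dictionary lookup on the fast path.

function Point(x, y) {
  this.x = x; // properties added in the same order => one shared shape
  this.y = y;
}

function sumX(objs) {
  var total = 0;
  for (var i = 0; i < objs.length; i++) {
    total += objs[i].x; // this access site stays monomorphic for Points
  }
  return total;
}

var points = [new Point(1, 2), new Point(3, 4), new Point(5, 6)];
console.log(sumX(points)); // 9

// Mixing layouts at the same access site makes the inline cache polymorphic,
// pushing the engine back toward slower multi-shape dispatch.
var mixed = [{ x: 1 }, { x: 2, y: 0 }, { y: 0, x: 3 }];
console.log(sumX(mixed)); // 6 -- still correct, just harder to optimize
```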
Okay, maybe you don't have to seek through a dictionary anymore. But still, the performance overhead is higher than you would want. Because it's possible to achieve no overhead, even a small overhead is an overhead to be eliminated. The fastest code is the code you don't have to execute.
A compiler may be very dumb; if I say something is true, it knows it's true. On the other hand, a compiler can be as smart as you want; there are things it will never be able to guess.
And even if the compiler is able to guess (and makes things faster using hidden classes and all the other stuff modern VMs implement), it will take CPU time to guess things I already knew at the moment I started writing my code. And each time my script runs, the browser will need to figure some stuff out. What a waste of energy, in a world where power efficiency is key.
-----Original Message-----
There are some half dozen or more papers on Javascript type inference or static analysis (hmm, is there a central wiki or bibliography where we could record and collect such JS-related references? should I post here what I've found so far?).
If you haven't posted them already, I'd love to see more of them.
Ok, here are some I've found so far (more than I remembered:-). I've tried to add urls, but haven't checked those:
TAJS: Type Analyzer for JavaScript
http://www.brics.dk/TAJS/
Modeling the HTML DOM and Browser API
in Static Analysis of JavaScript Web Applications
Jensen, Madsen, Møller, 2011
http://cs.au.dk/~amoeller/papers/dom/
Interprocedural Analysis with Lazy Propagation
Jensen, Møller, Thiemann, 2010
http://users-cs.au.dk/amoeller/papers/lazy/
Type Analysis for JavaScript
Jensen, Møller, Thiemann, 2009
http://users-cs.au.dk/amoeller/papers/tajs/
Recency Types for Analyzing Scripting Languages
Heidegger, Thiemann, 2010
https://proglang.informatik.uni-freiburg.de/JavaScript/recency.pdf
Towards a Type System for Analyzing JavaScript Programs
Thiemann, 2005
https://mailserver.di.unipi.it/ricerca/proceedings/ETAPS05/papers/3444/34440408.pdf
Type Inference for JavaScript
Anderson, 2006
http://pubs.doc.ic.ac.uk/chrisandersonphd/
Towards Type Inference for JavaScript
Anderson, Giannini, Drossopoulou, 2005
http://pubs.doc.ic.ac.uk/typeinferenceforjavascript-ecoop/
Staged Information Flow for JavaScript
Chugh, Meister, Jhala, Lerner, 2009
http://goto.ucsd.edu/~rjhala/papers/staged_information_flow_for_javascript.html
An Empirical Study of Privacy-Violating Information Flows
in JavaScript Web Applications
Jang, Jhala, Lerner, Shacham, 2010
http://goto.ucsd.edu/~rjhala/papers/an_empirical_study_of_privacy_violating_flows_in_javascript_web_applications.html
CFA2: a Context-Free Approach to Control-Flow Analysis
Vardoulakis, Shivers, 2010 (used in DoctorJS)
http://www.ccs.neu.edu/home/dimvar/papers/cfa2-NU-CCIS-10-01.pdf
Gulfstream: Incremental Static Analysis for
Streaming JavaScript Applications
Livshits, Guarnieri, 2010
http://research.microsoft.com/pubs/118310/paper.pdf
GATEKEEPER: Mostly Static Enforcement of Security and
Reliability Policies for JavaScript Code
Guarnieri, Livshits, 2009
http://research.microsoft.com/en-us/um/people/livshits/papers/pdf/usenixsec09a.pdf
JSTrace: Run-time Type Discovery for JavaScript
Saftoiu, 2010
http://www.cs.brown.edu/research/pubs/theses/ugrad/2010/saftoiu.pdf
Polymorphic Type Inference for Scripting
Languages with Object Extensions
Zhao, 2011
http://jiangxi.cs.uwm.edu/publication/dls2011.pdf
RATA: Rapid Atomic Type Analysis by Abstract Interpretation.
Application to JavaScript optimization.
Logozzo, Venter, 2010
http://research.microsoft.com/pubs/115734/aitypes.pdf
An Analytic Framework for JavaScript
van Horn, Might, 2011
http://www.ccs.neu.edu/home/dvanhorn/pubs/vanhorn-might-preprint11.pdf
Points-to Analysis for JavaScript
Dongseok Jang, Kwang-Moo Choe, 2009
http://cseweb.ucsd.edu/~d1jang/papers/sac09.pdf
Language-Based Isolation of Untrusted JavaScript
Sergio Maffeis, Mitchell, Taly, 2009
http://www.stanford.edu/~jcm/papers/csf09-techrep.pdf
An Operational Semantics for JavaScript
Maffeis, Mitchell, Taly, 2008
http://www.stanford.edu/~jcm/papers/aplas08-camera-ready.pdf
The Essence of JavaScript
Guha, Saftoiu, Krishnamurthi, 2010
http://www.cs.brown.edu/research/plt/dl/jssem/v1/gsk-essence-javascript-r5.pdf
Using Static Analysis for Ajax Intrusion Detection
Guha, Krishnamurthi, Jim, 2009
http://sca2002.cs.brown.edu/people/arjun/public/intrusion-detection.pdf
Typing Local Control and State using Flow Analysis
Guha, Saftoiu, Krishnamurthi, 2011
http://www.cs.brown.edu/~sk/Publications/Papers/Published/gsk-flow-typing-theory/paper.pdf
JavaScript Instrumentation for Browser Security
Yu, Chander, Islam, Serikov, 2007
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.183&rep=rep1&type=pdf
Automated Analysis of Security-Critical JavaScript APIs
Taly, Erlingsson, Mitchell, Miller, Nagra, 2011
http://theory.stanford.edu/~ataly/Papers/sp11.pdf
Trace-based Just-in-Time Type Specialization for Dynamic Languages
Gal et al., 2009
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.148.349&rep=rep1&type=pdf
Also useful for setting the stage are general studies like:
The Eval that Men Do
A Large-scale Study of the Use of Eval in JavaScript Applications
Richards, Hammer, Burg, Vitek, 2011
http://www.cs.washington.edu/homes/burg/files/eval-ecoop-2011-paper.pdf
An Analysis of the Dynamic Behavior of JavaScript Programs
Richards, Lebresne, Burg, Vitek, 2010
http://www.cs.washington.edu/homes/burg/files/dynjs-pldi-2010-paper.pdf
Not to forget moving from prototypes to practice:
Introduce Javascript type inference
https://bugzilla.mozilla.org/show_bug.cgi?id=557407
I've omitted some performance-oriented general studies and implementation papers, as well as presentations where I've only seen slides, focusing on publications somewhat related to (static or dynamic) analysis. No claims of accuracy, completeness, or relevance are made - further references or corrections are welcome!
Claus clausreinke.github.com
Improved, statically checkable, types would also help mitigate Javascript's silent failures (silently breaking code when trying to refactor, fixing bugs, or adding features). Unless the type system is fairly advanced, type safety only expresses a thin veneer of invariants, but coverage is total, and automatic.
I'm sorry, but as much as I want a type system, the idea of a "thin veneer" that imposes an ongoing tax at all declaration and usage sites is really distasteful. JS should do better than Java and its ilk have managed.
Why are you trying to misinterpret me? My inspiration for a useful type system would be Haskell (certainly not Java), and it is likely that any JS type system will be less advanced. So it will be able to express fewer program properties, hence only replace a weak/thin level of testing. But if done right, that level of testing will still cover the whole program, without coders having to include needless declarations, because type inference will connect the dots.
That also answers another common objection: useful type systems do not render testing superfluous, but they allow test suites to focus on more interesting aspects of your code, because the basics are covered by the type system (and that coverage includes source and path coverage).
Claus clausreinke.github.com
On Sep/12/2011 22:19, Rick Waldron wrote:
On Mon, Sep 12, 2011 at 11:38 AM, Claus Reinke <claus.reinke at talk21.com> wrote:
I'm hopeful that type inference combined with class syntax and an (eventual) traits system will get us "there", so that you can use structural type tests for enforcement and that the IDE can get the benefit of hints through inference. Improved, statically checkable, types would also help mitigate Javascript's silent failures (silently breaking code when trying to refactor, fixing bugs, or adding features).
Unit tests with comprehensive and thorough code coverage do this today.
Sure. The question is why should we do it if a machine can do it? That would mean we can concentrate on writing business logic tests instead of tests that verify type matches.
For features such as expansion help (e.g. when you type a dot after a parameter name inside a function), you still need static analysis.
But your point stands: JS IDEs can and should go beyond “edit-compile-debug” (which has become so mainstream that many people forget about languages such as Common Lisp and Smalltalk).
On 13 September 2011 09:33, Brendan Eich <brendan at mozilla.com> wrote:
You are simply way out of date on JS optimizing VMs, which (based on work done with Self and Smalltalk) all now use "hidden classes" aka shapes and polymorphic inline caching to optimize to exactly the pseudo-assembly you show, prefixed by a short (cheap if mispredicted) branch.
What's more, SpiderMonkey bleeding edge does semi-static type inference, which can eliminate the guard branch.
Please don't keep repeating out of date information about having to "seek through a dictionary". It simply isn't true.
True. On the other hand, all the cleverness in today's JS VMs neither comes for free, nor can it ever reach the full performance of a typed language.
- There are extra costs in space and time to doing the runtime analysis.
- Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler.
- A big problem is predictability: it is a black art to get the best performance out of contemporary JS VMs.
- The massive complexity that comes with implementing all this affects stability.
- Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access.
- Type inference might mitigate some more of these cases, but will be limited to fairly local knowledge.
- Omnipresent mutability is another big performance problem in itself, because most knowledge is never stable.
So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :)
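The "omnipresent mutability" point can be illustrated in a few lines (norm is an invented example; no engine specifics are claimed):

```javascript
// Nothing in JS guarantees that a property's type, or an object's layout,
// stays fixed over time.

function norm(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y);
}

var v = { x: 3, y: 4 };
console.log(norm(v)); // 5 -- a VM can specialize norm for {x: number, y: number}

// But any later statement can invalidate that specialization:
v.z = 0;      // layout change: v gets a new hidden class
v.x = "3";    // type change: x is no longer reliably a number
// (changing v's prototype mid-run has the same effect)

// The engine must guard for this on every call, or deoptimize and
// recompile when a guard fails -- a cost a statically typed language
// never pays for its typed code.
```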
On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
True. On the other hand, all the cleverness in today's JS VMs neither comes for free, nor can it ever reach the full performance of a typed language.
The AS3 => ES4 argument for optional types. Been there, done that.
- There are extra costs in space and time to doing the runtime analysis.
- Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler.
These are bearable apples to trade against the moldy oranges you'd make the world eat by introducing type annotations to JS. Millions of programmers would start annotating "for performance", i.e., gratuitously, making a brittle world at high aggregate cost.
The costs borne in browsers by implementors and (this can hit users, but it's marginal) at runtime when evaluating code are less, I claim.
- A big problem is predictability: it is a black art to get the best performance out of contemporary JS VMs.
This is the big one in my book. Optimization faults happen. But can we iterate till flat?
- The massive complexity that comes with implementing all this affects stability.
This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.
- Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access.
That's not the ideal case. Brian Hackett's type inference work in SpiderMonkey can eliminate the overhead here. Check it out.
- Type inference might mitigate some more of these cases, but will be limited to fairly local knowledge.
s/might/does/ -- why did you put type inference in a subjunctive mood? Type inference in SpiderMonkey (Firefox nightlies) is not "local".
- Omnipresent mutability is another big performance problem in itself, because most knowledge is never stable.
Type annotations or (let's say) "guards" as for-all-time monotonic bounds on mutation are useful to programmers too, for more robust programming-in-the-large. That's a separate (and better IMHO) argument than performance. It's why they are on the Harmony agenda.
So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :)
Does JS need to be as fast as Java? Would half as fast be enough?
Speaking pragmatically, for myself and my unusual (server-side) environment:
On 13 September 2011 10:48, Brendan Eich <brendan at mozilla.com> wrote:
Does JS need to be as fast as Java? Would half as fast be enough?
If it's compute-bound then that's plenty.
Provided we develop so that we can scale across cores, I can double my compute this month for about the same dollar cost as 0.1% programmer-time. My estimate is that type annotations would be far more expensive; as we don't generally have type-related bugs in our JS code, spending time annotating only for the sake of execution speed would be a poor decision.
On Tue, Sep 13, 2011 at 12:26 AM, Brendan Eich <brendan at mozilla.com> wrote:
Why are you assuming that conclusion already? Why not answer your own question "What would be great for JavaScript developers?" and if the answer includes type inference, great?
I'm assuming that conclusion already because the current tools for JS development are so incredibly lame that wasting time on static analysis -- which we know does not work without changing the language -- defies common sense.
jjb
On Tue, Sep 13, 2011 at 12:03 PM, John J Barton <johnjbarton at johnjbarton.com> wrote:
I'm assuming that conclusion already because the current tools for JS development are so incredibly lame that wasting time on static analysis -- which we know does not work without changing the language -- defies common sense.
I think you may be a little confused. Type "inference" means inferring types without annotations. Thus, it can be done without changing the language and can be useful for vm implementers and tool developers alike. Today. Sounds pretty common-sensical to me.
On Sep 13, 2011, at 9:03 AM, John J Barton wrote:
I'm assuming that conclusion already because the current tools for JS development are so incredibly lame that wasting time on static analysis -- which we know does not work without changing the language
Ok, your assumed conclusion rests on a prior assumption:
static analysis ... we know does not work without changing the language
Evidence?
It seems to me you have not studied either doctorjs.org, which is nodejs based, the code is on github (it's all JS, essentially a fork of narcissus via Patrick Walton's jsctags):
or Brian Hackett's work in SpiderMonkey (Patrick Walton made a JS version of it, should be easier to study:
Really, asserting an assumption to back up an assumed conclusion?
Or just do some live coding at zeonjs.com
There are key points in Breton's post below that should not be lost in a debate on whether or not dynamic JS performance can asymptotically approach that of a hypothetical class-based statically typed language (something that you might call simply "J").
Success with dynamic languages requires understanding that they are different beasts than statically typed languages. Great performance and great development tools for them are not achieved by treating them like poorly designed or ill-behaved statically typed languages. The recently successful JS "engine" implementors already knew or ultimately learned this, and we have really turned the corner on JS performance (now if they can just get their acts together WRT garbage collection...).
We need to do the same thing with our JS tools. All of the great dynamic language IDEs (which, BTW, preceded and largely inspired the modern static language IDEs) were "live" environments. They didn't just provide a live debugging experience; it was a live authoring experience. You developed code as a dynamic running program. They truly supported incremental, interactive development. Developers operate in a continuous write-a-little/run-a-little cycle. The tools use information obtained from an actual running program to provide a great developer experience. Regarding Axel's case of "dot completion": in a real live development experience you would probably be writing that code within the context of a function call that is suspended because the function doesn't actually exist yet or its body has not been filled in. As you write that code, the tools have visibility of the actual arguments that were passed to the suspended call as one of the sources of information for suggesting dot completions.
It is probably also worth noting that the language "engines" the successful live dynamic language environments were built upon were engineered all the way down to their lowest levels to support the dynamic program modification capabilities needed for their live developer experience. In many cases it makes sense to design algorithms around immutable objects, but such objects still need to be mutable by the tools of a live development environment.
Another difference is that static language IDEs generally try to provide perfect information. If one provides a dot-completion list then the expectation is that the list is contextually exhaustive and accurate: these and only these names can be used after this dot. Dynamic language tools would offer an advisory list: here are some names that, based upon the tool's current understanding of the program, are likely to be meaningful after this dot. Static tools operate under the illusion that programs are a fixed and closed world that can be perfectly understood. Some people would say that such programs tend to be "fragile" in the face of change. Dynamic language tools work in a world where programs are malleable and continually evolving. Which is a closer reflection of the actual web?
To do great things with a language you have to love the language. The great JS development experiences of the future will be built by a new generation of JS lovers who aren't constrained by static language notions of how development tools need to work.
On Sep 13, 2011, at 1:02 AM, François REMY wrote:
Okay, maybe you don't have to seek through a dictionary anymore. But still, the performance overhead is higher than you would want. Because it's possible to achieve no overhead, even a small overhead is an overhead to be eliminated. The fastest code is the code you don't have to execute.
You're right, small constant factors and terms matter asymptotically.
More apples-to-oranges, but we have to weigh and decide: JS programmer productivity is not clearly helped by adding type annotations. We worked through a number of scenarios for ES4:
- Typed code calling untyped code: dynamic barriers required to enforce invariants, this'll cost.
- Untyped code calling typed code: spot-checks (remember "like") or barriers required, ditto.
- The easy one, typed calling typed, but then you've probably harmed productivity by over-annotating.
The gravity well is steep. AS3 went far down it and its users over-annotated. Different economics and tooling on the web, so maybe we'll be luckier -- or perhaps an attempt to sell optional type annotations will simply fail to close -- no widespread adoption.
This all assumes we have a sound optional type system. We never had one for ES4. It's a research area. TC39 is not doing research, remember?
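A rough sketch of what such a typed/untyped barrier amounts to, written as a hand-rolled runtime guard (checkPoint and distance are hypothetical names, not an actual ES4 mechanism):

```javascript
// If distance were declared to take {x: number, y: number} arguments,
// calls arriving from untyped code would need runtime checks like these
// to preserve the typed region's invariants -- checks a fully typed
// language elides at compile time.

function checkPoint(p) {
  if (p === null || typeof p !== "object" ||
      typeof p.x !== "number" || typeof p.y !== "number") {
    throw new TypeError("expected {x: number, y: number}");
  }
  return p;
}

function distance(a, b) {
  a = checkPoint(a); // barrier at the untyped -> typed boundary
  b = checkPoint(b);
  var dx = a.x - b.x, dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy);
}

console.log(distance({ x: 0, y: 0 }, { x: 3, y: 4 })); // 5
```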
A compiler may be very dumb; if I say something is true, it knows it's true. On the other hand, a compiler can be as smart as you want; there are things it will never be able to guess.
Yes, but does JS need to run almost as fast as C?
BTW, for well-behaved benchmarks using typed arrays, our tracing JIT does run about as fast as C. So really, you have to stipulate workload. For all workloads, must JS run almost as fast as C? No way!
Great post, Allen!
On 13 September 2011 15:01, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
We need to do the same thing with our JS tools. All of the great dynamic language IDEs (that, BTW, preceded and largely inspired the modern static language IDEs) were "live" environments. They didn't just provide a live debugging experience, it was a live authoring experience. You developed code as a dynamic running program. They truly support incremental, interactive development. Developers operate in a continuous write a little/run a little cycle. The tools use information obtained from an actual running program to provide a great developer experience.
This reminds me of one of my constant modern-day-language laments: no development-time interactivity.
I cut my very first programming teeth during the 8-bit microcomputer era, and learned a wide variety of BASICs (Sinclair, Commodore, AppleSoft, Atari, GW-). A few years on and I'd moved to environments like Logo, Quick Basic 4.5 (not Visual Basic), before heading for less dynamic (but faster) pastures. I've spent time with unusual IDEs like Garry Kitchen's Game Maker, Opcode's MAX, and ControllerMate IV.
All of these environments -- even if the languages themselves were awful -- had one thing in common which I love: you just type stuff in, and it goes. You can try a hundred solutions as fast as you can google for one. You get to explore the machine. You learn by doing.
These are great traits, and one of the reasons I love JS is that I believe I can recapture some of that ... agility ... that has been eroded over the years.
When I write shell programs, and JS programs, I keep an extra terminal window open to a spare shell or a JS REPL. I try stuff. Stuff that works, I copy into my program. Then I run my program - which happens quickly, because the compiler is super-fast and the program is a contained entity which probably runs in a dynamically configured environment.
I'm a *highly* productive shell programmer, and a very productive JS programmer. I spent more than a decade full time hacking C, and I frequently write JS programs which are superior to equivalent C programs, even when they are both manipulating the same underlying OS calls, because I can test my JS incrementally (not to mention prototypal inheritance, superior flow control, etc.).
So, even though C is absolutely my forte, I prefer to hack in JS these days because I am so much more productive. That increase in productivity is due in no small part to the dynamic language development experience you mentioned above....
....and I believe we have only barely scratched the surface. I can't wait to see what improvements will be brought to the ECMAScript IDE in the next few years.
Just to respond to this: JavaScript is one of my favorite languages. I like it. I like the fact that I have a freedom hardly comparable to anything else. But having the possibility to use static types where they're really useful doesn't mean we'll lose the ability to continue our work as we do today.
It’s just one more tool in your toolbox. Not a complete language change.
From: Allen Wirfs-Brock Sent: Tuesday, September 13, 2011 9:01 PM To: Axel Rauschmayer Cc: es-discuss ; Breton Slivka Subject: Re: IDE support?
On Sep 13, 2011, at 6:44 , Breton Slivka wrote:
Forgive me if this has already been talked about, but is it possible
that trying to improve static analysis of javascript is barking up the
wrong tree? I think a better approach is to write your IDE in
javascript and have it running inside a context that has access to the
running JS environment. Think about the precedents set by Smalltalk
and Self, and even Forth, with hints of this happening in the modern
browser "debugger" environments which now have some rudimentary code
completion features.
A JS IDE should be a running environment that can reflect on changes
to the environment dynamically and "save" them back out to your source
files, via some diff like protocol which can be rather
straightforwardly implemented with proxies, akin to how a self program
was essentially an in memory image core dump.
Having your IDE hooked up to a running JS context gets around a lot of
your problems with static analysis, including even dealing with
dynamic property access, dynamically constructed objects, prototype
chains and inheritences, and function name aliases.
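The "diff-like protocol" Breton mentions can indeed be approximated with proxies. A minimal sketch, assuming only the standard ES6 Proxy API (the `recordChanges` helper is hypothetical): wrap a live object so every mutation is logged, and a tool could later replay the log back out to source files.

```javascript
// Wrap a live object in a Proxy that records every mutation,
// so a tool could later "save" the changes back to source.
function recordChanges(target) {
  var log = [];
  var proxy = new Proxy(target, {
    set: function (obj, key, value) {
      log.push({ op: "set", key: key, value: value });
      obj[key] = value;
      return true;
    },
    deleteProperty: function (obj, key) {
      log.push({ op: "delete", key: key });
      delete obj[key];
      return true;
    }
  });
  return { proxy: proxy, log: log };
}

var rec = recordChanges({ color: "red" });
rec.proxy.color = "blue";
delete rec.proxy.size; // deletions are recorded too
// rec.log is now [{op:"set", key:"color", value:"blue"},
//                 {op:"delete", key:"size"}]
```

This only captures mutations made through the proxy, which is exactly the restriction a live editing environment can enforce by construction.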
Well, you're right to point out the difficulty encountered while stepping "in" and "out" of a typed environment. I know I'm fairly limited in my understanding of JavaScript VMs because they've become both wonderful and very complex tools at the same time, but I've had the chance to understand the .NET CLR pretty deeply. In fact, when you use the "Object" type for a parameter, you must know that sending an integer or a boolean to the function will require something called "boxing" (I think it's the kind of problem you raised in your mail).
It's true that it's rather uncommon in .NET since nearly everything is strongly typed. But it comes at a cost, I understand that. Depending on the cost of "boxing" and "unboxing", it may be a worse overhead than the initial one (the one where nothing would be typed). But, well, this problem must already be taken into account in the current implementation of ECMAScript, right? Since you can call methods on integers or booleans, there must be some kind of boxing involved somewhere.
Anyway, I feel like annotating is needed, somewhere. For example, Visual Studio does know that the return value of document.getElementById is an HTMLElement. Since it doesn't have access to the browser's own code, this must be hard-coded somewhere. It's the same for framework functions. You don't want your IDE to analyse the whole jQuery framework each time you use it. If it has simple annotations on its functions' return values and parameters, this task can be done when needed and very quickly. Almost surgically.
An example: in VB.NET, if you have a function taking an Enum as a parameter, IntelliSense will offer you the flag names defined in the enum as options. But in your code, you may want to treat the value as an int, and there's no way, from the code, to guess it's coming from an enum if that wasn't stated in the parameter type.
So, yes, annotating types can help a lot. Inference can do a lot, too. I almost never specify the types of my variables or my lambdas' parameters, to name a few. From the information gathered from a small amount of annotation, the compiler can infer those things for me, right as I'm typing the code. If I make changes to my program, it will be able to "update" the type inference and check for incompatibilities. All this was a pain before type inference was enabled.
Static typing, type inference, and virtual execution are all very useful tools, and each of them solves almost the same problem. But each of them has cases it solves pretty well and others it doesn't. Making them work together is the dream we should have. Actually, working together is the motto of the web as a whole...
-----Original Message-----
Just to point out that Web Inspector in the Chrome browser has run time dot completions (as does Firebug) and it has live JS and CSS editing with save to file. I won't defend the user experience, that needs work.
I tried and failed to convince one IDE team that starting from the runtime tools was the short path to better Web dev.
jjb
On Sep 13, 2011, at 12:26 AM, Brendan Eich wrote:
On Sep 12, 2011, at 12:22 PM, John J Barton wrote:
On Mon, Sep 12, 2011 at 12:00 PM, <es-discuss-request at mozilla.org> wrote:
Some of the discussion on this thread amounts to "IDEs work great for typed languages so let's make JS typed". What if we started with "What would be great for JavaScript developers"? Then we would not waste a lot of time talking about static analysis. It's the wrong tool.
Why are you assuming that conclusion already? Why not answer your own question "What would be great for JavaScript developers?" and if the answer includes type inference, great?
John and I corresponded privately and we agreed that static is less than static+dynamic. That is something I tend to ass-ume, being an implementor (SpiderMonkey does analysis when compiling, and of course lots of runtime feedback-based code generation).
So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).
I am always amused by the continuing demands for more performance. The only real advantage of performance as a major metric is that it is relatively easy to measure.
If performance is your number one goal, then the only languages you should consider are assembler and machine language. :-)
On the other hand, if you like safety, security, maintainability, understandability etc., then recognize that these features have associated costs.
On 9/13/11 at 7:48, brendan at mozilla.com (Brendan Eich) wrote:
On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
- A big problem is predictability, it is a black art to get the best performance out of contemporary JS VMs.
This is the big one in my book. Optimization faults happen. But can we iterate till flat?
A set of rules a developer interested in performance can use would be helpful. Particularly if they applied to more than one implementation. :-)
- The massive complexity that comes with implementing all this affects stability.
This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.
We could get better stability with simpler, less performant VMs. Some users might prefer the increased stability and security such a VM would offer.
Cheers - Bill
Bill Frantz | Periwinkle | (408) 356-8506 | 16345 Englewood Ave, Los Gatos, CA 95032 | www.pwpconsult.com | "Privacy is dead, get over it." - Scott McNealy
On Sep 13, 2011, at 3:49 PM, Bill Frantz wrote:
I am always amused by the continuing demands for more performance. The only real advantage of performance as a major metric is that it is relatively easy to measure.
If performance is your number one goal, then the only languages you should consider are assembler and machine language. :-)
On the other hand, if you like safety, security, maintainability, understandability etc., then recognize that these features have associated costs.
I agree, of course. Still, it's surprising that JS, which is quite fast, is still considered the performance bottleneck. Profiles I see, ignoring JS-heavy benchmarks that don't match much real-world code, show we have work cut out elsewhere: DOM, layout, rendering...
Sure, putting game logic and physics engine in JS will hurt. Indeed we should be using the short-vector units and massively parallel GPUs (safely) from JS. More on this very soon.
On 9/13/11 at 7:48, brendan at mozilla.com (Brendan Eich) wrote:
On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
- A big problem is predictability, it is a black art to get the best performance out of contemporary JS VMs.
This is the big one in my book. Optimization faults happen. But can we iterate till flat?
A set of rules a developer interested in performance can use would be helpful. Particularly if they applied to more than one implementation. :-)
That will require iteration too, just to get VMs into alignment (assuming their maintainers are willing to do the work, which may mean converging on optimization hierarchy).
- The massive complexity that comes with implementing all this affects stability.
This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.
We could get better stability with simpler, less performant VMs. Some users might prefer the increased stability and security such a VM would offer.
Not in the large in today's browser market. A niche market, perhaps. Very niche.
On 13 September 2011 16:48, Brendan Eich <brendan at mozilla.com> wrote:
On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
- There are extra costs in space and time to doing the runtime analysis.
- Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler.
These are bearable apples to trade against the moldy oranges you'd make the world eat by introducing type annotations to JS. Millions of programmers would start annotating "for performance", i.e., gratuitously, making a brittle world at high aggregate cost.
The costs borne in browsers by implementors and (this can hit users, but it's marginal) at runtime when evaluating code are less, I claim.
Depends on how good you want to optimize. Aggressive compilers can be really slow. There are limits to what you can bear at runtime. Especially, when you have to iterate the process.
- The massive complexity that comes with implementing all this affects stability.
This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.
Well, the counter-argument would be that you wouldn't need to care about optimising untyped code as much, if the user had the option to switch to a typed sublanguage for performance.
- Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access.
That's not the ideal case. Brian Hackett's type inference work in SpiderMonkey can eliminate the overhead here. Check it out.
I'm actually pretty excited about that, and hope to see more on that front. Cool stuff.
However, that ideal case is achieved in a relatively small percentage of cases only. Otherwise we should probably not see an (already impressive) 20-40% speed-up (IIRC), but rather something closer to 200-400%.
- Type inference might mitigate some more of these cases, but will be limited to fairly local knowledge.
s/might/does/ -- why did you put type inference in a subjunctive mood? Type inference in SpiderMonkey (Firefox nightlies) is not "local".
Fair enough re the subjunctive. Still, there are principal limitations to what type inference can do. Especially for OO. You hit on undecidable territory very quickly. You also have to give up at boundaries such as native bindings, calls to eval, or (in ES6) to the module loader, unless you're given extra information by the programmer (this is basically the separate compilation problem).
So the inferencer has to work with approximations and fallbacks. For a language like JS, where a lot of conceptual polymorphism, potential mutation, and "untypable" operations are going on, those approximations will remain omnipresent, except, mostly, in sufficiently local contexts.
Not to say that it is not worth extending the boundaries -- it definitely is. But it will only get you that far.
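The approximation problem Andreas describes is visible even in trivial code: when one function is used at incompatible types, no single precise type exists for its parameter, and the inferencer must fall back to a union or to per-call-site specialization (which is what PICs do at runtime). A minimal illustration:

```javascript
// The same function used at two incompatible types: an inferencer
// can only assign `c` something like "Array|String", and the result
// type depends on the caller.
function first(c) { return c[0]; }

console.log(first([10, 20])); // 10 -- array element (a number)
console.log(first("hello"));  // "h" -- string index (a string)
```

Each call site can still be specialized individually, but the function in isolation remains "untypable" in any precise sense.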
- Omnipresent mutability is another big performance problem in itself, because most knowledge is never stable.
Type annotations or (let's say) "guards" as for-all-time monotonic bounds on mutation are useful to programmers too, for more robust programming-in-the-large. That's a separate (and better IMHO) argument than performance. It's why they are on the Harmony agenda.
Of course I'd never object to the statement that there are far better reasons to have typy features than performance. :) I just didn't mention it because it wasn't the topic of the discussion.
So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :)
Does JS need to be as fast as Java? Would half as fast be enough?
No, it doesn't have to be as fast. Not yet, at least... I estimate we have another 3 years. ;)
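The "guards" discussed above are not in ES6, so any sketch is speculative. One can approximate the intent today with accessor properties: a slot whose setter enforces a predicate for all time, which is the "monotonic bound on mutation" idea in miniature. The helper name `defineGuarded` is invented for illustration:

```javascript
// Rough emulation of a guarded slot using today's accessors.
function defineGuarded(obj, name, predicate) {
  var slot;
  Object.defineProperty(obj, name, {
    get: function () { return slot; },
    set: function (v) {
      if (!predicate(v)) {
        throw new TypeError(name + ": value rejected by guard");
      }
      slot = v;
    }
  });
}

var point = {};
defineGuarded(point, "x", function (v) { return typeof v === "number"; });
point.x = 3;      // ok
// point.x = "3"; // would throw TypeError
```

A real guards proposal would presumably also feed the VM's optimizer; this emulation gives only the programmer-facing robustness benefit, at the cost of an accessor call per access.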
On 13 September 2011 21:32, Wes Garland <wes at page.ca> wrote:
When I write shell programs, and JS programs, I keep an extra terminal window open to a spare shell or a JS REPL. I try stuff. Stuff that works, I copy into my program. Then I run my program - which happens quickly, because the compiler is super-fast and the program is a contained entity which probably runs in a dynamically configured environment.
REPLs and quasi-instant compile turn-arounds are indeed great features, but by no means exclusive to untyped languages, and never have been. It just happens that the typed languages dominating the mainstream suck badly in this area.
On 14 September 2011 00:00, Brendan Eich <brendan at mozilla.com> wrote:
So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).
Nitpick: I believe you are mistaken about the "strictly more" bit. There is information that only static analysis can derive. Consider e.g. aliasing or escape analysis, or other kinds of global properties.
On Thu, Sep 15, 2011 at 9:02 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 14 September 2011 00:00, Brendan Eich <brendan at mozilla.com> wrote:
So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).
Nitpick: I believe you are mistaken about the "strictly more" bit. There is information that only static analysis can derive. Consider e.g. aliasing or escape analysis, or other kinds of global properties.
There are systems that handle escape analysis cases via write barriers, no? Alias detection (or more importantly non-alias determinations) seem amenable to the assume-and-guard model used for PICs and trace selection and other code specialization patterns seen all over modern JS engines.
The TraceMonkey engine tracks many global properties (including the properties of globals) to deoptimize as needed and optimize where possible; our TI and IonMonkey work goes even further.
Mike
On the subjects of code navigation and DoctorJS as a basis for tools:
- Jump to the definition of a method/function. ..
- Find all invocations of a method/function. .. I'm in the process of adding scoped tags support to DoctorJS, so that in Vim you can also navigate to local definitions and parameters (in principle, that would also give access to any local information produced by DoctorJS's analysis, but I'm not sure whether that information is recorded anywhere). Generation and navigation already work; I just have to push to GitHub, figure out pull requests and submodules, and write a blog post explaining the whole thing..
(note that scoped tags only refer to lexical scoping, not to the combination of lexical scoping and property lookup that would be required for method navigation; nevertheless, this is a natural extension of tags navigation)
This is now online in a fork of DoctorJS (this also includes patches to make jsctags work on Windows):
https://github.com/clausreinke/doctorjs
If you are a Vim user, you'll find a matching Vim script to use this scoping information for navigation (go to local definition, cycle through uses in scope) here:
https://github.com/clausreinke/scoped_tags
More info and examples in an introductory blog post (if you don't use Vim, there is a small screencast, so you can get an idea of existing navigation features and compare them with your requirements):
http://libraryinstitute.wordpress.com/2011/09/16/scoped-tags-support-for-javascript-in-vim/
Hope you'll find this useful!-) And if you happen to know a job where I could work on this kind of Javascript tooling, let me know;-)
Claus clausreinke.github.com
PS. Assuming that some DoctorJS folks are listening here: the DoctorJS ticket tracker seems unwatched (not even patches get applied). I do hope that this is temporary, and that choosing to build on jsctags was a good idea, but it would be nice to get some confirmation.
On 16 September 2011 02:12, Mike Shaver <mike.shaver at gmail.com> wrote:
On Thu, Sep 15, 2011 at 9:02 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 14 September 2011 00:00, Brendan Eich <brendan at mozilla.com> wrote:
So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).
Nitpick: I believe you are mistaken about the "strictly more" bit. There is information that only static analysis can derive. Consider e.g. aliasing or escape analysis, or other kinds of global properties.
There are systems that handle escape analysis cases via write barriers, no? Alias detection (or more importantly non-alias determinations) seem amenable to the assume-and-guard model used for PICs and trace selection and other code specialization patterns seen all over modern JS engines.
Being able to detect when a condition is violated is not equivalent to knowing that it always holds.
Take the property access example. You want to eliminate the extra check. For that you have to know that the typecheck would never fail at this point. You use type inference to find that out, a static analysis.
In general, whenever the correctness of an optimization or code transformation depends on a non-trivial invariant, you have to prove that this invariant holds. You can only do that statically, because it implies a quantification over all possible executions. No amount of dynamic checking can give you that.
(Of course, you can often do something else instead that involves dynamic checks, but then you are in fact doing a different optimization. Property access is a good example. Stack-allocating local variables is another one, where you need escape analysis.)
On Sep 16, 2011, at 1:55 AM, Andreas Rossberg wrote:
Being able to detect when a condition is violated is not equivalent to knowing that it always holds.
Yes, this is the point I mangled in that "dynamic is necessary [in JS] ... and gives strictly more information (and more relevant information!)" sentence. I should have written "static+dynamic gives strictly more information."
Sorry about that -- not a profound point, but something that came out in the exchange John J. Barton and I had about static program analysis not being enough. It's not enough, but it can do a lot with dynamic analysis in partnership.
On Fri, Sep 16, 2011 at 7:04 AM, Brendan Eich <brendan at mozilla.com> wrote:
On Sep 16, 2011, at 1:55 AM, Andreas Rossberg wrote:
Being able to detect when a condition is violated is not equivalent to knowing that it always holds.
Yes, this is the point I mangled in that "dynamic is necessary [in JS] ... and gives strictly more information (and more relevant information!)" sentence. I should have written "static+dynamic gives strictly more information."
Sorry about that -- not a profound point, but something that came out in the exchange John J. Barton and I had about static program analysis not being enough. It's not enough, but it can do a lot with dynamic analysis in partnership.
I'll also revise:
Dimitris Vardoulakis's work on CFA2, the algorithm behind doctorjs, demonstrates that my claims about static analysis being a dead end are wrong. In fact it provides further evidence that innovation in tools can help dynamic languages keep their core advantages while overcoming some of their challenges.
jjb
On Sep 15, 2011, at 11:57 AM, Andreas Rossberg wrote:
On 13 September 2011 16:48, Brendan Eich <brendan at mozilla.com> wrote:
On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
- There are extra costs in space and time to doing the runtime analysis.
- Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler.
These are bearable apples to trade against the moldy oranges you'd make the world eat by introducing type annotations to JS. Millions of programmers would start annotating "for performance", i.e., gratuitously, making a brittle world at high aggregate cost.
The costs borne in browsers by implementors and (this can hit users, but it's marginal) at runtime when evaluating code are less, I claim.
Depends on how good you want to optimize. Aggressive compilers can be really slow. There are limits to what you can bear at runtime.
You the user, yes -- good point. It's not just an implementor's burden.
- The massive complexity that comes with implementing all this affects stability.
This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.
Well, the counter-argument would be that you wouldn't need to care about optimising untyped code as much, if the user had the option to switch to a typed sublanguage for performance.
I was really turned off by how AS3 users over-annotate. If we added "guards" and users turned to them "for performance", I'm afraid we'd see the same contamination.
Annotating might not even help performance -- in AS3, evaluation of arithmetic operators must widen from int to double, not wrap around 32 bits, so annotating int on storage types that annotate slots fetched and used via +, *, etc. can actually lose performance when storing back, since you have to use the FPU, or analyze and guard on overflow (as JS engines do today to avoid the FPU).
More analysis can help to stick to the ALUs but that works for untyped JS too.
Still, I agree that for performance-critical storage, annotations help. This is obviously so with typed arrays, which the fast VMs target with dedicated type inference and instruction selection machinery.
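Typed arrays make this concrete: because every element of an `Int32Array` is a 32-bit integer, the VM can use a packed representation and integer instructions directly, and stores coerce via ToInt32 rather than carrying tagged "any" values. A small demonstration of that coercion:

```javascript
// Typed arrays are annotated storage in today's JS: elements are
// stored as raw int32, and assignments coerce via ToInt32.
var ints = new Int32Array(4);
ints[0] = 5;
ints[1] = 2.9;         // fractional part truncated toward zero
ints[2] = 0x100000000; // 2^32 wraps modulo 2^32
console.log(ints[1]);  // 2
console.log(ints[2]);  // 0
```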
Is there a way to avoid the contamination problem we saw with AS3 during the ES4 days?
- Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access.
That's not the ideal case. Brian Hackett's type inference work in SpiderMonkey can eliminate the overhead here. Check it out.
I'm actually pretty excited about that, and hope to see more on that front. Cool stuff.
However, that ideal case is achieved in a relatively small percentage of cases only. Otherwise we should probably not see an (already impressive) 20-40% speed-up (IIRC), but rather something closer to 200-400%.
Oh, we're not done! The current work does not analyze the heap as aggressively as it might. We want objects that can be, say, four int32 machine types in a row, to be inferred that way and represented that way. Currently such a type would take four "any" values in a row, plus the usual header for metadata.
So the inferencer has to work with approximations and fallbacks. For a language like JS, where a lot of conceptual polymorphism, potential mutation, and "untypable" operations are going on, those approximations will remain omnipresent, except, mostly, in sufficiently local contexts.
Yes, and a key element of TI in SpiderMonkey is to put the "monster in the box" -- to avoid the untyped abstractions in one's JS polluting the whole inference. We try to fall back on runtime PICs and the other dynamic optimizations where we can. This means some guarding, or to avoid guards, some invalidation protocol overhead instead of guards. And low-level barriers to monitor all writes.
Not to say that it is not worth extending the boundaries -- it definitely is. But it will only get you that far.
Far enough beyond what we'd get without it that it seems worth it. Again, for me the big cosmic god-mode tradeoff is implementors working harder on these approaches, vs. throwing users to the wolves of type (over-)annotation.
Type annotations or (let's say) "guards" as for-all-time monotonic bounds on mutation are useful to programmers too, for more robust programming-in-the-large. That's a separate (and better IMHO) argument than performance. It's why they are on the Harmony agenda.
Of course I'd never object to the statement that there are far better reasons to have typy features than performance. :) I just didn't mention it because it wasn't the topic of the discussion.
It's useful to be reminded of the performance wins that are possible with judicious use of type annotations, though. I expect we'll feel some "darts" (lol) in our backside on this point (heh).
So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :)
Does JS need to be as fast as Java? Would half as fast be enough?
No, it doesn't have to be as fast. Not yet, at least... I estimate we have another 3 years. ;)
Agreed!
The best way to avoid the "over-annotation" of code is probably to teach users WHEN and WHY they should add type guards. If you add the feature and don't explain what its purpose is and which specific scenarios it solves, you'll get in trouble.
However, if you can coordinate with the whole community and publish a bunch of blog posts explaining how to "type efficiently", you'll get great results. But, in order to achieve that, all the implementations must be optimized for the same specific cases.
JavaScripters are accustomed to an untyped language. They will not type everything, because we know how handy it is to rely on compiler optimization. The most critical part will be teaching cross-compiler makers and people learning JavaScript after being used to, let's say, Java.
Anyway, another use of types will be "automatic class instantiation from JSON data". For example, let's say I have a type defined like this:
class A {
var myFirstField : B;
var mySecondField : B;
function areEqual() : bool {
if(myFirstField==null) { return mySecondField==null; }
if(mySecondField==null) { return false; }
return myFirstField.x == mySecondField.x && myFirstField.y == mySecondField.y;
}
};
class B { var x : int; var y : int; }
You can do things like this:
var myA = { myFirstField: {x:5, y:6}, mySecondField: {x:5, y:7} } as A;
console.log(myA.areEqual());
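The `as A` cast is hypothetical syntax; a plausible desugaring in today's JS would be an explicit conversion step that walks the JSON data and builds real instances (the constructor-taking-a-data-object pattern below is an assumption about what such a cast might generate, not an agreed design):

```javascript
// What "as A" might desugar to today: explicit construction from JSON.
function B(data) { this.x = data.x; this.y = data.y; }

function A(data) {
  this.myFirstField = data.myFirstField && new B(data.myFirstField);
  this.mySecondField = data.mySecondField && new B(data.mySecondField);
}
A.prototype.areEqual = function () {
  if (this.myFirstField == null) { return this.mySecondField == null; }
  if (this.mySecondField == null) { return false; }
  return this.myFirstField.x === this.mySecondField.x &&
         this.myFirstField.y === this.mySecondField.y;
};

var myA = new A({ myFirstField: {x: 5, y: 6}, mySecondField: {x: 5, y: 7} });
console.log(myA.areEqual()); // false -- the y fields differ
```

The point of the annotation is that this boilerplate conversion could be generated from the declared field types instead of written by hand.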
François
-----Original Message-----
On Sep 17, 2011, at 2:57 PM, François REMY wrote:
The best way to avoid the "over-annotation" of code is probably to teach users WHEN and WHY they should add type guards. If you add the feature and don't explain what its purpose is and which specific scenarios it solves, you'll get in trouble.
However, if you can coordinate with the whole community and publish a bunch of blog posts explaining how to "type efficiently", you'll get great results. But, in order to achieve that, all the implementations must be optimized for the same specific cases.
That all sounds great, but it's harder to do than you might think. My blog isn't read by everyone. People disagree with me anyway. The implementors won't all converge on the same economics in rapid fashion (they might over the long run, certainly for SunSpider :-P).
JavaScripters are accustomed to an untyped language. They will not type everything, because we know how handy it is to rely on compiler optimization. The most critical part will be teaching cross-compiler makers and people learning JavaScript after being used to, let's say, Java.
Here be dragons.
Anyway, another use of types will be "automatic class instantiation from JSON data". For example, let's say I have a type defined like this:
class A {
  var myFirstField : B;
  var mySecondField : B;
  function areEqual() : bool {
    if (myFirstField == null) { return mySecondField == null; }
    if (mySecondField == null) { return false; }
    return myFirstField.x == mySecondField.x && myFirstField.y == mySecondField.y;
  }
};
class B { int x; int y; }
You can do things like that :
var myA = { myFirstField: {x:5, y:6}, mySecondField: {x:5, y:7} } as A;
console.log(myA.areEqual());
Syntax aside, this is all stuff we talked about in ES4 days, and the notion of initializing narrower types using object and array literals (with suffixed annotation, "as A" or ": A" or these days "::A" doesn't matter) is a natural.
The cosmic god-mode trade-off still lies ahead, and it bothers me. I would hate to make implementors' and bench-marketers lives easier if the net result for developers was negative.
On Fri, Sep 16, 2011 at 1:55 AM, Andreas Rossberg <rossberg at google.com> wrote:
Being able to detect when a condition is violated is not equivalent to knowing that it always holds.
You're right, of course. Thanks for slicing that more finely for me.
Mike
One thing I liked about the Dash/Dart email was that it explicitly stated the goal of tool/IDE support (this is the only area where I miss Java when programming JavaScript). Is there something corresponding among the Harmony goals [1]? (5) Support a statically verifiable, object-capability secure subset(?)
For IDE-support we already have class literals in ES.next. What is still missing? Type guards? Will those make it into ES.next?
I find that there is a lot of cool stuff going into ES.next which will make it highly competitive with whatever Google can possibly come up with. However, a good Dart IDE can do much to increase Dart’s appeal. On the other hand, the market for a good JavaScript IDE is large (as in “unfragmented”) and only becoming larger, so maybe we’ll eventually get there.
[1] harmony:harmony