Maps and Sets, goodbye polyfill?!

# Andrea Giammarchi (11 years ago)

I've noticed by accident that Maps and Sets (simple?) now use generators, so no polyfill will ever work anymore, and Sets are pointless since there's no way to retrieve values back in any shim.

At this point I'll drop my es6-collections polyfill and take care of only a pointless WeakMap shim ... wondering what Mark M.'s take on this is, since he was taking care of other polyfills for Closure too.

harmony:simple_maps_and_sets

Best, this was a very bad move, IMO.

# Erik Arvidsson (11 years ago)

Why do you think they use generators?

It is true that you can pass an iterable, but your polyfill can just ignore that.

# Mark S. Miller (11 years ago)

I don't understand the question. Could you expand, and provide more links to relevant background? Thanks.

# Tab Atkins Jr. (11 years ago)

On Thu, Jul 11, 2013 at 3:31 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:

I've noticed by accident that Maps and Sets (simple?) now use generators, so no polyfill will ever work anymore, and Sets are pointless since there's no way to retrieve values back in any shim.

At this point I'll drop my es6-collections polyfill and take care of only a pointless WeakMap shim ... wondering what Mark M.'s take on this is, since he was taking care of other polyfills for Closure too.

harmony:simple_maps_and_sets

I'm not sure what you think is happening, but there's nothing that defeats a polyfill in Map or Set.

As Arv says, you're allowed to pass an iterable to their constructors, but you can ignore that if you want.

The .keys/values/entries() methods return iterators, but you can easily polyfill that if you'd like.

Both of them have .forEach() methods which just take a callback, exactly like Array#forEach().
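
For illustration, here is a minimal ES5 sketch along those lines; the SetShim name and the internal _items array are placeholders, not anything from the spec:

 // A hand-rolled iterator from values(), plus forEach(), for a toy
 // Set shim backed by a plain array (O(n) membership checks).
 function SetShim(){
   this._items = [];
 }
 SetShim.prototype.add = function(value){
   // === comparison here; a real shim would implement SameValueZero
   if (this._items.indexOf(value) === -1) this._items.push(value);
   return this;
 };
 SetShim.prototype.has = function(value){
   return this._items.indexOf(value) !== -1;
 };
 SetShim.prototype.forEach = function(callback, thisArg){
   for (var i = 0; i < this._items.length; i++) {
     callback.call(thisArg, this._items[i], this._items[i], this);
   }
 };
 SetShim.prototype.values = function(){
   var items = this._items, index = 0;
   return {
     next: function(){
       return index < items.length
         ? { done: false, value: items[index++] }
         : { done: true, value: undefined };
     }
   };
 };

A real shim would also need proper key comparison and better-than-linear lookup, but the iterator part is the point here.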

# Allen Wirfs-Brock (11 years ago)

On Jul 11, 2013, at 3:31 PM, Andrea Giammarchi wrote:

I've noticed by accident that Maps and Sets (simple?) now use generators, so no polyfill will ever work anymore, and Sets are pointless since there's no way to retrieve values back in any shim.

At this point I'll drop my es6-collections polyfill and take care of only a pointless WeakMap shim ... wondering what Mark M.'s take on this is, since he was taking care of other polyfills for Closure too.

harmony:simple_maps_and_sets

Have you read the actual ES6 spec draft, and do you have specific feedback on that?

# Andrea Giammarchi (11 years ago)

Harmony Sets and Maps have generators now ... generators break backward compatibility because they cannot be polyfilled.

How are you dealing with Set.prototype.values, since you cannot shim generator behavior?

This is my question (also ... WeakMaps were extremely hard to polyfill, no generators ... Sets and Maps were easy to polyfill ... now impossible with generators).

This is confusing :-/

This is what I can read here: harmony:simple_maps_and_sets

Best

# Andrea Giammarchi (11 years ago)

Wait ... so it's just the Harmony page that is outdated? 'Cause I reached that through MDN ... so now, gonna dig more into the specs.

Tab Atkins ... you cannot shim generators, since it's not just about returning a bunch of objects; it's the ability to "hold" that is missing ... if you don't need to hold, why on earth does that page show *values(), *keys() and *items()? iterable is the only one that might make sense, imho.

br

# Tab Atkins Jr. (11 years ago)

On Thu, Jul 11, 2013 at 3:44 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:

Wait ... so it's just the Harmony page that is outdated? 'Cause I reached that through MDN ... so now, gonna dig more into the specs.

Tab Atkins ... you cannot shim generators, since it's not just about returning a bunch of objects; it's the ability to "hold" that is missing ... if you don't need to hold, why on earth does that page show *values(), *keys() and *items()? iterable is the only one that might make sense, imho.

The fact that the suggested implementation is with a generator is completely irrelevant. From the outside, you can't tell the difference between a generator function and an ordinary function that returns a manually-constructed iterator. The latter is easy to polyfill.

Further, you should actually read the spec, not an outdated harmony proposal wiki page.

# Andrea Giammarchi (11 years ago)

Yes, I should have checked the specs, but that page should be either updated or removed.

Anyway, you still haven't convinced me that generators were needed there at all, nor that I can "simply shim iterators", 'cause that would make generators pointless in the first place if they were so easy to polyfill ... which they are not.

The polyfill is gonna grow in size for no concrete reason now; can I at least ask if the current specs for Maps, Sets, and WeakMaps are final?

# Rick Waldron (11 years ago)

On Thu, Jul 11, 2013 at 7:06 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:

Yes, I should have checked the specs, but that page should be either updated or removed.

No, you should just read the spec; and if you find yourself at the page you linked to, you'll see in big bold letters at the top:

This proposal has progressed to the Draft ECMAScript 6 Specification, which is available for review here: harmony:specification_drafts.

Any new issues relating to them should be filed as bugs at bugs.ecmascript.org

 

Anyway, you still haven't convinced me that generators were needed there at all, nor that I can "simply shim iterators", 'cause that would make generators pointless in the first place if they were so easy to polyfill ... which they are not.

The polyfill is gonna grow in size for no concrete reason now; can I at least ask if the current specs for Maps, Sets, and WeakMaps are final?

Polyfills don't decide the fate of the future.

# Tab Atkins Jr. (11 years ago)

On Thu, Jul 11, 2013 at 4:06 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:

Yes, I should have checked the specs, but that page should be either updated or removed.

Anyway, you still haven't convinced me that generators were needed there at all, nor that I can "simply shim iterators", 'cause that would make generators pointless in the first place if they were so easy to polyfill ... which they are not.

I still have no idea what your problem with this is. Your objections aren't making any sense.

In general, generators are very hard to polyfill. (Not impossible, as you can do a CPS transform of the source code, but very difficult.)

This is not the general case; this is a very simple, specific case. You just need a trivial little iterator that runs over the internal data structure of the Map/Set. I've written these kinds of things tons of times for classes in PHP, so they can work conveniently with PHP foreach() loops.

That said, the reason the harmony page defined the methods with generators is that, as easy as it is to write an iterator manually for them, it's even easier to write them with a generator. That's all.
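
A sketch of that kind of trivial iterator, assuming a Map shim that stores its data in parallel internal _keys/_values arrays (the MapShim name and field names are illustrative):

 // entries() as a plain function returning a manually-built iterator
 // over the shim's internal parallel arrays.
 function MapShim(){
   this._keys = [];
   this._values = [];
 }
 MapShim.prototype.set = function(key, value){
   // === key comparison here; real Maps use SameValueZero
   var i = this._keys.indexOf(key);
   if (i === -1) { this._keys.push(key); this._values.push(value); }
   else this._values[i] = value;
   return this;
 };
 MapShim.prototype.entries = function(){
   var keys = this._keys, values = this._values, index = 0;
   return {
     next: function(){
       if (index < keys.length) {
         var pair = [keys[index], values[index]];
         index++;
         return { done: false, value: pair };
       }
       return { done: true, value: undefined };
     }
   };
 };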

# Andrea Giammarchi (11 years ago)

Rick, that big bold thing is what misled me ... I thought "This proposal" referred to the proposal shown on that page ...

# Andrea Giammarchi (11 years ago)

My problem is that generators were not needed there ... values() as an array of values, keys() as an array of keys and items() as an array of key/value pairs.

This is even easier

Generators because you wrote them in PHP ... this is nonsense to me (I am a Zend Certified Engineer, if you are wondering).

Anyway, problem solved: the harmony pages and the examples in them are outdated.

# Rick Waldron (11 years ago)

On Thu, Jul 11, 2013 at 7:34 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:

Rick, that big bold thing is what misled me ... I thought "This proposal" referred to the proposal shown on that page ...

Would you like me to add some additional text so that it's clearer?

# Andrea Giammarchi (11 years ago)

The proposal in this page is outdated and the formal one has progressed to ... etc etc ...

but hey, English is not my mother tongue; I just got confused by "this proposal".

I usually take it to refer to the current one being shown/discussed/described, but maybe it's just me, so if you think that's "unmistakable", fine; I've learned some better English today too.

# Rick Waldron (11 years ago)

On Thu, Jul 11, 2013 at 7:37 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:

My problem is that generators were not needed there ... values() as an array of values, keys() as an array of keys and items() as an array of key/value pairs.

When were these ever arrays? Here's their first appearance, in this revision: doku.php?id=harmony:simple_maps_and_sets&rev=1339969786

Either way, all you need to do is stop treating the proposal as a normative specification and start using the actual spec as a reference, and this isn't a problem. Here's what I do: whenever I need to refresh on something ES6-related, I look at Allen's latest draft first, usually the one with the change markup (this is useful, because Allen regularly leaves notes about the changes). If I can't find what I'm looking for, then I look at my meeting notes. If I'm still unable to find the answer, then the very last place I look is the proposal. I find this approach greatly reduces my need to post messages of outrage to the mailing list.

# Brandon Benvie (11 years ago)

On 7/11/2013 4:37 PM, Andrea Giammarchi wrote:

My problem is that generators were not needed there ... values() as an array of values, keys() as an array of keys and items() as an array of key/value pairs.

This is even easier

Generators because you wrote them in PHP ... this is nonsense to me (I am a Zend Certified Engineer, if you are wondering).

Anyway, problem solved: the harmony pages and the examples in them are outdated.

Well, to be more exact, the harmony page showed an example implementation that used a generator, but that was never required. What was always called for was for keys/values/entries to return an iterator, not a generator. Generator functions are just a convenient way to create iterators (because generators are iterators).

As an example, the following two implementations can be treated the same for the purposes of iteration:

 // ES6
 function* entries(obj){
   for (let key in obj) {
     yield [key, obj[key]];
   }
 }

 // ES3-5 version
 function entries(obj){
   var keys = [];
   var index = 0;
   // for-in with an assignment target: collects every enumerable key
   for (keys[index++] in obj);
   var total = index;
   index = 0;
   return {
     next: function(){
       if (index < total) {
         var key = keys[index++];
         return { done: false, value: [key, obj[key]] };
       }
       return { done: true };
     }
   };
 }

In the same way, you can demonstrate how Map/Set iterators work more succinctly using generators, but they could still be implemented in ES3.

# Rick Waldron (11 years ago)

On Thu, Jul 11, 2013 at 7:43 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:

The proposal in this page is outdated and the formal one has progressed to ... etc etc ...

I've updated every relevant page with some further clarifying language. Hopefully it's sufficient.

# Andrea Giammarchi (11 years ago)

thank you

# Andrea Giammarchi (11 years ago)

On Thu, Jul 11, 2013 at 5:02 PM, Brandon Benvie <bbenvie at mozilla.com> wrote:

// ES6
function* entries(obj){
  for (let key in obj) {
    yield [key, obj[key]];
  }
}

Cool, but why do we need that, exactly?

// ES3-5 version
function entries(obj){
  var keys = [];
  var index = 0;
  for (keys[index++] in obj);
  var total = index;
  index = 0;
  return {
    next: function(){
      if (index < total) {
        var key = keys[index++];
        return { done: false, value: [key, obj[key]] }
      }
      return { done: true };
    }
  };
}

Have you guys read this article, where it talks about JS people not thinking about GC and RAM? sealedabstract.com/rants/why-mobile-web-apps-are-slow

Thanks for this perfect example of an (IMO) not-needed thing just to have key/value pairs ... is this the new ES direction? Object.keys(obj) ain't cool anymore because we can (metaphorically, I know it won't work) for(yield key in obj); ?

These kinds of things are one of the reasons old hardware and mobile are slow: we keep penalizing, for no concrete reason, already slow engines on limited hardware, proposing solutions that are, in this very specific case, not so useful for simple problems.

Personally, I don't like that (I know you could not care less about this), but I don't think it is good for anyone or anything in any case ... really, I don't see any reason to have that anywhere at all in the first place, even in 5 years with generators/iterators available everywhere.

One thing I liked, though: you showed a for/in trick I'd never thought about, but you didn't consider that in ES3 everyone would have to check hasOwnProperty too ... so that could have been simplified with Object.keys(obj), polyfilled for ES3 out of ES5.
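
For what it's worth, the same helper written with Object.keys (ES5, or a polyfilled ES3 engine) could look something like this sketch:

 // Same entries() idea, but over own enumerable keys only, via Object.keys.
 function entries(obj){
   var keys = Object.keys(obj), index = 0;
   return {
     next: function(){
       if (index < keys.length) {
         var key = keys[index++];
         return { done: false, value: [key, obj[key]] };
       }
       return { done: true };
     }
   };
 }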

Back to Rick ... yes, polyfills cannot decide the future ... I've also read here once that new stuff should not break old stuff; this is what I really liked about asm.js and what I don't understand about all this generator fuss out of Python 2.2 or similar 2000-ish problems/programming languages.

Just my opinion, sorry about that.

Best

# Andreas Rossberg (11 years ago)

On 12 July 2013 03:32, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:

Have you guys read this article, where it talks about JS people not thinking about GC and RAM? sealedabstract.com/rants/why-mobile-web-apps-are-slow

I had skimmed through this article earlier. While some of what he says (unsurprisingly) is correct, I have two main observations:

  1. Any article that analyses JavaScript performance solely based on SunSpider measurements has immediately disqualified itself.

  2. While GC can certainly be an issue, the real hard performance wall for JavaScript is its overly "dynamic" nature.

The latter can be witnessed by the existence of other GC'ed languages that are far closer to C performance.

# Claus Reinke (11 years ago)

In general, generators are very hard to polyfill. (Not impossible, as you can do a CPS transform of the source code, but very difficult.)

It depends on what you want. For concise specification of iteration, you can do something without a full CPS transform, by using a monadic coding style. My scratch area for monadic generators and promises:

monadic javascript/typescript: promises and generators
https://gist.github.com/clausreinke/5984869

(which you can run with TypeScript 0.9, playground or npm)

Given that JS control structures are built in rather than user-definable, we can't redefine them and have to define our own, but it still isn't too bad. For instance, the simple generator example

https://gist.github.com/clausreinke/5984869#file-monadic-ts-L492-L506

outputs

// Generator.forIn, with yield, plain iteration (prefix #)
# yield1 1
(yield1 returns 0)
# yield2 1
(yield2 returns 1)
# yield1 2
(yield1 returns 2)
# yield2 4
(yield2 returns 3)
# yield1 3
(yield1 returns 4)
# yield2 9
(yield2 returns 5)
# 1,2,3

Note from the iteration loop that I've implemented a functional API (next returns {done,value,next}) instead of an imperative one (next returns {done,value} and modifies its host).
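
(A toy illustration of the difference, not the gist's actual code: in the functional style, each call to next returns a fresh state, so earlier positions remain usable.)

 // Functional-style iterator: next() allocates a new state instead of
 // mutating its host, so s0 can still be advanced after s1 exists.
 function countdown(n) {
   return {
     done: n < 0,
     value: n,
     next: function(){ return countdown(n - 1); }
   };
 }
 var s0 = countdown(2);   // value 2
 var s1 = s0.next();      // value 1
 var s2 = s0.next();      // also value 1; s0 was not consumed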

The standard recursive tree generator

https://gist.github.com/clausreinke/5984869#file-monadic-ts-L521-L529

even looks readable without special syntax

function iterTree(tree) {
  return Array.isArray(tree)
         ? tree.map( iterTree ).reduce( (x,y)=> x.then( _=> y ), G.of(undefined) )
         : G.yield(tree);
}

var generator3 = iterTree([1,[],[[2,3],4],5]);
MonadId.forOf( generator3, y=> (console.log("* "+y), MonadId.of(y)) );

and outputs

// MonadId.forOf, iterTree recursive generator() (prefix *)
* 1
* 2
* 3
* 4
* 5

With very little syntactic sugar for monads (monad comprehensions, monadic do notation), it could even be made to look like conventional code. This has come up several times here, and would have very high value for very small cost, if done right.

Claus

# Brian Di Palma (11 years ago)

// ES6
function* entries(obj){
  for (let key in obj) {
    yield [key, obj[key]];
  }
}

Cool, but why do we need that, exactly?

The one suggestion I can make is that generators mean you don't have to calculate the entire answer up front. This is well demonstrated by Brandon's code examples. This could be a nice performance gain under certain circumstances.
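
For instance, a consumer that stops at the first match only pulls as many entries as it needs (a sketch reusing Brandon's entries() helper):

 // Stops as soon as a matching entry is found; with an array-returning
 // entries() every key/value pair would have been built up front.
 function firstEntryWhere(obj, predicate){
   var iterator = entries(obj), step;
   while (!(step = iterator.next()).done) {
     if (predicate(step.value[0], step.value[1])) return step.value;
   }
   return undefined;
 }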

// ES3-5 version
function entries(obj){
  var keys = [];
  var index = 0;
  for (keys[index++] in obj);
  var total = index;
  index = 0;
  return {
    next: function(){
      if (index < total) {
        var key = keys[index++];
        return { done: false, value: [key, obj[key]] }
      }
      return { done: true };
    }
  };
}

Have you guys read this article, where it talks about JS people not thinking about GC and RAM? sealedabstract.com/rants/why-mobile-web-apps-are-slow

I have read the article; I found it interesting. I was just wondering if the 'const' keyword would help give JS another small performance boost. As Andreas points out, the major issue is that JS is highly dynamic; this can be very useful sometimes, but for most code it's not required. Maybe if you mark everything as const or freeze/seal classes, JS engines will optimize for that code.

One issue that could help with is memory allocation for JS objects; maybe it would make it easier to know how much memory to allocate for a class, or for a JS object marked as const. I don't know for sure, as I lack the knowledge; I do know that in ES6 code I will use const as my new var.

# David Bruant (11 years ago)

On 13/07/2013 10:21, Brian Di Palma wrote:

I was just wondering if the 'const' keyword would help give JS another small performance boost.

Unlikely. Const can almost be statically inferred (no assignment to a given variable); the "almost" refers to cases where eval happens. Worst case, if there is no assignment, a variable can be optimistically optimized as const; if the value is then changed via eval, deoptimize (but in practice, eval is rare, so the optimistic optimization will be worth it).

As Andreas points out, the major issue is that JS is highly dynamic; this can be very useful sometimes, but for most code it's not required. Maybe if you mark everything as const or freeze/seal classes, JS engines will optimize for that code.

JS engines already optimistically optimize assuming code remains stable (for objects, V8 has "hidden classes", SpiderMonkey has the equivalent "shape" feature) and deoptimize when dynamic features are used.

It might be one of the reasons why Maps are better at being maps than objects (since objects seem to have been optimized for cases where they are stable).
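
A minimal sketch of the kind of structural change that forces engines to throw those optimistic assumptions away:

 // p and q start out with the same hidden class / shape ...
 function Point(x, y){ this.x = x; this.y = y; }
 var p = new Point(1, 2);
 var q = new Point(3, 4);
 // ... until one of them is changed structurally:
 q.z = 5;        // q's shape now differs from p's
 delete p.x;     // deletes typically push an object into a slower mode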

# Brian Di Palma (11 years ago)

OK. So we have reached "Peak JavaScript" then.

If people write JS code without triggering shape changes, then the JIT should be able to produce code that can match a JVM? If there were something I could do as a developer to help the JIT, I would do it.

Would ES6 classes not make the creation of shapes a lot easier? From what I understand, it takes time to figure out shapes/hidden classes. Well, I'm marking this object as a class; does that help?

It would be interesting if engines provided feedback on when we developers break the optimistic optimizations.

# David Bruant (11 years ago)

On 13/07/2013 11:02, Brian Di Palma wrote:

OK. So we have reached "Peak JavaScript" then.

That would be the reason why JS engines don't see 30x (or even 30%) improvements from one year to the next anymore. The JavaScript used on most websites is fast now. The next frontier for performance is intensive applications like games. Other than that, room for performance improvement is to be found in other components like the DOM or graphics.

If people write JS code without triggering shape changes, then the JIT should be able to produce code that can match a JVM?

I would believe so. A bunch of talks from various JS engine implementors explain that when your code has stable/predictable types, you pretty much get the performance of... code with stable types. If a "+" is always used with integers, it will be JIT-compiled as an integer addition, as it would be in C (modulo some guards and overflow issues). One of these talks: www.youtube.com/watch?v=UJPdhx5zTaw (there are plenty of others).

If there were something I could do as a developer to help the JIT, I would do it.

Usually, the overlap between what JS devs consider good, readable code and what JITs can optimize well is very strong. From my experience, anytime someone tries to be smarter than just writing good code, they end up with a fragile optimization (it may work in some engines but not others, and may become a performance issue as the targeted engine changes over time; for reference, ~25% of SpiderMonkey changes each year: blog.mozilla.org/dmandelin/2011/11/29/js-development-newsletter-1123-1129).

Would ES6 classes not make the creation of shapes a lot easier? From what I understand, it takes time to figure out shapes/hidden classes. Well, I'm marking this object as a class; does that help?

Implementors will answer this better, as I'm reaching the limits of my knowledge, but when you read:

 function C(){
     this.a = 12;
     this.b = "azerty";
 }

 C.prototype = {
     ...
 }

 var c = new C();

the shape is pretty clear, and class syntax can hardly provide better insight. Whether current engines already statically analyse functions like C to get the same sort of info they would get from class syntax, I don't know.
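
For comparison, the class-syntax version of the same thing carries essentially the same structural information (a sketch):

 // ES6 class syntax: the engine still learns the shape from the same
 // two assignments in the constructor.
 class C {
   constructor(){
     this.a = 12;
     this.b = "azerty";
   }
 }
 var c = new C();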

It would be interesting if engines provided feedback on when we developers break the optimistic optimizations.

They have given a bunch of talks on the topic at various conferences already. Search on YouTube if the link I gave above isn't enough ;-)

# Andrea Giammarchi (11 years ago)

You know what's funny? VBScript had static/fixed classes and immutable objects in 1999, and goddammit, nobody ever thought that was cool!

I also proposed static/fixed/frozen classes a while ago, and asked why Object.freeze({}) makes the object slower, instead of faster, to deal with, since it would/could be a prototype and maybe be marked as static ...

In another recent thread I was (wrongly) asking about CTypes, meaning binary data, where developers can define their own static, "C struct"-like shapes, and apparently this is still an ES6 thing.

Interesting times in this "new" JS era ... :-)

# Andreas Rossberg (11 years ago)

On 13 July 2013 17:09, David Bruant <bruant.d at gmail.com> wrote:

On 13/07/2013 11:02, Brian Di Palma wrote:

Would ES6 classes not make the creation of shapes a lot easier? From what I understand, it takes time to figure out shapes/hidden classes. Well, I'm marking this object as a class; does that help?

Implementors will answer this better, as I'm reaching the limits of my knowledge, but when you read:

function C(){
    this.a = 12;
    this.b = "azerty";
}

C.prototype = {
    ...
}

var c = new C();

the shape is pretty clear, and class syntax can hardly provide better insight. Whether current engines already statically analyse functions like C to get the same sort of info they would get from class syntax, I don't know.

Yes, VMs do a lot to handle these cases well, and e.g. produce a flat object representation for such examples. But as with all those optimisations, they are just optimistic; you still need to guard all over the place, because JavaScript semantics is rather deprived of useful invariants. There is very little that provably holds for a larger region of code or for a non-trivial extent of object lifetime. So you get tons of repetitive checks even in optimised code.

Classes won't improve performance because they don't introduce any new invariants either. They are just sugar. Struct types (see the binary data proposal) have far more potential on that front, as is already utilised in practice with typed arrays vs conventional JS arrays.

Like it or not, high-performance JavaScript will have to be far less dynamic and far more typed than what the language allows. ;)
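
A minimal illustration of the typed-array point:

 // A conventional array can hold values of any type, so the engine
 // must be prepared for boxed elements and holes; a Float64Array is a
 // fixed-length block of unboxed doubles with a known element type.
 var generic = [1.5, "two", { three: 3 }];  // heterogeneous, resizable
 var typed = new Float64Array(3);           // always 3 doubles, contiguous
 typed[0] = 1.5; typed[1] = 2.5; typed[2] = 3.5;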

# Brendan Eich (11 years ago)

Brian Di Palma wrote:

OK. So we have reached "Peak JavaScript" then.

It's risky to assume anything like that based on past trends in optimizations and workloads.

# Brendan Eich (11 years ago)

Andreas Rossberg wrote:

Yes, VMs do a lot to handle these cases well, and e.g. produce a flat object representation for such examples. But as with all those optimisations, they are just optimistic; you still need to guard all over the place, because JavaScript semantics is rather deprived of useful invariants. There is very little that provably holds for a larger region of code or for a non-trivial extent of object lifetime. So you get tons of repetitive checks even in optimised code.

That's one way to do it. Another is to invalidate more aggressively optimized, less-guarded code when the unlikely bad thing happens.

Classes won't improve performance because they don't introduce any new invariants either. They are just sugar.

Right, although the const class or "sealed class" idea is still on the ES7 agenda.

Struct types (see the binary data proposal) have far more potential on that front, as is already utilised in practice with typed arrays vs conventional JS arrays.

Indeed, this is where Emscripten, with appropriate optimization on the target VM side, is getting within 1.2x of native (and it's not done yet).

Like it or not, high-performance JavaScript will have to be far less dynamic and far more typed than what the language allows. ;)

You mean what current editions allow without resorting to typed arrays. What's stopping future editions from adding sealed classes, struct syntax, whatever it takes?

The final battle will be checked (real, not warning-only) type annotations. I'm not holding my breath after ES4, but who knows?

My point here is JS evolves. It's not always easy to extend, but the alternatives look much harder.

# David Bruant (11 years ago)

2013/7/15 Brendan Eich <brendan at mozilla.com>

Andreas Rossberg wrote:

Yes, VMs do a lot to handle these cases well, and e.g. produce a flat object representation for such examples. But as with all those optimisations, they are just optimistic; you still need to guard all over the place, because JavaScript semantics is rather deprived of useful invariants. There is very little that provably holds for a larger region of code or for a non-trivial extent of object lifetime. So you get tons of repetitive checks even in optimised code.

That's one way to do it. Another is to invalidate more aggressively optimized, less-guarded code when the unlikely bad thing happens.

What are the expected improvements from const/sealed classes against such a strategy? Same question for checked type annotations. (Are type annotations a form of "userland" guard?)

Unrelated, but I'm curious: what do engines do when they face an object-as-map (random and potentially numerous keys)? Do they try to find a shape/hidden class and give up after realizing that the object really isn't stable? Do you have stats (I'll also take guesstimates) on how often objects-as-maps occur versus "class-ed" objects? Will the use of ES6 Maps to replace objects-as-maps make a significant difference as far as perf is concerned?

Thanks,

# Andreas Rossberg (11 years ago)

On 15 July 2013 12:27, David Bruant <bruant.d at gmail.com> wrote:

2013/7/15 Brendan Eich <brendan at mozilla.com>

Andreas Rossberg wrote:

Yes, VMs do a lot to handle these cases well, and e.g. produce a flat object representation for such examples. But as with all those optimisations, they are just optimistic; you still need to guard all over the place, because JavaScript semantics is rather deprived of useful invariants. There is very little that provably holds for a larger region of code or for a non-trivial extent of object lifetime. So you get tons of repetitive checks even in optimised code.

That's one way to do it. Another is to invalidate more aggressively optimized, less-guarded code when the unlikely bad thing happens.

Yes, but that has its own overhead. It's a different trade-off, and I think today's VMs typically use a mixture of both techniques.

What do engines do when they face an object-as-map (random and potentially numerous keys)? Do they try to find a shape/hidden class and give up after realizing that the object really isn't stable?

In V8, an object goes into "dictionary mode" when the number of properties hits a certain limit. It also happens when you delete a property.

Do you have stats (I'll also take guesstimates) on how often objects-as-maps occur versus "class-ed" objects? Will the use of ES6 Maps to replace objects-as-maps make a significant difference as far as perf is concerned?

I don't have numbers, but I would estimate that maps and dictionary objects are roughly on par (or at least, should be).
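
A sketch of the two patterns being compared; the delete is the kind of operation that typically pushes a V8 object into dictionary mode:

 // Object used as a map: string keys only; deletes and many dynamic
 // keys force a dictionary-like representation.
 var cacheObj = Object.create(null);
 cacheObj["user:1"] = { name: "A" };
 delete cacheObj["user:1"];

 // ES6 Map: arbitrary keys, built for add/remove-heavy workloads.
 var cacheMap = new Map();
 cacheMap.set("user:1", { name: "A" });
 cacheMap.delete("user:1");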

# Mark S. Miller (11 years ago)

On Mon, Jul 15, 2013 at 2:56 AM, Brendan Eich <brendan at mozilla.com> wrote:

Andreas Rossberg wrote:

Yes, VMs do a lot to handle these cases well, and e.g. produce a flat object representation for such examples. But as with all those optimisations, they are just optimistic; you still need to guard all over the place, because JavaScript semantics is rather deprived of useful invariants. There is very little that provably holds for a larger region of code or for a non-trivial extent of object lifetime. So you get tons of repetitive checks even in optimised code.

That's one way to do it. Another is to invalidate more aggressively optimized, less-guarded code when the unlikely bad thing happens.

Classes won't improve performance because they don't introduce any new invariants either. They are just sugar.

Right, although the const class or "sealed class" idea is still on the ES7 agenda.

Indeed! Still desperately needed for cheaper security as well.