Extending an ES6 class using ES5 syntax?
I believe this will work in most cases:
function B() {
  const obj = new A();
  // or B.prototype, but if you derive from B you'll have to do this dance again
  Object.setPrototypeOf(obj, new.target.prototype);
  // use obj instead of this
  return obj;
}
Also, in general you should do
Object.setPrototypeOf(B.prototype, A.prototype);
Object.setPrototypeOf(B, A);
instead of
B.prototype = Object.create(A.prototype);
for slightly better semantics, including class-side inheritance and not clobbering .constructor.
This may be a small bit nicer:
class A {}
function B() {
  return Reflect.construct(A, arguments, B);
}
Reflect.setPrototypeOf(B.prototype, A.prototype)
Reflect.setPrototypeOf(B, A)
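Putting the two snippets together, a minimal runnable sketch (the fromA property is made up purely to show that A's constructor actually runs):

```javascript
// ES6 base class, as in the thread.
class A {
  constructor() { this.fromA = true; }
}

// ES5-style constructor extending A via Reflect.construct.
function B() {
  // Passing new.target (falling back to B for plain calls) makes the
  // created object get B.prototype, so further subclassing keeps working.
  return Reflect.construct(A, arguments, new.target || B);
}
Reflect.setPrototypeOf(B.prototype, A.prototype);
Reflect.setPrototypeOf(B, A); // class-side (static) inheritance

const b = new B();
console.log(b instanceof B); // true
console.log(b instanceof A); // true
console.log(b.fromA);        // true: A's constructor ran
```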
On 5/13/16 9:04 PM, Domenic Denicola wrote:
Object.setPrototypeOf(obj, new.target.prototype); // or B.prototype, but if you derive from B you'll have to do this dance again
This is highly undesirable because it will deoptimize in implementations in practice.
That said, can't the right thing be done using Reflect.construct?
Object.setPrototypeOf(B.prototype, A.prototype);
I believe this will, again, deoptimize in practice...
That said, can't the right thing be done using Reflect.construct?
It can:
function B() {
return Reflect.construct(A, arguments)
}
instanceof B will be false, though.
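To make that caveat concrete, a small sketch showing what happens when the third argument to Reflect.construct is omitted:

```javascript
class A {}

function B() {
  // No third argument: the created object gets A.prototype, not B.prototype.
  return Reflect.construct(A, arguments);
}
B.prototype = Object.create(A.prototype);

const obj = new B();
console.log(obj instanceof A); // true
console.log(obj instanceof B); // false: B.prototype is not on obj's chain
```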
If it's helpful, you can look at how newless handles this: mr0grog/newless
It's a little more complicated than the earlier replies because many implementations have classes but not Reflect.construct
, but it does gracefully allow ES5 function-style constructors to inherit from class-style constructors.
At the end of the day, the one restriction you can't really work around is not being able to touch this
inside the constructor before calling the super class’s constructor (unless you don't call it at all, though that will no longer work if/when you transition to proper classes).
I'm trying to provide a path where common code can migrate to ES6 classes before all the consumers have. So I was really looking for something that didn't require the subclasses to be touched at all (I wasn't clear about this). I have a fair amount of control over how the inheritance is setup and how the ES6 class is written but that is about it.
As-is, it seems every subclass constructor in the hierarchy must be changed to not use "this" and that means all the leaves must be changed first and they might as well migrate to ES6 classes which I'm trying not to wait on.
It seems I need some way to opt-out of "constructor method can't be called without new".
I'm pondering a half-way solution of parallel set of constructor functions and hoping every subclass uses the "super call" abstractions we provide (which I know isn't true) so I can redirect the "constructor" call:
class SomeSuper {
  constructor() { super(); this.init(); }
  es5constructor() { super.es5constructor(...); this.init(); }
  init() { ... }
}
This makes me very unhappy and I would rather make the ES6 class a little less safe and get more code moving over to ES6 classes.
As is, folks are talking about transpiling "forever" because doing so lets them avoid this problem.
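A minimal runnable sketch of the parallel-constructor idea above (the es5constructor name is the hypothetical abstraction from the post, and Sub stands in for an untouched ES5 leaf class):

```javascript
class SomeSuper {
  constructor(...args) {
    // ES6 consumers go through the same initialization path.
    this.es5constructor(...args);
  }
  // ES5 subclasses call this ordinary method instead of the class
  // constructor, sidestepping "cannot be invoked without 'new'".
  es5constructor(name) {
    this.name = name;
  }
}

// An ES5-style subclass; only its "super call" had to be redirected:
function Sub() {
  SomeSuper.prototype.es5constructor.call(this, 'sub');
}
Sub.prototype = Object.create(SomeSuper.prototype);
Sub.prototype.constructor = Sub;

const s = new Sub();
console.log(s.name);                 // 'sub'
console.log(s instanceof SomeSuper); // true
```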
This is not the first time I've read your comments about this. I have two questions:
- Why is that? It's the most annoying warning ever in Firefox. There is a spec'd method that is not even in Annex B, and Firefox deliberately discourages its usage. Why don't I see warnings every time I do [].slice.call(arguments)? I understand it might de-optimize, but I wonder if that's really always necessary (maybe it doesn't have to deopt if it's a well-known operation with a predictable result). On the other side, I also wish Firefox wouldn't show warnings about modern and recent specifications. Deprecated stuff is OK; opinionated, not-well-reasoned console warnings are ... yak?!
- Where were you when __proto__ landed in the spec? :P
Thanks for any clarification.
Best
On 5/14/16 3:11 AM, Andrea Giammarchi wrote:
- why is that?
Why does mutating the proto after an object has been exposed to script end up deoptimizing things? Because it invalidates assumptions JITs otherwise make. So the options are to make the proto-hasn't-been-mutated case slower by not making those assumptions or to make the proto-has-been-mutated case slower. Guess which one is a better choice?
There is a spec'd method that is not even in Annex B, and Firefox deliberately discourages its usage.
Sure. Just because something is specced doesn't mean it's a good idea to actually do it.
This is why in the HTML spec there's all sorts of stuff that's marked as "not valid HTML" for authoring purposes even though the spec then goes ahead and defines what a browser should do with that stuff if authors do it anyway.
Why don't I see warnings every time I do [].slice.call(arguments)?
Because that's not as big a performance hit?
I understand it might de-optimize but I wonder if that's really always necessary (maybe it doesn't have to deopt if it's a well known operation with a predictable result).
I'm not an expert on the type inference setup (which is what ends up deoptimizing on proto mutation, iirc), so I can't usefully answer this.
On the other side, I also wish Firefox wouldn't show warnings about modern and recent specifications. Deprecated stuff is OK,
There can totally be things that are both recently added to the spec (for UA implementation purposes, because everyone has to do it for web compat) and deprecated for authoring purposes (because they're a bad idea). Dynamic proto mutation is one of those. ;)
- where were you when the
__proto__
landed on specs? :P
You mean when every browser on the market implemented it, which was the relevant bit? The addition to the spec was just acknowledging ugly reality.
Where was I when browsers implemented proto? We're talking 20ish years ago, so probably high school or a few years into college.
FWIW the warning is going away. bugzilla.mozilla.org/show_bug.cgi?id=1049041
Thanks Andy, I think that bug has the exact same concerns and valid answers for Boris too. Here it's also the only valid option to properly extend classes in ES5, and I can't wait for such a "warning" to go away; most developers have been scared away by the same warning in my document.registerElement polyfill, but there's no other way, hence my complaint. FF should probably stop wasting time warning about what devs should or shouldn't do when devs are using standardized practices. Deprecation is OK; scary messages without details are just ... yak.
Best
On May 15, 2016 12:51 AM, "Andy Earnshaw" <andyearnshaw at gmail.com> wrote:
FWIW the warning is going away. bugzilla.mozilla.org/show_bug.cgi?id=1049041
Is there a reason Reflect.setPrototypeOf(B.prototype, A.prototype) can't be optimized on class declaration the same way B.prototype = Object.create(A.prototype) is?
I'm trying to provide a path where common code can migrate to ES6 classes before all the consumers have. So I was really looking for something that didn't require the subclasses to be touched at all (I wasn't clear about this).
I hope it was clear that newless supports this (functions, classes, and other newless constructors can all inherit from newless constructors and vice versa). Obviously there is the caveat noted in the README about dealing with this. In newless, it’s mostly a non-issue unless the super class needs to keep a reference to the actual instance that was created.
You simply can’t get around the fact that calling an ES6 class constructor will create a new object instance, even if you already have one with the right prototype chain (even when using Reflect.construct(), which you really can’t depend on in practice). Newless tries to work around this by setting the prototype of the instance you already had (i.e. the one a function constructor called SuperConstructor.call(this) with) to the object that was created when instantiating the underlying class constructor. This preserves all the instance properties, but means that the this in the superclass’s constructor is not the exact same object as the this in the subclass’s constructor. (It also returns the this from the superclass’s constructor, so a knowledgeable inheriting function can work with that instead and be totally safe with no caveats.)
I have a fair amount of control over how the inheritance is setup and how the ES6 class is written but that is about it.
However! If you are in control of the class hierarchy from your class down to the root, you can do a little better than newless can if you are willing to get a little tricky:
class FunctionInheritable {
  constructor() {
    return this._constructor.apply(this, arguments);
  }
  _constructor() {}
  static call(context, ...args) {
    return this.apply(context, args);
  }
  static apply(context, args) {
    return this.prototype._constructor.apply(context, args) || context;
  }
}

class YourActualLibraryClass extends FunctionInheritable {
  // all your inheritable classes will have to use `_constructor` instead of `constructor`
  _constructor(firstArg) {
    // do whatever you want, there are no special requirements on what’s in here
    this.something = firstArg;
    global.libraryInstance = this;
  }
  someLibraryClassFunction() {
    return this.something;
  }
}

// any ES5 or earlier function-style “class” can now inherit with no changes
function SomeEs5FunctionConstructor() {
  this.somethingElse = 'whatever';
  YourActualLibraryClass.call(this, 'something');
}
SomeEs5FunctionConstructor.prototype = Object.create(YourActualLibraryClass.prototype);
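If it helps, a self-contained check of the scheme above (with the global.libraryInstance side effect omitted so it runs anywhere):

```javascript
class FunctionInheritable {
  constructor() {
    return this._constructor.apply(this, arguments);
  }
  _constructor() {}
  static call(context, ...args) {
    return this.apply(context, args);
  }
  static apply(context, args) {
    // Invokes the plain-function _constructor, never the class constructor,
    // so no "cannot be invoked without 'new'" TypeError is thrown.
    return this.prototype._constructor.apply(context, args) || context;
  }
}

class YourActualLibraryClass extends FunctionInheritable {
  _constructor(firstArg) {
    this.something = firstArg;
  }
  someLibraryClassFunction() {
    return this.something;
  }
}

// Untouched ES5-style subclass using the familiar Super.call(this) idiom:
function SomeEs5FunctionConstructor() {
  this.somethingElse = 'whatever';
  YourActualLibraryClass.call(this, 'something');
}
SomeEs5FunctionConstructor.prototype = Object.create(YourActualLibraryClass.prototype);

const inst = new SomeEs5FunctionConstructor();
console.log(inst.someLibraryClassFunction());        // 'something'
console.log(inst instanceof YourActualLibraryClass); // true
```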
But, bottom line, you’re going to have to do something funky if you want to upgrade function-style classes in the middle of an inheritance chain to ES6 classes without imposing any changes on the ultimate subclasses at the end of the chain. ES6 simply goes out of its way to make it impossible to do that without some hackery, be it some code like the above, a tool like newless, or a transpiler (that does not fully enforce ES6 restrictions).
As a library author, I’ve run into the same issues you have here—with rare exception, it’s still much too early to impose ES6-only support on a library’s users. It’s why libraries I work on generally haven’t moved to ES6 classes unless they’re doing something special as noted above.
Hope that helps,
Rob
On Sun, May 15, 2016 at 3:07 AM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
Thanks Andy, I think that bug has the exact same concerns and valid answers for Boris too. Here it's also the only valid option to properly extend classes in ES5, and I can't wait for such a "warning" to go away; most developers have been scared away by the same warning in my document.registerElement polyfill, but there's no other way, hence my complaint.
This is another reason we killed the warning - for a lot of polyfills there is just no other way to get the desired behavior.
Another reason - the cases where setting an object's prototype was the right thing to do were disproportionately cases that affect library authors, but then it's mainly the library users who see console warnings. So the warning was not even being shown to the right people.
FF should probably stop wasting time warning about what devs should do or not, if devs are using standardized practices. Deprecation is OK, scary messages without details are just ...yak.
Right, this is another reason we killed the warning - the way it was worded was pointlessly scary.
In other words, we agree completely, and that's why we killed the warning.
That said, it remains true that changing an object's prototype kind of fundamentally interferes with techniques which all fast ES implementations use to optimize property/method access. The warning is going away. The situation it is warning about is not going away.
It's funny, if you think that the entire story of __proto__ exists because the Zepto library was using it, and was not willing to ditch, the conversion of document.querySelectorAll('css') results into an Array via __proto__ ... and only because that was way faster than the Array.prototype.slice.call(document.querySelectorAll('css')) alternative.
ECMAScript standardized a practice used to improve performance on developers side ... turns out, that was rather Harakiri for performance optimizations.
Hilarious. I hope we all learned something from it (though unfortunately I'm sure we didn't).
Best
In short, cache invalidation is hard. Standard disclaimer: everything below is a radical simplification of what really goes on in a JS engine...
When a property access (or equivalently, a method call) happens, the standard says to do a lookup along the prototype chain to find out where the property actually lives.
This takes time. So implementations cache property lookup results. But that turns out to be tricky. The next time that line of JS code runs, you may be accessing a different object, or the object may have been mutated somehow. How do we know the cached result still applies? Well, there are two ways.
- We can check each time the code runs, to make sure the object this time is similar enough to the object last time, and the cached result is still valid. Checking still takes time, but not as much time as a full lookup.
- We can say, ok, this cache entry is guaranteed to be valid as long as X, Y, and Z don't happen -- we can make a list of invalidating events, such as the property being deleted, or an unexpectedly different kind of object being passed in to this line of code. And then the engine has to notice when any of those things happen and purge corresponding cache entries. This is faster than approach #1 -- but then unexpected events deoptimize your code.
Note how approach #2 turns the "cached result" into a kind of performance assumption. The code runs fast until the assumption gets broken. Such assumptions even get baked into jit code, and then instead of "purging" a cache entry we have to throw away a bunch of compiled machine code and start fresh with less-optimistic assumptions. This is not even rare: it is a totally normal thing that happens...
Anyway, the point of all that is, changing the [[Prototype]] of an object is one of these events that can invalidate lots of cached results at once. Neither approach to caching can cope with that and still run at full speed.
I guess I should note that if the change happens early enough during library setup, and then never happens again at run time, some implementations might cope better than others. I think ours sets a bit on the object that means "my [[Prototype]] has been changed" and invalidates some kinds of cached results forever after, because that is the simplest thing. We could always make more corner cases fast by making the engine even more complicated! But if you got through all of the above and you're thinking "well, you could just add another hack, it's only a little hack" then maybe you are not thinking about JS engines as software that has to be maintained for the long haul. :)
B.prototype = Object.create(A.prototype)
is less of a problem, for
our implementation, because objects created by constructor B later get
a prototype chain where every object is clean (none of them have ever
had their [[Prototype]] changed; so no assumptions have been
invalidated).
On May 16, 2016, at 10:31 AM, Jason Orendorff <jason.orendorff at gmail.com> wrote:
...
B.prototype = Object.create(A.prototype)
is less of a problem, for our implementation, because objects created by constructor B later get a prototype chain where every object is clean (none of them have ever had their [[Prototype]] changed; so no assumptions have been invalidated).
Jason,
Do you, or have you, considered special-casing __proto__ used in object literals:
let p = {
__proto__: Array.prototype,
m1 () {…},
m2 () {…}
};
Lots of good declarative shape information in that form.
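A sketch of what that form gives you; because __proto__ appears in the literal, the prototype is set at creation time rather than mutated afterwards:

```javascript
let p = {
  __proto__: Array.prototype,
  m1() { return 'm1'; },
  m2() { return 'm2'; }
};

console.log(Object.getPrototypeOf(p) === Array.prototype); // true
console.log(p.m1());           // 'm1'
console.log(Array.isArray(p)); // false: p inherits from Array.prototype
                               // but is still a plain object
```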
On Sun, May 15, 2016 at 6:23 PM, Rob Brackett <rob at robbrackett.com> wrote:
I'm trying to provide a path where common code can migrate to ES6 classes before all the consumers have. So I was really looking for something that didn't require the subclasses to be touched at all (I wasn't clear about this).
I hope it was clear that newless supports this (functions, classes, and other newless constructors can all inherit from newless constructors and vice versa). Obviously there is the caveat noted in the README about dealing with this. In newless, it’s mostly a non-issue unless the super class needs to keep a reference to the actual instance that was created.
I didn't look as closely as I should have the first time around.
You simply can’t get around the fact that calling an ES6 class constructor will create a new object instance, even if you already have one with the right prototype chain (even when using Reflect.construct(), which you really can’t depend on in practice). Newless tries to work around this by setting the prototype of the instance you already had (i.e. the one a function constructor called SuperConstructor.call(this) with) to the object that was created when instantiating the underlying class constructor. This preserves all the instance properties, but means that the this in the superclass’s constructor is not the exact same object as the this in the subclass’s constructor. (It also returns the this from the superclass’s constructor, so a knowledgeable inheriting function can work with that instead and be totally safe with no caveats.)
Having a different "this", multiple objects being instances, and a unique prototype chain per instance is a little too harsh for my use case.
I have a fair amount of control over how the inheritance is setup and how the ES6 class is written but that is about it.
However! If you are in control of the class hierarchy from your class down to the root, you can do a little better than newless can if you are willing to get a little tricky:
class FunctionInheritable {
  constructor() {
    return this._constructor.apply(this, arguments);
  }
  _constructor() {}
  static call(context, ...args) {
    return this.apply(context, args);
  }
  static apply(context, args) {
    return this.prototype._constructor.apply(context, args) || context;
  }
}

class YourActualLibraryClass extends FunctionInheritable {
  // all your inheritable classes will have to use `_constructor` instead of `constructor`
  _constructor(firstArg) {
    // do whatever you want, there are no special requirements on what’s in here
    this.something = firstArg;
    global.libraryInstance = this;
  }
  someLibraryClassFunction() {
    return this.something;
  }
}

// any ES5 or earlier function-style “class” can now inherit with no changes
function SomeEs5FunctionConstructor() {
  this.somethingElse = 'whatever';
  YourActualLibraryClass.call(this, 'something');
}
SomeEs5FunctionConstructor.prototype = Object.create(YourActualLibraryClass.prototype);
Having some variant of this (parallel ES5/ES6 construction) is what I'm currently investigating.
But, bottom line, you’re going to have to do something funky if you want to upgrade function-style classes in the middle of an inheritance chain to ES6 classes without imposing any changes on the ultimate subclasses at the end of the chain. ES6 simply goes out of its way to make it impossible to do that without some hackery, be it some code like the above, a tool like newless, or a transpiler (that does not fully enforce ES6 restrictions).
As a library author, I’ve run into the same issues you have here—with rare exception, it’s still much too early to impose ES6-only support on a library’s users. It’s why libraries I work on generally haven’t moved to ES6 classes unless they’re doing something special as noted above.
Yes, we are on the same page here. I just need to provide some guidance to future developers that have this problem.
Good call; however, it wouldn't work so well with constructors because Object.setPrototypeOf(B, A) is still needed here, and also trashing the initial B.prototype with an all-enumerable assignment still wouldn't solve this case:
B.prototype = {
__proto__: A.prototype,
constructor: B, // now enumerable
method() {
// also enumerable
}
};
Maybe having a special case for "classes", the only place where prototype is usually modified, would solve them all?
True. We have some special cases w.r.t. object literals, and I've
thought about optimizing __proto__:
in particular. There's no
fundamental reason we couldn't do it, but so far the syntax does not
seem to be common enough to pay for it.
Rob, you gave an example (which I modified slightly to make it work in a browser console):
class FunctionInheritable {
  constructor() {
    return this._constructor.apply(this, arguments);
  }
  _constructor() {}
  static call(context, ...args) {
    return this.apply(context, args);
  }
  static apply(context, args) {
    return this.prototype._constructor.apply(context, args) || context;
  }
}

class YourActualLibraryClass extends FunctionInheritable {
  // all your inheritable classes will have to use `_constructor` instead of `constructor`
  _constructor(firstArg) {
    // do whatever you want, there are no special requirements on what’s in here
    this.something = firstArg;
  }
  someLibraryClassFunction() {
    return this.something;
  }
}

// any ES5 or earlier function-style “class” can now inherit with no changes
function SomeEs5FunctionConstructor() {
  this.somethingElse = 'whatever';
  YourActualLibraryClass.call(this, 'something');
}
SomeEs5FunctionConstructor.prototype = Object.create(YourActualLibraryClass.prototype);
new SomeEs5FunctionConstructor().someLibraryClassFunction() // "something"
It works, but I don't understand why. For example, if I try the following, it fails:
class Foo {}
Foo.apply(window, ['foo'])
Why is it that the static version you implemented above can .apply itself without this same error (and new is not being used)?
Why is it that the static version you implemented above can .apply itself without this same error (and new is not being used)?
Wow, this is old stuff! Anyway, it’s because it never calls the built-in apply on itself:
class FunctionInheritable {
  // …other bits omitted…
  static apply(context, args) {
    // this calls `apply` on `_constructor`, which is a function,
    // not `constructor`, which is a class constructor
    return this.prototype._constructor.apply(context, args) || context;
  }
}
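A sketch contrasting the two cases: Foo goes through the built-in Function.prototype.apply, which [[Call]]s the class constructor and throws, while Bar's own static apply (a made-up minimal version of the pattern above) shadows the built-in and routes to a plain function:

```javascript
class Foo {}

let threw = false;
try {
  // Built-in Function.prototype.apply invokes the class constructor
  // as a plain call, which throws a TypeError.
  Foo.apply(null, ['foo']);
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

class Bar {
  _constructor(x) { this.x = x; }
  // Shadowing static: never touches the class constructor.
  static apply(context, args) {
    return this.prototype._constructor.apply(context, args) || context;
  }
}

const target = {};
Bar.apply(target, [42]);
console.log(target.x); // 42
```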
Ah, makes sense.
Specifically, I'm looking for a way to call a super class's ES6 constructor without violating the "new" rule (TypeError: Class constructor A cannot be invoked without 'new'):
class A { }
function B() {
  A.call(this); // this breaks
}
B.prototype = Object.create(A.prototype);
Without something like this, it isn't possible to migrate a large code base except "leaves" first. Which is a bad place to be.