Traits library
I looked at your library a few weeks ago and I was pretty impressed by it back then. (I'll take a second look but I'm sure more time has made it even better.)
One serious omission in your library is that traits cannot (could not?) be applied to existing objects. Adding traits to existing objects is important because it allows you to combine prototype-based inheritance with traits by adding a trait to the prototype. Adding traits to the prototype, in turn, makes composition a one-time cost instead of a per-instance cost.
function MyComponent() {}
MyComponent.prototype = Object.create(MySuperClass.prototype);
addTrait(MyComponent.prototype, eventTargetTrait);
var c = new MyComponent;
c.addEventListener('foo', bar);
Keep up the good work,
On Tue, Feb 16, 2010 at 16:57, Erik Arvidsson <erik.arvidsson at gmail.com> wrote:
One serious omission in your library is that traits cannot (could not?) be applied to existing objects. Adding traits to existing objects is important because it allows you to combine prototype-based inheritance with traits by adding a trait to the prototype. Adding traits to the prototype, in turn, makes composition a one-time cost instead of a per-instance cost.
function MyComponent() {}
MyComponent.prototype = Object.create(MySuperClass.prototype);
addTrait(MyComponent.prototype, eventTargetTrait);
var c = new MyComponent;
c.addEventListener('foo', bar);
I knew I should have looked at your updated library before posting. It seems like you do support my scenario above by passing in the proto.
var EventTargetTrait = ...;
function MyComponent() {}
MyComponent.prototype = Trait.create(MySuperClass.prototype, EventTargetTrait);
...
I still have use cases where adding traits to existing objects would be useful but those are more obscure and can be done by setting proto to a new object (shudders).
If I understand the implementation of traits, it provides an ES5-compatible way of composing 'final' properties onto an existing object's prototype. Options provide the meta-data required to define the property descriptor, such as required, etc. Do traits provide an ability to bind a 'context' to the property in the form of a closure so that the property may be provided with additional information? Effectively, a way to curry or export additional information required by the trait when it is called in the context of the object it was added to.
thx kam
I knew I should have looked at your updated library before posting. It
seems like you do support my scenario above by passing in the proto.
var EventTargetTrait = ...;
function MyComponent() {}
MyComponent.prototype = Trait.create(MySuperClass.prototype, EventTargetTrait);
...
Your solution is almost right. If you want an object instantiated from a trait to serve as the prototype of other objects, you should instantiate it using "Object.create". "Trait.create" will generate an object whose methods are 'bound' (i.e. their "this"-binding is fixed). So, self-sends in this object won't delegate back to "child" objects.
A good way to think of it is that Object.create generates 'plain' Javascript objects whereas Trait.create generates 'final' objects.
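A minimal sketch of the difference (the trait and property names below are made up):

var T = Trait({
  hello: function() { return 'hi, ' + this.name; }
});

// 'plain' object: methods stay unbound, so delegation still works
var proto = Object.create(Object.prototype, T);
var child = Object.create(proto);
child.name = 'child';
child.hello(); // 'hi, child' -- self-sends see the child

// 'final' object: methods are bound and frozen, so |this| is fixed
var fixed = Trait.create(Object.prototype, T);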
If I understand the implementation of traits, it provides an ES5-compatible way of composing 'final' properties onto an existing object's prototype. Options provide the meta-data required to define the property descriptor, such as required, etc. Do traits provide an ability to bind a 'context' to the property in the form of a closure so that the property may be provided with additional information? Effectively, a way to curry or export additional information required by the trait when it is called in the context of the object it was added to.
I'm not entirely sure I understand the question. Do you have a particular use case in mind or can you give an example to clarify things? As far as I can understand, if what you want is additional meta-data stored in property descriptors, this can be done by adding additional attributes to the descriptors. In effect, that's what the library already does for 'required' and 'conflicting' properties.
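For instance (the 'documented' attribute below is made up, purely to illustrate): since a trait is just a property descriptor map, extra attributes can ride along in its descriptors, ignored by Object.defineProperty but available to any tool that inspects the map:

var t = Trait({
  compare: function(other) { /* ... */ }
});
t.compare.documented = true; // custom attribute next to value/writable/etc.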
The 'pattern' I'm describing has some similarities to multi-methods.
An example is a drawing program. Depending on 'context' which
includes the user agent, the type of drawing primitive and the type
of drawing filter or effects you need to dynamically build an object which
chooses vml, flash, svg or canvas. The drawing object is told to draw
a circle with a linear gradient. As the designer you do not want to
download implementations of vml, flash, svg and canvas. You also do
not want to build a container object which has gratuitous if statements
to do delegation or contains knowledge of all implementations. This would not
scale given you would like the design to be extensible. For example
next week you may want to add silverlight. You also would like to only download
resources based on things like user agent and version (vml for ie, svg and/or canvas for
webkit, firefox, opera depending on the type of primitive and effect). Patterns like
template-methods do not apply because each implementation may
take different parameters in its methods and given a user agent/version you
may want to do a drawCircle using a canvas on firefox but use svg
on iphone and use flash on webkit. It doesn't sound like traits would
enable this 'pattern' given different methods need to be mixed in with
appropriate context from the implementation and all methods are final.
Also it doesn't look like you could draw a circle and pass things like border,
background in a closure rather than explicitly through parameters, although
you allude to a way via the property descriptor. Passing attributes like
background, border in the property descriptor would seem to be an abuse
of the meta-data framework by passing data not meta-data.
thx kam
On 2010-02-16, at 17:55, Tom Van Cutsem wrote:
Hi,
Mark Miller and I have been working on a small traits library for Javascript. It is documented here: < code.google.com/p/es-lab/wiki/Traits>
Nicely done. A couple of high-level questions. [Full disclosure: I'm with OpenLaszlo, and we have our own class/mixin implementation based on prototypical inheritance, built around some extra syntax, transformed by our compiler, and a runtime library; all running in es3 (and as2 and as3). We haven't yet thought about how we might take advantage of es5.]
- I understand the motivation for traits (removing some of the magic of mixins), but I don't understand why that needs to be anything more than a lint-like tool, rather than insinuating itself into the implementation. The fact that you provide the order-sensitive override compositor weakens the case for strict traits. If override combination somehow made the overridden property accessible with a standard name (super), it seems to me you would have mixins. What am I missing?
- Could you elaborate on this:
The renaming performed by resolve is shallow: it only changes the binding of the property. If the property is a method, it will not change references to the renamed identifier within the method body.
It seems to me that this 'feature' could lead to more magic than mixins. If I read this correctly, overriding a method in a trait will be visible to methods outside the trait, but not to methods inside the trait. That doesn't seem to follow the original traits composition principles.
- In our usage, being able to test that an object "is" an instance of a mixin is quite important. In your description, you imply that type testing is an exercise left to the future. Has this not been an issue in applying your traits library?
- Your 'stateful trait' example shows what I would call 'class state' and 'each-subclass state' (with the latter recommended, but the former could have applications). Because your implementation allows any type of property (not just methods), you also have 'instance state' in your traits. I think this is a good thing. Perhaps you should make that more explicit, since it diverges from the traits references.
- I've lost track of where Harmony is heading with classes, but do you have any thoughts on that? Do you see your traits as an alternative to any other class proposal, or would it be intended as an extension of classes (as the original traits proposal was), should they ever arrive?
Thanks for the clarification. As far as I can tell, what you are describing is an example that would require trait composition to be postponed until runtime (as it depends on runtime context). The way I see it, the traits library would not prevent that: since traits are not static entities but just property maps that are manipulated at runtime, I see no fundamental obstacle to writing a generic dispatching function that, given some context, returns the appropriate traits or trait composition making up a drawing editor.
However, I would like to point out that if this is your desired use case, then traits may not be the right building block. As you point out yourself, the example you're describing seems like a perfect match for multi-methods. I do not think that 'dynamic trait composition' is a good alternative for multi-methods. At least, that's not the use case for which traits have been designed. Traits are an alternative to composition by inheritance. As is the case in typical single/mixin/multiple-inheritance schemes, the relationship between the inheriting entities is usually known (and declared) statically and is immutable. I see traits more like a static building block that can be used to organize code (while enabling a higher reuse potential than that provided by classes+single-inheritance).
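A rough sketch of such a dispatching function (all names below are hypothetical):

// traits are runtime values, so they can be selected by context
var backends = { canvas: CanvasDrawTrait, svg: SvgDrawTrait, vml: VmlDrawTrait };

function makeEditor(context) {
  var drawTrait = backends[context.backend]; // chosen by user agent, primitive, ...
  return Object.create(editorProto, Trait.compose(EditorBaseTrait, drawTrait));
}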
Some details:
It doesn't sound like traits would
enable this 'pattern' given different methods need to be mixed in with appropriate context from the implementation and all methods are final.
Methods are 'final' upon trait composition only when a property map is instantiated using "Trait.create". If "Object.create" is used, the instantiated object is a plain Javascript object with all of the malleability this implies.
Passing attributes like background, border in the property descriptor would seem to be an abuse of the meta-data framework by passing data not meta-data.
I completely agree. When I proposed the use of attributes to store data in property descriptors in my earlier message, I thought you meant to store meta-data.
- I understand the motivation for traits (removing some of the magic of
mixins), but I don't understand why that needs to be anything more than a lint-like tool, rather than insinuating itself into the implementation.
Good point. For stateless traits, I think you're right that a preprocessor would be sufficient. For stateful traits, since state is captured using lexical scoping, you end up with the same optimization issues involved with getting the "objects-as-closures" pattern implemented efficiently. The same goes for traits instantiated using "Trait.create" (which tries to uphold the integrity of |this| by turning all of the trait's methods into bound methods). Since the binding must be done on a per-instance basis, object instances created in this way cannot share method implementations without help from the implementation.
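For reference, a minimal instance of the "objects-as-closures" pattern; every call allocates fresh function objects, which is exactly the sharing problem a trait-aware implementation would have to address:

function makeCounter() {
  var count = 0; // per-instance state, captured lexically
  return {
    increment: function() { return ++count; },
    value: function() { return count; }
  };
}
var c1 = makeCounter(), c2 = makeCounter(); // no shared method implementations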
The fact that you provide the order-sensitive override compositor weakens the case for strict traits. If override combination somehow made the overridden property accessible with a standard name (super), it seems to me you would have mixins. What am I missing?
You are absolutely right in stating that the override operator weakens the strength of the traits approach. If you perform all of your trait composition via override, you are effectively performing inheritance/mixin composition, with the associated gotchas. The benefit of traits is of course that they provide this other operator, compose, with nicer properties. The intent of the traits library is of course that compose be used more frequently than override. I provided override for two reasons:
- the original traits model also includes it. In the original traits model, when a composite trait is eventually composed with a class, the class and the trait are composed using override.
- as a convenience. override may sometimes simply be the right operator for the job. If it captures the semantics of the composition you're trying to express, there's nothing wrong with using it. The alternative would be to use compose + resolve: resolve all conflicts by explicitly excluding all properties-to-be-overridden.
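Concretely, assuming resolve excludes a property by mapping its name to undefined (as described on the wiki), the two spellings of "T1's foo wins over T2's foo" would be:

// precedence composition: T1's conflicting properties win
var viaOverride = Trait.override(T1, T2);

// equivalent compose: explicitly exclude the overridden name from T2
var viaCompose = Trait.compose(T1, Trait.resolve({ foo: undefined }, T2));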
- Could you elaborate on this:
The renaming performed by resolve is shallow: it only changes the binding of the property. If the property is a method, it will not change references to the renamed identifier within the method body.
It seems to me that this 'feature' could lead to more magic than mixins. If I read this correctly, overriding a method in a trait will be visible to methods outside the trait, but not to methods inside the trait. That doesn't seem to follow the original traits composition principles.
Ok, my bad. I should have made this clearer. It is not the case that overriding the method is only visible outside of the trait; it will also be visible inside the trait (so we don't break any trait composition principles here). For example, given:
var t = Trait({
  foo: function() { ... },
  bar: function() { ... this.foo() ... }
})
Then, Trait.resolve({ foo: 'baz' }, t) results in:
Trait({
  baz: function() { ... },
  bar: function() { ... this.foo() ... },
  // references to 'foo' in method bodies are not renamed,
  // but internal methods do 'see' the new 'baz' method
  foo: Trait.required
  // since 'foo' was renamed and this trait may depend on it,
  // a composer must provide a new definition for 'foo'
})
This is completely in line with the original trait composition semantics. In particular, it upholds the trait "flattening property" that trait composition can be performed statically without transforming method bodies.
- In our usage, being able to test that an object "is" an instance of a
mixin is quite important. In your description, you imply that type testing is an exercise left to the future. Has this not been an issue in applying your traits library?
My personal impression is that type testing should be completely orthogonal to reuse and inheritance mechanisms. To put it in Java terms: I think "obj instanceof type" is only OK if type is an interface, not a class. Since Javascript does not have a built-in notion of 'interface' (other than feature-testing an object for the methods it implements), I couldn't figure out a good way to reconcile 'instanceof' and traits. Admittedly, for traits like "Enumerable" and "Comparable", the trait is itself a useful abstract type. But I can equally imagine traits being used to factor out more ad hoc implementation code whose type you don't want to see propagated into objects instantiated from it, in the interest of not leaking implementation details to clients of those objects.
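In that spirit, feature-testing needs nothing more than a structural check along these lines (this helper is not part of the library):

function provides(obj, names) {
  // duck-type check: does obj implement all of the named methods?
  return names.every(function(name) {
    return typeof obj[name] === 'function';
  });
}
provides(obj, ['compareTo']); // e.g. testing for a Comparable-like object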
- Your 'stateful trait' example shows what I would call 'class state' and 'each-subclass state' (with the latter recommended, but the former could have applications). Because your implementation allows any type of property (not just methods), you also have 'instance state' in your traits. I think this is a good thing. Perhaps you should make that more explicit, since it diverges from the traits references.
Thanks for pointing this out.
- I've lost track of where Harmony is heading with classes, but do you have any thoughts on that? Do you see your traits as an alternative to any other class proposal, or would it be intended as an extension of classes (as the original traits proposal was), should they ever arrive?
I'm glad you ask this question. To be frank, I don't see what benefits classes would bring in addition to traits. As a unit of code reuse, trait composition is more flexible than class inheritance. As a unit of encapsulation, closures are more flexible than private instance variables. As a unit of instantiation, 'maker' functions are more flexible than class constructors. As a unit of implementation sharing, traits have the same properties as classes, given a trait-aware implementation. As a unit of classification, interface types are more flexible than class types (of course ES doesn't have interfaces, so I think a solid object-type-testing mechanism would be a more useful addition to ES than classes would be).
In the original traits model, since traits are stateless, classes play an additional role not fulfilled by traits: carrying instance variables. However, in an extended trait model such as this one that allows traits to define instance variables as well as methods, traits can now take over this role from classes as well.
Please do correct me if I'm missing a role for which classes do provide significant added value.
Thanks for your excellent clarifications below. I agree traits are a way to compose different blocks from different 'classes', which avoids the limitations of single inheritance. One question about using traits in this way is whether traits (either using Trait.create or Object.create) could be used to replace prototypical inheritance rather than classical inheritance. That is, can traits be used to compose different blocks from different instances rather than different classes?
kam
On Wed, Feb 17, 2010 at 2:42 PM, P T Withington <ptw at pobox.com> wrote:
On 2010-02-16, at 17:55, Tom Van Cutsem wrote:
Hi,
Mark Miller and I have been working on a small traits library for Javascript. It is documented here: < code.google.com/p/es-lab/wiki/Traits>
Nicely done. A couple of high-level questions. [Full disclosure: I'm with OpenLaszlo, and we have our own class/mixin implementation based on prototypical inheritance, built around some extra syntax, transformed by our compiler, and a runtime library; all running in es3 (and as2 and as3). We haven't yet thought about how we might take advantage of es5.]
I understand the motivation for traits (removing some of the magic of mixins), but I don't understand why that needs to be anything more than a lint-like tool, rather than insinuating itself into the implementation. The fact that you provide the order-sensitive override compositor weakens the case for strict traits. If override combination somehow made the overridden property accessible with a standard name (super), it seems to me you would have mixins. What am I missing?
Could you elaborate on this:
The renaming performed by resolve is shallow: it only changes the binding of the property. If the property is a method, it will not change references to the renamed identifier within the method body.
It seems to me that this 'feature' could lead to more magic than mixins. If I read this correctly, overriding a method in a trait will be visible to methods outside the trait, but not to methods inside the trait. That doesn't seem to follow the original traits composition principles.
In our usage, being able to test that an object "is" an instance of a mixin is quite important. In your description, you imply that type testing is an exercise left to the future. Has this not been an issue in applying your traits library?
Your 'stateful trait' example shows what I would call 'class state' and 'each-subclass state' (with the latter recommended, but the former could have applications). Because your implementation allows any type of property (not just methods), you also have 'instance state' in your traits. I think this is a good thing. Perhaps you should make that more explicit, since it diverges from the traits references.
I've lost track of where Harmony is heading with classes, but do you have any thoughts on that? Do you see your traits as an alternative to any other class proposal, or would it be intended as an extension of classes (as the original traits proposal was), should they ever arrive?
Tom & I started with the question "If we could have any syntactic sugar we wanted, how would we express traits?". Once we had a plausible answer, we asked "How close could we come with a pure library, with no new syntactic sugar?". The answer was, so close that the remaining benefit of additional syntax was tiny, so we dropped it.
Allan and I had previously agreed that the overlap in functionality between his extended object literal syntax <strawman:object_initialiser_extensions> and my classes-as-sugar proposal (<esdiscuss/2009-March/009115> and <esdiscuss/2009-March/009116>) is sufficiently great that only one or the other should become part of future ES. I would add traits to the list. All three provide, in some sense, extended object literals for high integrity objects, and with similar implementation burdens to avoid per-instance bound method allocation cost. But they do so in very different ways, and with different flexibilities beyond their common core functionality.
Of the three, I would rather see Tom's traits than either my classes-as-sugar or Allan's extended object literal syntax. But as long as we get efficient high integrity objects without undue complexity, I am rather happy.
For comparison, the classes-as-sugar example at < esdiscuss/2009-March/009116>
class Point(privX, privY) {
let privInstVar = 2;
const privInstConst = -2;
public toString() {
return ('<' + getX() + ',' + getY() + '>');
};
public getX() { return privX; };
public getY() { return privY; };
public let pubInstVar = 4;
public pubInstConst = -4;
}
rewritten using Tom's traits:
function Point(privX, privY) {
let privInstVar = 2;
const privInstConst = -2;
let pubInstVar = 4;
const pubInstConst = -4;
return Trait.object({
toString: function() {
return ('<' + this.getX() + ',' + this.getY() + '>');
},
getX: function() { return privX; },
getY: function() { return privY; },
get pubInstVar() { return pubInstVar; },
pubInstConst: pubInstConst
});
}
Object.freeze(Point);
Object.freeze(Point.prototype);
with the caveat that the two Object.freeze()s are manually hoisted to the beginning of the block in which Point is defined. Given only the const function sugar proposed at the top of <esdiscuss/2009-March/009116>, even this remaining annoyance would go away.
On Wed, Feb 17, 2010 at 3:49 PM, Tom Van Cutsem <tomvc at google.com> wrote:
[...]
- I've lost track of where Harmony is heading with classes, but do you have any thoughts on that? Do you see your traits as an alternative to any other class proposal, or would it be intended as an extension of classes (as the original traits proposal was), should they ever arrive?
I'm glad you ask this question. To be frank, I don't see what benefits classes would bring in addition to traits.
1. As a unit of code reuse, trait composition is more flexible than class inheritance.
2. As a unit of encapsulation, closures are more flexible than private instance variables.
3. As a unit of instantiation, 'maker' functions are more flexible than class constructors.
4. As a unit of implementation sharing, traits have the same properties as classes, given a trait-aware implementation.
5. As a unit of classification, interface types are more flexible than class types (of course ES doesn't have interfaces, so I think a solid object-type-testing mechanism would be a more useful addition to ES than classes would be).
[Numbers added to list entries above]
Hi Tom, I love the elegance of this answer. I agree with your conclusion, and I agree with the above list regarding Java-like, Smalltalk-like, or ES4-like classes. However, on points #2 and #3, your traits proposal is identical to my classes-as-sugar proposal.
One question about using traits in this way is whether traits (either using Trait.create or Object.create) could be used to replace prototypical inheritance rather than classical inheritance. That is, can traits be used to compose different blocks from different instances rather than different classes?
Yes, the traits library enables this, because it allows instances to be 'lifted' into traits. Just like the library provides two ways to generate an object from a property map (either use Object.create or Trait.create), the library provides two ways to generate a property map from an object:
- either use the Trait constructor: Trait(object) -> property-map
- or use Object.getOwnProperties(object) -> property-map
I didn't mention this on the wiki page, but the traits.js library actually goes ahead and adds the "missing" 'getOwnProperties' utility method to Object if it doesn't already exist, precisely because it's so useful in combination with representing traits as property maps. getOwnProperties is a simple utility method based on ES5's getOwnPropertyNames + getOwnPropertyDescriptor built-in methods.
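A sketch of that utility in terms of those two built-ins:

if (!Object.getOwnProperties) {
  Object.getOwnProperties = function(obj) {
    var map = {};
    Object.getOwnPropertyNames(obj).forEach(function(name) {
      map[name] = Object.getOwnPropertyDescriptor(obj, name);
    });
    return map; // a property map, usable wherever the library expects traits
  };
}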
Just like Trait.create is the "trait-aware" version of Object.create, the Trait constructor is the "trait-aware" version of Object.getOwnProperties. The Trait constructor will, for example, recognize properties bound to Trait.required, whereas Object.getOwnProperties will not.
That said, I envisaged the Trait constructor to be used in the absolute majority of cases in conjunction with an in-line object literal, as in Trait({...}). Generally, I don't think it makes a lot of sense to turn an actual instance back into a trait (it's like turning an object into a class). But the library doesn't stop you from using Trait or Object.getOwnProperties on arbitrary instances if this is really what you want to do.
As a follow-up to my earlier reply to Kam, I included a diagram at < code.google.com/p/es-lab/wiki/Traits#API> that helps to clarify what
kinds of objects and operations are involved in using the library.
Thanks Tom.
The traits concept resonates with something very similar I had prototyped, which you briefly describe under 'Traits and Type Tests'. Specifically where you describe:

var o = Trait.create(proto, trait, { implements: [ brand1, brand2, ... ] });
The object literal I had used was:

{
  'private': { ... },
  'constructor': function() { ... },
  'privileged': { ... },
  'public': { ... },
  'actions': { ... },
  'init': function() { ... }
}
By providing groupings in the form of a meta-model, type classification was greatly improved in the resulting object, along with reflection capabilities and other advantages when a javascript meta-model is enabled. For example, 'private' would mean closures, and 'actions' would mean the methods would be automatically bound via Function.bind.

One thing I did which may be worth considering is to make the 'Trait' object extensible by allowing it to also be extended via composition. Although this is somewhat metacircular, it provides the ability to add new meta-data other than 'required' and have that meta-data handled within the extended Trait object. For example, if you wanted to indicate whether a property should be documented you could do something like { a: Trait.document } where you would expand Trait to handle this new meta-data by doing something like Trait.compose(Trait, {'document': function() { ... }}). Matching a new meta-data property with a Trait method of the same name would be one possible way of making Trait extensible in ways similar to Java annotations. Given your expertise in this area, would you view this meta-circular approach as a way to extend Trait into areas beyond composition where more semantic behavior could be injected given a new meta-data keyword? You may feel this distorts the simple semantics but I would be interested in hearing your views.
thx kam
On Feb 17, 2010, at 4:30 PM, Mark S. Miller wrote:
Tom & I started with the question "If we could have any syntactic
sugar we wanted, how would we express traits?". Once we had a
plausible answer, we asked "How close could we come with a pure
library, with no new syntactic sugar?". The answer was, so close
that the remaining benefit of additional syntax was tiny, so we
dropped it.
First, kudos for traits.js, it has the long-sought-after (even back in
ES4 days, with "fixtures") ability to specify fixed sets of methods
and properties that can be composed freely, but which cannot be
overridden without extra effort (if one allows that calling override
is somehow "extra" compared to calling compose). Great work.
Second, demurral from the claim that the benefits of syntax are tiny,
for either users and implementors.
Users have to put up with Trait.object({}) instead of something like
trait {} (without an "extends" clause), in the following example from
your mail (revised to use const to create a frozen function with
frozen prototype bound to a block-scoped const binding):
const Point(privX, privY) {
let privInstVar = 2;
const privInstConst = -2;
let pubInstVar = 4;
const pubInstConst = -4;
return Trait.object({
toString: function() {
return ('<' + this.getX() + ',' + this.getY() + '>');
},
getX: function() { return privX; },
getY: function() { return privY; },
get pubInstVar() { return pubInstVar; },
pubInstConst: pubInstConst
});
}
This is non-trivially noisy, but admittedly less of an issue (in this
example, at any rate) for users than it is for implementors,
considering that graphics-intensive code might make 1e6 Points, with
3e6 bound methods (if I'm counting correctly).
Yet the syntactic noise and unchecked-till-runtime boilerplate tax
remain an issue for users, and the Object.freeze calls add to this
user-facing complexity if you revert the const Point(...) {...} change
to use a function definition.
Syntax errors instead of runtime typo complaints are always a win for
users.
If you keep that new const syntax, then there's no backward-
compatibility argument against better trait syntax, by which users
could say what they mean more simply and correctly, and
implementations could optimize much more easily.
For implementors as well as bean-counting users, optimizing bound
methods can be important. JS is being used to push pixels, and not
just pixels: the scene graph, physics, and game logic all want speed
at scale. It's a safe bet that programmers won't use traits only "in
the small".
Your posts seem to ask or suggest that implementations pattern-match for layered applications of Object.freeze and Function.prototype.bind, which is a tall order in itself.
But even doing such matching is pessimistic if it results in a "bound
method pair" (meth, self) for a given method value (meth) and receiver
object (self), created and frozen at the point where traits.js calls
freeze(bindThis(meth, self)).
Bound methods need not be reified unless extracted as funargs (even if
only as operands of ===, e.g.), or frozen unless tampered with. If
they're (typically, or only ever) called on the right receiver, then
the cost of creating a bind result can be avoided.
Optimizing to defer layered freeze/bind combinations until method
value extraction or mutation attempt requires more than fancy pattern-
matching on the part of an implementation. It requires a read barrier.
Getters already require such a barrier, but it is on the slow path.
The overhead of a method binding/freezing barrier could be hidden in
that slow path, but it is no picnic for implementors even if they have
explicit, unambiguous syntax.
You could say "that's why implementors make the big bucks" (I've
written this in the past, half in jest).
But I'm not concerned only about a once-per-implementation-effort tax
on implementors who wish to avoid 3e6 methods per point, or other such
poor performance scaling. I'm more concerned about complexity, which
breeds bugs (including exploitable ones if the implementor is working
in an unsafe language).
Put together the user and implementor taxes, and you have sufficient
cause for new syntax.
Add to this tax revolt the plain desire for better syntax-as-user-
interface. If you want const f(){}, why //wouldn't// you want
declarative trait syntax?
On 2010-02-17, at 18:49, Tom Van Cutsem wrote:
As a unit of code reuse, trait composition is more flexible than class inheritance.
+1
On 2010-02-17, at 19:30, Mark S. Miller wrote:
Of the three, I would rather see Tom's traits than either my classes-as-sugar or Allan's extended object literal syntax.
+1
On 2010-02-18, at 00:29, Brendan Eich wrote:
Add to this tax revolt the plain desire for better syntax-as-user-interface. If you want const f(){}, why //wouldn't// you want declarative trait syntax?
+1
On 2010-02-17, at 18:49, Tom Van Cutsem wrote:
As a unit of classification, interface types are more flexible than class types (of course ES doesn't have interfaces, so I think a solid object-type-testing mechanism would be a more useful addition to ES than classes would be).
This seems like an important loose end to me. If Harmony adopts a declarative trait syntax, is there still a reason to have an orthogonal type mechanism, rather than traits being types?
On Feb 17, 2010, at 9:29 PM, Brendan Eich wrote:
Bound methods need not be reified unless extracted as funargs (even if only as operands of ===, e.g.), or frozen unless tampered with. If they're (typically, or only ever) called on the right receiver, then the cost of creating a bind result can be avoided.
Optimizing to defer layered freeze/bind combinations until method value extraction or mutation attempt requires more than fancy pattern-matching on the part of an implementation. It requires a read barrier.
In case it wasn't clear, a read barrier is enough, even to cope with
mutation attempts (deferred freeze), since method extraction is a
read, and to mutate a method value (function object) you have to read
(not invoke) the method value:
o.m(); // ok, invoking with bound receiver, not reading o.m
foo(o.m); // not ok, must bind and freeze o.m
o.m.bar = 1; // mutation on o.m reference base, first bind and freeze o.m
The barrier can thus be compiled into forms that read o.m without
immediately invoking it. Of course the overhead is still too high if
we only compile it, so there has to be a runtime fast path (a
polymorphic inline cache), but that is already the case due to
getters, as noted.
If the method body uses arguments and is not strict-mode code, then it
could somehow use arguments.callee and leak an unbound, unfrozen (and
"joined", see ES3 -- the first optimization to do here, prior to
traits and anything in ES5, is to defer evaluating a function
expression into a fresh object per evaluation -- SpiderMonkey does
this optimization under some conditions) method reference.
So arguments is a hazard (it's hard to analyze precisely for
arguments.callee, if you see arguments[i] or baz(arguments) you have
to assume the worst).
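To illustrate the hazard (a hypothetical trait, non-strict code):

var t = Trait.object({
  leak: function() {
    return arguments.callee; // the underlying function, not the frozen bound wrapper
  }
});
var f = t.leak();        // f is unbound and unfrozen
f.call(someOtherObject); // |this| integrity is gone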
Same goes for named function expressions as methods, where the method
body or a function nested in it uses the name of the function
expression.
These are more places that need the barrier, or (to simplify
compilation) defeat the optimization of deferring bind and freeze (and
disjoining).
But let's back up again, to traits.js:
var Trait = (function(){
  . . .
  var SUPPORTS_DEFINEPROP = !!Object.defineProperty;
  var call = Function.prototype.call;
  . . .
Static analysis of just this file cannot be sure that Object and
Function have their original meanings -- these are writable properties
of the global object. So some speculation, at the least, is required
to get off the ground at all, in trying to pattern-match
freeze(bind(...)) etc.
Recognizing when Object and Function still mean what they originally
meant is something optimizing VMs may do, but it's a high bar to set
for traits performance to scale well.
The method read barrier complexity on top is also on the boards or
already partly implemented in at least one VM, but again why should
this be required of implementors, on top of the syn-tax on users?
For example, if you wanted to indicate whether a property should be documented you could do something like { a: Trait.document } where you would expand Trait to handle this new meta-data by doing something like Trait.compose(Trait, {'document': function() { ... }}). Matching a new meta-data property with a Trait method of the same name would be one possible way of making Trait extensible in ways similar to Java annotations. Given your expertise in this area, would you view this meta-circular approach as a way to extend Trait into areas beyond composition where more semantic behavior could be injected given a new meta-data keyword? You may feel this distorts the simple semantics but I would be interested in hearing your views.
Now I see what you're getting at. Interesting! If you think of traits.js as a general-purpose property descriptor map manipulation library, then this seems to be a very nice-to-have feature. If you think of traits.js as a trait composition library, then it probably does introduce too much distraction, especially if the meta-data is not related to trait composition at all.
One possibility would be to fork traits.js into two libraries, such that the traits-specific composition library is a specialization of a more general-purpose property map combinator library. I don't know whether it's worth doing that. I would need to have a better idea of what other useful specializations one could construct out of a more general-purpose library, but perhaps we should have that discussion offline (or at least fork it off of the traits-oriented discussion thread).
Put together the user and implementor taxes, and you have sufficient cause for new syntax.
Add to this tax revolt the plain desire for better syntax-as-user-interface. If you want const f(){}, why //wouldn't// you want declarative trait syntax?
Hi Brendan,
Thanks for enlightening us with the implementation-level issues involved in getting user-land traits optimized. That definitely puts things in perspective. I wholeheartedly agree that dedicated syntax would be of great help to users, and to implementors as a not-to-be-underestimated bonus.
I am not at all opposed to dedicated syntax/semantics for traits in ES-harmony. Think of traits.js more as an exercise in exploring the design space of what is possible today. For example, the fact that ES5's property descriptor maps turned out to be directly usable as traits was a surprising new insight to me.
I think the most useful outcome of this experiment is that it gives us a better idea of what the fundamental limits of a library approach are. For example: that syntax for traits is (mostly) not a boilerplate issue but a semantic issue (early error feedback) + an implementation issue (method sharing).
And who knows, if there is an uptake of this library in ES5, it would help familiarize programmers with traits without them having to wait for dedicated syntax. But I am aware that this is very optimistic: many have previously designed good class, mixin and even trait libraries for ES3, and to the best of my knowledge, none of these seem to have widely caught on.
As a unit of classification, interface types are more flexible than class types (of course ES doesn't have interfaces, so I think a solid object-type-testing mechanism would be a more useful addition to ES than classes would be).
This seems like an important loose end to me. If Harmony adopts a declarative trait syntax, is there still a reason to have an orthogonal type mechanism, rather than traits being types?
It depends. Since traits don't extend or 'subclass' other traits, trait types would not be able to express subtyping directly. And of course, it conflates implementation types with interface types, but I have the feeling many don't consider that to be a show-stopper :-)
Picking up where Tom left off below... I've wondered how you and the ECMAScript body prefer to have particular concepts presented. Given that the lag time between new syntax and conformance across vendors could be months, years or never, it seems that there is always a need to provide a 'shim' or implementation that emulates proposed syntax. I think many concepts including Tom and Mark's steer away from new syntax due to the problems noted. In general should there be due diligence on both? I realize this may vary per strawperson but thought you may have a general philosophy to share.
thx kam
First, this traits proposal looks very nice -- thanks, Tom and Mark, for your work on this.
I want to add another point about the benefit of new syntax by calling out a piece of the code.google.com proposal under "Performance," where it says: "In order for the partial evaluation scheme to work, programmers should use the library with some restrictions." This is one of the places where you have the opportunity to call out more precisely in the language where users can expect better performance. By creating a declarative syntax, you invite implementors to make the common cases efficient, and provide a clearer set of rules for programmers to know when they are or aren't straying into "you can do that, but it'll cost you" territory.
Another side of the same coin is that you /dis/invite users from accidentally straying into the expensive territory. The convenient syntax provides an incentive for people to use the version you want to be the common case.
On Feb 19, 2010, at 7:29 AM, Kam Kasravi wrote:
Picking up where Tom left off below... I've wondered how you and the ECMAScript body prefer to have particular concepts presented. Given that the lag time between new syntax and conformance across vendors could be months, years or never, it seems that there is always a need to provide a 'shim' or implementation that emulates proposed syntax. I think many concepts including Tom and Mark's steer away from new syntax due to the problems noted. In general should there be due diligence on both? I realize this may vary per strawperson but thought you may have a general philosophy to share.
I know your question was to Brendan, but if I may add my $.02: we should be mindful but not terrified of changing the language. That's what we're here to do, after all!
Now, the specific concern over new syntax has been that it prevents a program from even running, which means you can't dynamically test for a feature before using it. Putting aside the fact that you could use `eval' (or dynamic module loading, given a module system) for such a purpose, I'm still not sure how often people actually want to build two different implementations of a program based on whether or not a particular feature exists. We shouldn't hold back the language just because it'll take time for new features to spread enough for commonplace use. That's all the more reason to move forward sooner than later.
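For example, a syntax feature-test via eval might look like this (probing the const function syntax discussed earlier in this thread):

var hasConstFunctions = (function() {
  try {
    eval('const f() {}'); // parse error on engines without the new syntax
    return true;
  } catch (e) {
    return false;
  }
})();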
That said, I think something like the traits library could have a nice migration path, where a library compatible with ES5 could be even more attractive down the road with new syntax.
The proposals, including from Tom and Mark, don't steer away from
const f() {} and other new syntax (destructuring).
Harmony has new syntax, this is not an open issue. As we've discussed
over the years, the language's users deserve new syntax, particularly
for early error reporting. Implementors also benefit.
One answer for the installed base problem is for the server to detect
downrev browsers and send them content translated automatically to
target ES3, even ES3 with workarounds (e.g. for IE's named function
expression binding pollution bug).
The other extreme is to hold off on using the new syntax for five
years, give or take. This can be a problem, but it's easy to
exaggerate it, and to use past monopoly behavior to predict outcomes
in a non-monopoly future.
An example: IE4 had regexps close enough to match Netscape 4.x, but
IE4 lacked not try/catch/finally, so regexps were usable sooner cross-
browser, but try/catch/finally was not usable till more recently.
This example in full, regexps and try/catch/finally, shows how the
situation depends on how fast browser vendors (a) implement new
features; (b) upgrade their installed bases -- (b) is critical not
only for developers, but primarily to protect users by fixing old
security bugs.
Content authors will have to decide among the several solutions. The
JS community should and no doubt will make translation tools available
(even in-browser translators, where feasible).
But we will not freeze syntax.
On Feb 19, 2010, at 9:28 AM, Brendan Eich wrote:
An example: IE4 had regexps close enough to match Netscape 4.x, but
IE4 lacked not try/catch/finally,
Sorry for the doubled post (and the stray "not" after "lacked").
Kam, interesting point. The same issues apply to modules as well as traits:
- module {} vs Module()
- new syntax vs ES5-ish Meta Api
- compile time vs runtime
Could macros - or some kind of AOP-ish compile-time processing - help:

@addtrait mytrait myobj;
@import acme.mymod; // expands to const acme = {mymod: {myfunc: ...;
An ES-Harmony goal is to: "Provide syntactic conveniences ...defined by desugaring into kernel semantics." How could this be achieved? Macro source expansion? What is truly new 'kernel semantics' as opposed to syntax sugar? Interesting stuff.
On Feb 19, 2010, at 9:28 AM, Brendan Eich wrote:
Content authors will have to decide among the several solutions. The
JS community should and no doubt will make translation tools
available (even in-browser translators, where feasible).
To make a more useful followup to my own message, I should note that
the JS community already has made various translators available, notably
- OpenLaszlo
- 280N's Objective-J
- Aza's pyscript (www.azarask.in/blog/post/making-javascript-syntax-not-suck )
- Neil Mix's Narrative JS
- Alex Fritze's Stratified JS
Some of these adhere to likely Harmony syntax more than others ;-).
I'm sure I'm forgetting others. Anyone have a comprehensive list with
links?
On Feb 19, 2010, at 9:33 AM, Kevin Curtis wrote:
Kam, interesting point. The same issues apply to modules as well as
traits:
- module {} vs Module()
- new syntax vs ES5-ish Meta Api
- compile time vs runtime
Could macros - or some kind of AOP-ish compile-time processing - help:

@addtrait mytrait myobj;
@import acme.mymod; // expands to const acme = {mymod: {myfunc: ...;
An ES-Harmony goal is to: "Provide syntactic conveniences ...defined
by desugaring into kernel semantics." How could this be achieved? Macro source expansion? What is truly
new 'kernel semantics' as opposed to syntax sugar? Interesting stuff.
There's no need to ape the C Pre-Processor (@ not #, etc.). Please
don't let that influence you. FYI, there's already an @if/@else/etc.
preprocessor language in IE JScript.
Harmony has new syntax, with semantics defined by desugaring where
doing so yields an unambiguous, abstraction-leak-free, and simple-
enough and clear-enough spec that implementors (the primary spec
audience) are most likely to create interoperable implementations.
The Harmony syntax, whatever it is (and some is approved, e.g.
destructuring, spread, rest -- see harmony:proposals)
, can be translated to ES3+quirks, as needed.
The quirks may be more challenging than the relatively mundane
(outside of generators) issues in targeting ES3.
The translation may have to work harder without @ to anchor its
parsing or lexing (or regexp matching). The benefit is that users can
program in the Harmony edition cross-browser. Let the translator
authors bear the costs of parsing the full language and "lowering" it
to target old browsers.
Could macros - or some kind of AOP-ish compile-time processing - help:

@addtrait mytrait myobj;
@import acme.mymod; // expands to const acme = {mymod: {myfunc: ...;
Macros wouldn't really solve the "I can't parse this" problem. You could package up syntax extensions as macros and provide them via modules, but you'd still then need to dynamically load the modules (or use server-side user-agent sniffing to produce web pages using different modules). And modules are a significant enough language feature that they deserve to be specified directly, not via translation.
An ES-Harmony goal is to: "Provide syntactic conveniences ...defined by desugaring into kernel semantics." How could this be achieved? Macro source expansion? What is truly new 'kernel semantics' as opposed to syntax sugar? Interesting stuff.
This is a design/specification approach, not a technological thing. The idea is that some features can be described in the spec as sugar for something else that already exists. For example, IIRC, Java 5's introduction of for-in loops was described by desugaring.
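In ES terms, think of something like destructuring (already approved for Harmony), which could roughly be specified as sugar:

// sugar:
var { x, y } = point;
// roughly desugars to (modulo evaluating 'point' only once):
var x = point.x, y = point.y;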
Specification-by-elaboration is not entirely unproblematic. It has its place, but it's not an end unto itself. For one, the idea that a desugared construct doesn't change the semantics is hard to make precise without falling into the Turing tarpit.[1] Also, unless the desugaring is dead simple, it starts to turn into specification by implementation.
Dave
[1] For example, why isn't a full-fledged compiler "just desugaring?" There's some pretty intense theoretical research on the topic [2], but some decent rules of thumb: if your translation needs information about its context or deeply analyzes or rearranges the contents of its subterms, you've fallen into the tarpit.
[2] citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4656
On Fri, Feb 19, 2010 at 6:33 PM, David Herman <dherman at mozilla.com> wrote:
An ES-Harmony goal is to: "Provide syntactic conveniences ...defined by desugaring into kernel semantics." How could this be achieved? Macro source expansion? What is truly new 'kernel semantics' as opposed to syntax sugar? Interesting stuff.
This is a design/specification approach, not a technological thing. The idea is that some features can be described in the spec as sugar for something else that already exists. For example, IIRC, Java 5's introduction of for-in loops was described by desugaring.
OK - I thought the 'kernel semantics' was an actual language/artefact that would be targeted by a higher level sugared 'Harmony' language (and also potentially alt-JS's), rather than a spec/design thing. My bad.
thanks Dave, Brendan.
The net-net then is to emphasize syntax but, when possible, outline a migration path. This may have been obvious to many but at least I've been very conservative about suggesting new syntax. Worth far more than 2 pennies :)
kam
Do the various server-side implementations count, for example ejscript, which incorporated many features of es4? Or is the list specific to translation engines that take some ecmascript derivative and produce harmony or es5? Given what I've seen in translators to javascript from language 'X', the list could be pages.
On Feb 19, 2010, at 11:43 AM, Kam Kasravi wrote:
Do the various server-side implementations count, for example ejscript, which incorporated many features of es4? Or is the list specific to translation engines that take some ecmascript derivative and produce harmony or es5? Given what I've seen in translators to javascript from language 'X', the list could be pages.
Of course, there is HaXe and predecessors from Nicolas Cannasse, who
may still be on the list.
For the current thread, I think we are more interested in processors
whose input language is ES3 or higher, in the JS family. Any that are
open source, especially any that run in-browser, are especially
interesting since we can test and modify them, and see how those that
run in-browser perform on downrev browsers and slower CPUs.
Dave had a list on his blog in 2006, calculist.blogspot.com/2006/08/compiling-to-javascript.html -- an update from anyone with time or knowledge would be helpful.
On Fri, Feb 19, 2010 at 8:58 AM, David Herman <dherman at mozilla.com> wrote:
First, this traits proposal looks very nice -- thanks, Tom and Mark, for your work on this.
I want to add another point about the benefit of new syntax by calling out a piece of the code.google.com proposal under "Performance," where it says: "In order for the partial evaluation scheme to work, programmers should use the library with some restrictions." This is one of the places where you have the opportunity to call out more precisely in the language where users can expect better performance. By creating a declarative syntax, you invite implementors to make the common cases efficient, and provide a clearer set of rules for programmers to know when they are or aren't straying into "you can do that, but it'll cost you" territory.
Another side of the same coin is that you /dis/invite users from accidentally straying into the expensive territory. The convenient syntax provides an incentive for people to use the version you want to be the common case.
On Feb 19, 2010, at 7:29 AM, Kam Kasravi wrote:
Picking up where Tom left off below... I've wondered how you and the ECMAScript body prefer to have particular concepts presented. Given that the lag time between new syntax and conformance across vendors could be months, years or never, it seems that there is always a need to provide a 'shim' or implementation that emulates proposed syntax. I think many concepts including Tom and Mark's steer away from new syntax due to the problems noted. In general should there be due diligence on both? I realize this may vary per strawperson but thought you may have a general philosophy to share.
I know your question was to Brendan, but if I may add my $.02: we should be mindful but not terrified of changing the language. That's what we're here to do, after all!
Now, the specific concern over new syntax has been that it prevents a program from even running, which means you can't dynamically test for a feature before using it.
In for a penny, in for a pound. Harmony will already have other new syntax, so this need not be an impediment for a traits proposal. When the benefits of the new syntax are great enough to pay for the cost of new syntax, by all means. For example, I felt that new syntax was justified for my old classes-as-sugar proposal. I only wish to remind everyone to exercise skepticism and restraint. E grew too much syntax over time even though each bit of new syntax seemed justified by itself. The benefits of new syntax are easily apparent. The costs of new syntax are diffuse and most burdensome on those newly learning the language, and so are easy to underestimate, especially by us experts.
So let us experiment with syntactic support for traits. But let's just be careful before blessing any of these experiments. Apple pie I know, but worth saying.
Mark Miller and I have been working on a small traits library for Javascript. It is documented here: < code.google.com/p/es-lab/wiki/Traits>
Traits are reusable building blocks for classes, very similar to mixins, but with fewer gotchas. For example, traits support explicit conflict resolution upon name clashes, and the order in which traits are composed is not significant.
In a nutshell:
The interesting thing about our choice of transparently representing traits as ES5 property maps is that our library can be used as a general-purpose library for manipulating property descriptors in addition to its use as a library to compose and instantiate traits.
A small expository example that uses the library: code.google.com/p/es-lab/source/browse/trunk/src/traits/examples.js
Mark and I were both surprised at how well Javascript accommodates a trait library with very little boilerplate. However, there is one catch to implementing traits as a library. Traits, like classes, are normally simply declared in the program text, but need not necessarily have a runtime representation. Trait composition is normally performed entirely at compile-time (in trait lingo this is called "flattening" the traits). At runtime, no trace of trait composition is left.
Because we use a library approach, traits are not declarative entities and must have a runtime representation. Thus, there is a runtime overhead associated with trait creation and composition. Moreover, because the implementation is oblivious to traits, multiple objects instantiated from the same trait "declaration" don't share structure. However, we did design the library such that, if traits are specified using object literals and property renaming depends only on string literals (which is the common case), a partial evaluator could in principle perform all trait composition statically, and replace calls to Trait.create with a specialized implementation that does support structural sharing between instances (just like an implementation that notices multiple calls to Object.create with the same property descriptor map can in principle arrange for the created objects to share structure).
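Concretely, the statically analyzable "common case" looks like this (trait names hypothetical):

// traits defined by in-line object literals, renaming via string literals
var T = Trait.compose(
  Trait({ foo: function() { /* ... */ } }),
  Trait.resolve({ bar: 'baz' }, OtherTrait)
);
var o = Trait.create(proto, T); // a partial evaluator can see the whole composition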
Any feedback on our design is welcomed. In particular, it'd be interesting to hear how hard/easy it would be for an implementation to recognize the operations performed by our library in order to perform them statically.