Operator overloading revisited

# Christian Plesner Hansen (15 years ago)

Following the decimal discussion where values came up, I looked for information on that. The most substantial material I could find was Mark's operator overloading strawman.

I think the current strawman has a number of weaknesses. It relies on the (hypothetical) type system which makes it fundamentally different from normal method dispatch. It uses double dispatch for all user-defined operators which makes optimization difficult. It is asymmetrical: the left hand side always controls dispatch.

I'd like to propose an alternative approach that avoids these problems. You could call it "symmetric" operator overloading. When executing the '+' operator in 'a + b' you do the following (ignoring inheritance for now):

1: Look up the property 'this+' in a, call the result 'oa'
2: If oa is not a list give an error, no '+' operator
3: Look up the property '+this' in b, call the result 'ob'
4: If ob is not a list give an error, no '+' operator
5: Intersect the lists oa and ob, call the result r
6: If r is empty give an error, no '+' operator
7: If r has more than one element give an error, ambiguity
8: If r[0], call it f, is not a function give an error
9: Otherwise, evaluate f(a, b) and return the result.
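
A minimal sketch of these nine steps in plain JavaScript, assuming for illustration that 'this+' and '+this' are ordinary array-valued properties (whether they should be is discussed later in the thread):

// Sketch only: the nine dispatch steps for 'a + b'.
function dispatchPlus(a, b) {
  var oa = a['this+'];                                                // step 1
  if (!Array.isArray(oa)) throw new TypeError("no '+' operator");     // step 2
  var ob = b['+this'];                                                // step 3
  if (!Array.isArray(ob)) throw new TypeError("no '+' operator");     // step 4
  var r = oa.filter(function (f) { return ob.indexOf(f) >= 0; });     // step 5
  if (r.length === 0) throw new TypeError("no '+' operator");         // step 6
  if (r.length > 1) throw new TypeError("ambiguous '+' operator");    // step 7
  var f = r[0];
  if (typeof f !== 'function') throw new TypeError("not a function"); // step 8
  return f(a, b);                                                     // step 9
}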

The way to define a new operator is to add the same function to the 'this+' list of the left-hand side and the '+this' list of the right-hand side. This would probably be handled best by a simple utility function, say Function.defineOperator:

function pointPlusNumber(a, b) { return new Point(a.x + b, a.y + b); }

Function.defineOperator('+', Point, Number, pointPlusNumber);

Using this approach it's easy to extend symmetrically:

function numberPlusPoint(a, b) { return new Point(a + b.x, a + b.y); }

Function.defineOperator('+', Number, Point, numberPlusPoint);

In this case you need two different functions, one for Point+Number and one for Number+Point, but in many cases the same function can be used for both:

function addPoints(a, b) { return new Point(a.x + b.x, a.y + b.y); }

Function.defineOperator('+', Point, Point, addPoints);
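
For concreteness, one possible shape of such a utility, a sketch only, assuming the lists live as plain array properties on the two constructors' prototypes:

// Sketch of Function.defineOperator: add the same function to the
// left type's 'this+' list and the right type's '+this' list, so
// dispatch succeeds only by mutual agreement.
Function.defineOperator = function (op, leftType, rightType, fn) {
  var leftKey = 'this' + op;   // e.g. 'this+'
  var rightKey = op + 'this';  // e.g. '+this'
  var lp = leftType.prototype, rp = rightType.prototype;
  if (!Array.isArray(lp[leftKey])) lp[leftKey] = [];
  if (!Array.isArray(rp[rightKey])) rp[rightKey] = [];
  lp[leftKey].push(fn);   // left-hand side agrees to fn
  rp[rightKey].push(fn);  // right-hand side agrees to fn
};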

This approach is completely symmetric in the two operands. Operator dispatch is fairly similar to ordinary method dispatch and doesn't rely on a type system. Programmers don't have to manually implement double dispatch. It may be a bit more work to do the lookup, but on the other hand it is straightforward to apply inline caching, so often you'll only need to do that work once. It can be extended to deal with inheritance -- you might consider all operators up through the scope chain and pick the most specific one based on how far up the scope chain you had to look.

It may look a bit foreign but I think it has a lot of nice properties. Comments?

# Mark S. Miller (15 years ago)

On Sun, Jun 28, 2009 at 7:05 AM, Christian Plesner Hansen <christian.plesner.hansen at gmail.com> wrote:

Following the decimal discussion where values came up, I looked for information on that. The most substantial material I could find was Mark's operator overloading strawman.

I think the current strawman has a number of weaknesses. It relies on the (hypothetical) type system which makes it fundamentally different from normal method dispatch.

Are you referring to esdiscuss/2009-January/008535? It starts with:

Define a new nominal type named, let's say, "Operable". (I'm not stuck on this name. Perhaps "Numeric"?) We don't yet know what nominal type system Harmony will have, so for concreteness, and to separate issues, let's use "instanceof" as our nominal type test, where only functions whose

[...suggested restrictions on when this test applies, but all without presuming any "type"s beyond what ES5 already supports...]

I'm glad you started with one of the motivating problem cases: "+". The reason I conditioned the new overloadings on Operable is to handle the "+" case, which, to be upwards compatible, must continue to .toString() on legacy non-number objects. One of Sam's earlier decimal proposals ran aground on this "+" problem. IIUC, your proposal could deal with the legacy "+" problem by duck typing on "this+" and "+this"? This may be adequate.
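
A sketch of that duck-typed fallback, reusing the dispatchPlus sketch from the first message; this is hypothetical, since the original proposal doesn't spell out the legacy path:

// Sketch: keep legacy '+' semantics (ToPrimitive, then concatenation
// or numeric addition) whenever either operand lacks the operator lists.
function plus(a, b) {
  if (a != null && Array.isArray(a['this+']) &&
      b != null && Array.isArray(b['+this'])) {
    return dispatchPlus(a, b); // the symmetric dispatch sketched earlier
  }
  return a + b; // legacy behavior: non-number objects still .toString()
}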

My strawman was written without any concept of value types at the time. In committee, Waldemar then raised the "===" issue. "===" currently has the great virtues that it is a reliable equality test that does not run user code during the comparison, and whose veracity does not depend on user code. NaN and -0 aside, if x === y, then they are observably indistinguishable. Any occurrence of that value of x in a computation state can be substituted for any occurrence of that value of y in that computation state without affecting the semantics of that state. We did not want to lose those virtues, so we do not want to allow user-defined abstractions to overload ===. Neither do we want

Point(3, 5) === Point(3, 5)

to be false. AFAIK, the only way to reconcile these goals is to introduce value types. In earlier internal email I summarized the value-type thinking as:

  • start with something like my operator-overloading-by-double-dispatch proposal,
  • but tie the ability to respond to operators to being a value -- an instance of a value type/class.
  • A value is necessarily frozen and compares "===" based on cycle-tolerant pairwise "===" comparison of named properties [missing from earlier email: as well as [[Prototype]] and [[Class]] properties] (see the sketch after this list).
  • Therefore, a value is indistinguishable from a shallow copy of the same value. Strings are a precedent.
  • Values would be able to overload "==" but not "===".
  • "===" would remain reliable and not cause user code to run during the comparison.

It seems to me that the Operable-test and value-type issues are orthogonal to the core suggestion of your email: whether operator dispatch is based on Smalltalk-like double dispatch, in which the left side is asymmetrically in charge, or on mutual agreement which is indeed symmetric. The first bullet could be changed to your proposal without affecting the rest of the value-type issue.

I note that your symmetric suggestion avoids the problem of most other symmetric overloading systems, like Cecil, of diffusion of responsibility. Since yours only proceeds under mutual agreement, both operands are responsible for the result. Unfortunately, this means that new libraries like Point, in order to allow "pt + number" or "number + pt", must in practice monkey patch Number.prototype to know about these new overloads. Should Number.prototype in that frame (global object context) already be frozen to prevent such monkey patching, then this Point extension can no longer be loaded into that frame.

By comparison, the cool thing about double dispatch is that new types can arrange to work with old types without monkey patching, given only that the old types engage in the generic enabling default behavior and that the new types have explicit cases for the old types it knows about and wishes to work with. Might there be a way to combine the modular extension virtues of double-dispatch with the symmetric-responsibility virtues of your proposal? I don't know. But if so, that would be awesome!

# Christian Plesner Hansen (15 years ago)

I note that your symmetric suggestion avoids the problem of most other symmetric overloading systems, like Cecil, of diffusion of responsibility. Since yours only proceeds under mutual agreement, both operands are responsible for the result. Unfortunately, this means that new libraries like Point, in order to allow "pt + number" or "number + pt", must in practice monkey patch Number.prototype to know about these new overloads. Should Number.prototype in that frame (global object context) already be frozen to prevent such monkey patching, then this Point extension can no longer be loaded into that frame.

Can you give me some context on freezing built-ins? What's the purpose? If it is to prevent the behavior of numbers to change under people's feet then you might argue that preventing libraries from changing the behavior of '+' is a good thing.

Note that adding an operator doesn't actually have to change Number.prototype -- what it changes is the list of operator functions. You could allow that even if Number.prototype is frozen. It would still be "monkey patching" but if you use something less general than a list, say an OperatorCollection that you could only add to, maybe it would be okay. Again, I don't know the exact purpose of freezing types this way so it's hard for me to tell if that solves the problem.
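
For illustration, such an add-only OperatorCollection might look like this (the name and shape are hypothetical, taken only from the paragraph above):

// Sketch: an add-only collection of operator functions. Entries can
// be added but never removed or replaced, so a frozen built-in could
// still gain operator entries without its existing behavior changing.
function OperatorCollection() {
  var fns = [];
  this.add = function (fn) {
    if (typeof fn !== 'function') throw new TypeError('not a function');
    fns.push(fn);
  };
  this.toArray = function () { return fns.slice(); }; // snapshot, not live
}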

# Mark S. Miller (15 years ago)

On Sun, Jun 28, 2009 at 2:51 PM, Christian Plesner Hansen <christian.plesner.hansen at gmail.com> wrote:

Can you give me some context on freezing built-ins? What's the purpose? If it is to prevent the behavior of numbers to change under people's feet

yes

then you might argue that preventing libraries from changing the behavior of '+' is a good thing.

For objects that already have a '+' behavior, yes. The issue isn't only built-ins and operators. If library L1 defines constructor C1 that makes objects that respond in a certain way when their foo method is called, then it is monkey patching if library L2 loaded later modifies C1.prototype to alter the foo behavior of these instances. L1 may or may not be designed in the expectation that other libraries will monkey patch it. If it wishes to prevent such monkey patching, it can do so by freezing, say, C1 and C1.prototype.

However, if L2 instead creates distinct objects that respond to foo in a different way, that is not a monkey patch of L1. Hence the introduction of a distinct Operable in the original proposal.

Note that adding an operator doesn't actually have to change Number.prototype -- what it changes is the list of operator functions. You could allow that even if Number.prototype is frozen. It would still be "monkey patching" but if you use something less general than a list, say an OperatorCollection that you could only add to, maybe it would be okay. Again, I don't know the exact purpose of freezing types this way so it's hard for me to tell if that solves the problem.

Interesting. This is worth exploring.

# Brendan Eich (15 years ago)

On Jun 28, 2009, at 1:24 PM, Mark S. Miller wrote:

I note that your symmetric suggestion avoids the problem of most
other symmetric overloading systems, like Cecil, of diffusion of
responsibility.

"Diffusion" sounds like a problem, a bad thing, but consider (I've
quoted this before) the use-case:

The generalization of receiver-based dispatch to multiple dispatch
provides a number of advantages. For example, multimethods support
safe covariant overriding in the face of subtype polymorphism,
providing a natural solution to the binary method problem [Bruce et
al. 1995; Castagna 1995]. More generally, multimethods are useful
whenever multiple class hierarchies must cooperate to implement a
method’s functionality. For example, the code for handling an event in
an event-based system depends on both which event occurs and which
component is handling the event.

MultiJava: Design Rationale, Compiler Implementation, and Applications Curtis Clifton, Todd Millstein, Gary T. Leavens, and Craig Chambers

www.cs.washington.edu/research/projects/cecil/pubs/mj-toplas.html

So there are use-cases involving responsible classes A and B (event
source and sink), no third parties. But there are also the third party
cases you're thinking of. The last time this came up (esdiscuss/2009-January/008715), I wrote:

Over and over, in Mozilla code and especially on the web, we've seen
"code mashups" where one does not always have the money, or even the
ability, to monkey-patch class A or class B to understand each other.
Wrapping can be done to make a class C with operators, at some cost.
Why this is always a feature and never a bug is not clear, and
Chambers, et al. have researched a fix to it, viewing it as a bug to
motivate their research.

I can't find a reply from you on this :-), so let me take this
opportunity to ask: how does requiring a third party to create a
wrapper class C discharge undiffused responsibility in any useful way?

A and B still are not involved in the wrapping done by class C through
which A and B can become (wrapped) operands to an operator defined by
C. Often the creation of such a wrapper amounts to expending effort to
write tedious boilerplate that has non-trivial runtime and space
overhead.

This may be a "religious" point over which we shouldn't argue --
Waldemar suggested that it was a difference of "points of view" at the
TC39 meeting. But it seems to me taking one such point of view ought
not knock down multimethods, at least not without more precision about
what exactly is the problem with "diffusion of responsibility" that
they bring.

# Mark S. Miller (15 years ago)

On Mon, Jun 29, 2009 at 11:21 AM, Brendan Eich <brendan at mozilla.com> wrote:

On Jun 28, 2009, at 1:24 PM, Mark S. Miller wrote:

I note that your symmetric suggestion avoids the problem of most other symmetric overloading systems, like Cecil, of diffusion of responsibility.

"Diffusion" sounds like a problem, a bad thing, but consider (I've quoted this before) the use-case:

The generalization of receiver-based dispatch to multiple dispatch provides a number of advantages. For example, multimethods support safe covariant overriding in the face of subtype polymorphism, providing a natural solution to the binary method problem [Bruce et al. 1995; Castagna 1995]. More generally, multimethods are useful whenever multiple class hierarchies must cooperate to implement a method’s functionality. For example, the code for handling an event in an event-based system depends on both which event occurs and which component is handling the event.

Let's try a reductio ad absurdum. It seems to me that the argument you're making applies just as well to

w.foo(x)

or

bar(y,z)

In conventional oo reasoning, the first expression is making a request to the object bound to w. The second is making a request to the function bound to bar. In both cases, the requested object is responsible for deciding what the next step is in responding to that request. The veracity of the result is according to the responsible object. If there's a bug in this result, the responsible object may not itself be the one that is buggy. However, the blame chain starts there.

What if w doesn't respond to a foo request? Currently our choices are

  1. rewrite our first expression so that it no longer asks w to foo.
  2. modify w or w.[[Prototype]] or something so that w knows how to foo.
     2a) Get the provider/author of w to do this and release a new version
     2b) Fork or rewrite the source for w
     2c) Monkey patch w or w.[[Prototype]] from new module M2
  3. Wrap the original w object with a foo-responding decorator.
     3a) Conventional decoration, where other requests are manually forwarded
     3b) In JS, one can decorate by prototype inheritance
     3c) If we ever have catchalls, one might be able to use them for decoration

All the options above except for #2c maintain the locality of responsibility that makes objects so pleasant to reason about. 2c has come to be regarded as bad practice. I suggest that one of the main reasons why is that it destroys this locality. w shouldn't be held responsible if it can't be responsible. One of the great things about Object.freeze is that w's provider can prevent 2c on w.

The Cecil-style operator overloading argument, extended to this example, would enable a fourth option

  4. Allow module M2 to say how w should respond to foo.

I grant that #4 is not as bad as #2c. But does anyone care to argue that #4 would be a good thing in general? If not, why would #4 be ok for operators but not method or function calls?

# Brendan Eich (15 years ago)

On Jun 29, 2009, at 11:55 AM, Mark S. Miller wrote:

On Mon, Jun 29, 2009 at 11:21 AM, Brendan Eich <brendan at mozilla.com>
wrote: On Jun 28, 2009, at 1:24 PM, Mark S. Miller wrote:

I note that your symmetric suggestion avoids the problem of most
other symmetric overloading systems, like Cecil, of diffusion of
responsibility.

"Diffusion" sounds like a problem, a bad thing, but consider (I've
quoted this before) the use-case:

The generalization of receiver-based dispatch to multiple dispatch
provides a number of advantages. For example, multimethods support
safe covariant overriding in the face of subtype polymorphism,
providing a natural solution to the binary method problem [Bruce et
al. 1995; Castagna 1995]. More generally, multimethods are useful
whenever multiple class hierarchies must cooperate to implement a
method’s functionality. For example, the code for handling an event
in an event-based system depends on both which event occurs and
which component is handling the event.

Let's try a reductio ad absurdum.

This doesn't settle anything since there is no "for all" claim in
Chambers, et al.'s words cited above. You can't reduce to an absurdity
something that according to its own proponents does not apply in all
cases.

It seems to me you are the one making a universal claim, for OOP
methods as the one true way, which could be argued to lead to absurd
(or just awkward) conclusions such as double dispatch (hard for V8,
easy for TraceMonkey :-P) and "reverse+", "reverse-", etc. operator-method bloat.

It seems to me that the argument you're making applies just as well to

w.foo(x)

or

bar(y,z)

In conventional oo reasoning, the first expression is making a
request to the object bound to w. The second is making a request to
the function bound to bar. In both cases, the requested object is
responsible for deciding what the next step is in responding to that
request. The veracity of the result is according to the responsible
object. If there's a bug in this result, the responsible object may
not itself be the one that is buggy. However, the blame chain starts
there.

Sure, but reduction to an absurdity doesn't tell us which of the two
diffuses responsibility or why that's bad, or if it's bad how the cost
trumps other concerns.

Functions have their place, so does OOP. But OOP does not mean
functions are obsolete -- consider Math.atan2, a function (not a
method -- ignore the Math poor man's namespace!) taking two parameters
x and y. Either x or y could be blame-worthy for some call (signed
zero has a use-case with atan2, the numerics hackers tell me), but
there's no single receiver responsible for computing the result given
the other parameter as an argument. atan2 is a function, arguably
similar to a division operator.

"Blame chain" is a good phrase, and of course we've been tracking PLT
Scheme Contracts for a while and are still keeping them in mind for
some harmonious future edition. But if responsibility is about blame,
there are many ways to skin that cat. And there are other costs
competing with the cost of weak or wrongful blame.

Since JS has functions, not only methods, we have benefits
countervailing the cost of having to blame one of two or more actual
parameters when analyzing for responsibility. bar(y, z) is tenable in
JS interface design, depending on particulars, sometimes competitive
with w.foo(x), occasionally clearly better.

What if w doesn't respond to a foo request? Currently our choices are

  1. rewrite our first expression so that it no longer asks w to foo.
  2. modify w or w.[[Prototype]] or something so that w knows how to foo.
     2a) Get the provider/author of w to do this and release a new version
     2b) Fork or rewrite the source for w
     2c) Monkey patch w or w.[[Prototype]] from new module M2
  3. Wrap the original w object with a foo-responding decorator.
     3a) Conventional decoration, where other requests are manually forwarded
     3b) In JS, one can decorate by prototype inheritance
     3c) If we ever have catchalls, one might be able to use them for decoration

All the options above except for #2c maintain the locality of responsibility that makes objects so pleasant to reason about.

Sorry, but you are assuming your conclusion -- the locality of
responsibility and its pleasantness in all cases, over against other
costs, is precisely the issue.

Experience on the web includes enough cases where third parties have
to mate two abstractions in unforeseen ways. We seem to agree on that
much. But your 1-3 do not prove themselves superior to 4
(multimethods, cited below) without some argument that addresses all
the costs, not just blame-related costs.

2c has come to be regarded as bad practice. I suggest that one of
the main reasons why is that it destroys this locality. w shouldn't
be held responsible if it can't be responsible. One of the great
things about Object.freeze is that w's provider can prevent 2c on w.

Assuming the conclusion ("One of the great things..."), I think --
although the argument seems incomplete ("one of", and no downside
analysis).

Many JS users have already given voice (see ajaxian.com) to fear that
freeze will be overused, making JS brittle and hard to extend,
requiring tedious wrapping and delegating. This is a variation on
Java's overused final keyword. Object.freeze, like any tool, can be
misused. There are good use-cases for freeze, and I bet a few bad
ones, but in any case with freeze in the language, monkey-patching is
even harder to pull off over time.

That leaves forking or wrapping in practice. Why forking or wrapping
should always win based on responsibility considerations, if one could
use a Dylan or Cecil multimethod facility, is not at all clear.
Forking has obvious maintenance and social costs. Wrapping has runtime
and space costs as well as some lesser (but not always trivial)
maintenance cost.

So why shouldn't we consider, just for the sake of argument, dispatch
on the types of arguments to one of several functions bound to a given
operator, in spite of the drawback (or I would argue, sometimes,
because of the benefit) of the responsibility being spread among
arguments -- just as it is for Math.atan2's x and y parameters?

The Cecil-style operator overloading argument, extended to this
example, would enable a fourth option

  4. Allow module M2 to say how w should respond to foo.

I grant that #4 is not as bad as #2c. But does anyone care to argue
that #4 would be a good thing in general?

Sure, and I've given references to some of the papers. There are
others, about Dylan as well as Cecil. Here's a talk that considers
"design patterns" (double dispatch with "reverse-foo" operators are
pattern-y) to be workarounds for bugs in the programming language at
hand:

www.norvig.com/design-patterns

Complete with multimethods!

If not, why would #4 be ok for operators but not method or function
calls?

Straw man, happy to knock it down. I never said your item 4 wasn't ok
for methods and function calls -- operators are the syntactic special
form here, the generalization is either to methods (double-dispatched)
or functions (generic functions, operator multimethods).

The ES4 proposal (proposals:generic_functions) for generic functions/methods indeed subsumed the operators proposal
that preceded it. I'm not trying to revive it wholesale, please rest
assured. But I am objecting to circular (or at best incomplete)
arguments for dyadic operators via double dispatch of single-receiver
methods.

As I've written before, if double dispatch is the best we can agree on
for Harmony, then I'm in. But we should be able to agree on the
drawbacks as well as the benefits, and not dismiss the former or
oversell the latter.

# P T Withington (15 years ago)

On 2009-06-30, at 01:26EDT, Brendan Eich wrote:

www.norvig.com/design-patterns

+∞

The flip side of the "diffusion of responsibility" is the "solipsism
of encapsulation", which falls down as soon as there is more than one
self (or this :P).

# Brendan Eich (15 years ago)

On Jun 28, 2009, at 7:05 AM, Christian Plesner Hansen wrote:

I'd like to propose an alternative approach that avoids these problems. You could call it "symmetric" operator overloading. When executing the '+' operator in 'a + b' you do the following (ignoring inheritance for now):

1: Look up the property 'this+' in a, call the result 'oa'
2: If oa is not a list give an error, no '+' operator
3: Look up the property '+this' in b, call the result 'ob'
4: If ob is not a list give an error, no '+' operator
5: Intersect the lists oa and ob, call the result r
6: If r is empty give an error, no '+' operator
7: If r has more than one element give an error, ambiguity
8: If r[0], call it f, is not a function give an error
9: Otherwise, evaluate f(a, b) and return the result.

This reinvents asymmetric multimethod dispatch without any inheritance
or subtype specificity, i.e. with single-element class precedence
lists and therefore no need to judge best-fit or most-specific method
among the lists. You just pick the one that matches, and it's an error
if zero or more than one match. Lots of references from the nineties
to relevant work, e.g.,

www.laputan.org/reflection/Foote-Johnson-Noble-ECOOP-2005.html

under "Design: Symmetric vs. Encapsulated Multimethods", which leads
to the [Boyland 1997, Castagna 1995, Bruce 1995] refs. The ES4 generic
functions proposal has refs at the bottom too. It's necessary to put
on your subclass/subtype-filtering glasses when reading, of course,
but then the image resolves into very much what you propose.

Also, if you know Python and haven't seen this yet, have a look at:

www.artima.com/weblogs/viewpost.jsp?thread=101605

There are differences to debate between symmetric and asymmetric or
encapsulated multimethods, and we can defer the CPL until (and it's
not for sure) we have some user-extensible notion of subtyping, but I
want to stress that the multimethod cat is out of the bag here. I like
the gist of your proposal, it is a round-enough wheel. ;-)

Whether it should be extended to non-operator methods is something
else we can debate, or defer. I claim there's nothing special about
operators, and given other numeric types one might very well want
atan2 multimethods.

# Allen Wirfs-Brock (15 years ago)

Pulling back from Mark's and Brendan's programming language metaphysics discussion (which I think is interesting and useful), I just wanted to say that I find Christian's proposal quite interesting. It feels like a natural fit to the language as it exists today and doesn't require any new abstraction concepts such as classes, nominal types, value types, etc., although it could be made compatible with any of them. It has a simple conceptual explanation: an operator works with two objects if they both associate the same function with that operator.

It may or may not be the ultimate solution, but I think it deserves serious consideration.

I do have a few thoughts and questions on the proposal as outlined:

I wouldn't want to describe or specify these per-object (or per-prototype) operator lists as properties. Too much baggage comes with that concept. It'd just make them another internal characteristic of objects. As already mentioned, if you think of them this way there shouldn't be any interactions with Object.freeze or any other existing semantics other than whatever we might choose to explicitly define.

It's too bad that the "this+" and "+this" lists can't be merged, but keeping them distinct seems essential to making this scheme work. However, it also complicates the simple explanation.

I assume that your example really should be defining the operators on prototype objects. E.g.: Function.defineOperator('+', Point.prototype, Number.prototype, pointPlusNumber);

In order to emphasize the essential difference between "this+" and "+this", I might order the arguments like: Function.defineOperator(Point.prototype, '+', Number.prototype, pointPlusNumber);

It seems like there are probably cases where you would only want to define a "this+" or "+this" binding and not both.

defineOperator probably should be a method on Object and there probably needs to be corresponding introspection methods. This sort of usability engineering can be worked out if this proposal gains traction.

# Brendan Eich (15 years ago)

On Jun 30, 2009, at 10:35 AM, Allen Wirfs-Brock wrote:

Pulling back from Mark's and Brendan's programming language
metaphysics discussion (which I think is interesting and useful) I
just wanted to say that I find Christian's proposal quite
interesting. It feels like a natural fit to the language as it
exists today and doesn't require any new abstraction concepts such
as classes, nominal types, value types, etc., although it could be
made compatible with any of them. It has a simple conceptual
explanation: an operator works with two objects if they both
associate the same function with that operator.

It may or may not be the ultimate solution, but I think it deserves
serious consideration.

We can entertain strawman UI but I hope to focus on substance not
surface.

Costs that seem to leap out from Christian's mutual-asymmetric
multimethod proposal:

  1. Two lookups, "this+" and "+this", required.
  2. Intersection.

(2) looks like a transformation or simplification of standard
multimethod Class Precedence List usage to find the most specific
method. Any system allowing multiple methods for a given operator will
have something like it.

(1) seems avoidable, though -- assuming it is not a phantom cost. We
have to deal with the symmetric vs. encapsulated issue, though. But
notice that Christian's proposal does not use "this" in the method
bodies!

function pointPlusNumber(a, b) { return new Point(a.x + b, a.y + b); }
function numberPlusPoint(a, b) { return new Point(a + b.x, a + b.y); }
function addPoints(a, b) { return new Point(a.x + b.x, a.y + b.y); }

These are nice generic functions. With type annotations or guards, you
could imagine them as adding to generic function "+":

generic function "+"(a :Point, b :Point) { return new Point(a.x + b.x, a.y + b.y); }

Syntax is not the point, please hold fire and spare the bikeshed --
the unencapsulated (no |this| or |self|) nature of these functions is
what I'm stressing.

So why require "this+" and "+this" lookups?

If the reason is to assign responsibility to the different
constructors [*], then how can you tell who is responsible?

Function.defineOperator in the proposal binds both "this+" and
"+this". I agree these should not be plain old properties. But then
why spec them as named internal properties of any kind?

Suppose we had a generic function "+" (syntax TBD) to which methods
could be added for particular non-conflicting pairs of operand types.
Then the lookup would be for two types A and B, but this can be
optimized under the hood. See, e.g.,

citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.4799

Seems worth allowing this kind of optimization. Whether and how users
add multimethods we can debate at greater length. But can we agree
that the "this+" and "+this" properties are not necessary?

/be

[*] or constructor.prototypes in your reply, but I don't see why we
would require noisy ".prototype" suffixing -- instanceof does not!

# P T Withington (15 years ago)

On 2009-06-30, at 14:06EDT, Brendan Eich wrote:

These are nice generic functions. With type annotations or guards,
you could imagine them as adding to generic function "+":

generic function "+"(a :Point, b :Point) { return new Point(a.x + b.x, a.y + b.y); }

Syntax is not the point, please hold fire and spare the bikeshed --
the unencapsulated (no |this| or |self|) nature of these functions
is what I'm stressing.

+∞ again

If the reason is to assign responsibility to the different
constructors [*]

I don't understand the "assign responsibility" argument. Is this
about aligning (conflating?) security/access-control with classes/instances?

Isn't a generic function a class that you could assign responsibility
to?

# Allen Wirfs-Brock (15 years ago)

From: Mark S. Miller ... What if w doesn't respond to a foo request? Currently our choices are

...

2c) Monkey patch w or w.[[Prototype]] from new module M2 ...

All the options above except for #2c maintain the locality of responsibility that makes objects so pleasant to reason about. 2c has come to be regarded as bad practice. I suggest that one of the main reasons why is that it destroys this locality. w shouldn't be held responsible if it can't be responsible. One of the great things about Object.freeze is that w's provider can prevent 2c on w.

From: Brendan Eich ... Many JS users have already given voice (see ajaxian.com) to fear that freeze will be overused, making JS brittle and hard to extend, requiring tedious wrapping and delegating. This is a variation on Java's overused final keyword. Object.freeze, like any tool, can be misused. There are good use-cases for freeze, and I bet a few bad ones, but in any case with freeze in the language, monkey-patching is even harder to pull off over time.

I believe that the original experience base with this style of monkey patching was in Smalltalk, where it proved to be both highly useful and also gained a reputation as a bad practice. We need to look at both sides of that equation.

Smalltalk programming was largely about reuse, and monkey patching was highly useful because when combining (reusing) independently created implementation "modules" into a larger application you often need to pragmatically do some tweaking around the edges of the modules in order to make them fit together. Even if you have access to the source of each module (which you typically did in Smalltalk), it was still better (more maintainable) to package the necessary integration code as part of the consuming application module rather than creating modified versions of the pre-existing modules that incorporate the integration code.

The "bad practice" rap came about because in the original Smalltalk environments multiple conflicting occurrences of such monkey patching applied on the same classes would silently stomp on each other and you would get some inconsistent combination of behaviors that were difficult to test for and hence caused downstream errors. Patches that conflict with each other are fundamentally incompatible and require human intervention to resolve. Digitalk's Team/V solved this problem by treating such monkey patching as being declarative in nature and detecting any conflicting monkey patches at integration time (load time, build time, whatever..). Such a conflict were treated as a "hard error" (comparable to a syntax error). With this mechanism developers could monkey patch to their hearts content, "mashing up" all sorts of independently created code but as soon as the patches stated stepping on each other they found out and could fix the problem.

I've often remarked that to me, the web feels like one big persistent Smalltalk virtual image. It's being continually extended and modified but it can never be recreated from scratch or redesigned as a whole. In such an environment you have to evolve in place, and sometimes that means dynamically patching what is already there in order to make it work with something new or to use it for some new purpose. Web technologies need to be dynamic, flexible, and forgiving in order to accommodate this sort of evolution. We can provide mechanisms to help developers detect and deal with common failure scenarios, but if the web doesn't stay flexible enough to allow for the possibility of such failures then it won't be able to evolve and will eventually be replaced by something that can.

# Mark S. Miller (15 years ago)

On Mon, Jun 29, 2009 at 10:26 PM, Brendan Eich <brendan at mozilla.com> wrote:

On Jun 29, 2009, at 11:55 AM, Mark S. Miller wrote:

Let's try a reductio ad absurdum.

This doesn't settle anything since there is no "for all" claim in Chambers, et al.'s words cited above. You can't reduce to an absurdity something that according to its own proponents does not apply in all cases.

A fine counterargument if you can distinguish which cases it does and doesn't apply to. The syntactic distinction "operators" seems silly. I'm glad you seem to include Math.atan2(), as that gets us beyond the silly syntactic distinction.

It seems to me you are the one making a universal claim, for OOP methods as the one true way, which could be argued to lead to absurd (or just awkward) conclusions such as double dispatch (hard for V8, easy for TraceMonkey :-P) and "reverse+", "reverse-", etc. operator-method bloat.

Contradicted by the next text of mine you quote:

It seems to me that the argument you're making applies just as well to

w.foo(x)

or

bar(y,z)

I include the bar(y,z) as an example of a case against multimethods. Since you include Math.atan2() in the case for multimethods, we can focus on it, and ignore either operators or method calls.

In conventional oo reasoning, the first expression is making a request to the object bound to w. The second is making a request to the function bound to bar. In both cases, the requested object is responsible for deciding what the next step is in responding to that request. The veracity of the result is according to the responsible object. If there's a bug in this result, the responsible object may not itself be the one that is buggy. However, the blame chain starts there.

Sure, but reduction to an absurdity doesn't tell us which of the two diffuses responsibility or why that's bad, or if it's bad how the cost trumps other concerns.

Functions have their place, so does OOP. But OOP does not mean functions are obsolete -- consider Math.atan2, a function (not a method -- ignore the Math poor man's namespace!) taking two parameters x and y.

Good. Hereafter, this example is simply atan2(x,y).

Either x or y could be blame-worthy for some call (signed zero has a use-case with atan2, the numerics hackers tell me), but there's no single receiver responsible for computing the result given the other parameter as an argument. atan2 is a function, arguably similar to a division operator.

No. This is the crux. atan2 is a variable bound to a value in some lexical scope. In different scopes it may be bound to different values. In the expression atan2(x,y), the responsible party is the atan2 function. It decides the next step in processing this request, and decides in what way it will subcontract parts of the job to its x and y arguments.

"Blame chain" is a good phrase, and of course we've been tracking PLT Scheme Contracts for a while and are still keeping them in mind for some harmonious future edition. But if responsibility is about blame, there are many ways to skin that cat. And there are other costs competing with the cost of weak or wrongful blame.

Since JS has functions, not only methods, we have benefits countervailing the cost of having to blame one of two or more actual parameters when analyzing for responsibility. bar(y, z) is tenable in JS interface design, depending on particulars, sometimes competitive with w.foo(x), occasionally clearly better.

I was not arguing that w.foo(x) is always better than bar(y,z). Rather, I was saying that currently neither violates locality of responsibility. In these two expressions, w and bar are the respective responsible parties.

What if w doesn't respond to a foo request? Currently our choices are

  1. rewrite our first expression so that it no longer asks w to foo.
  2. modify w or w.[[Prototype]] or something so that w knows how to foo.
     2a) Get the provider/author of w to do this and release a new version
     2b) Fork or rewrite the source for w
     2c) Monkey patch w or w.[[Prototype]] from new module M2
  3. Wrap the original w object with a foo-responding decorator.
     3a) Conventional decoration, where other requests are manually forwarded
     3b) In JS, one can decorate by prototype inheritance
     3c) If we ever have catchalls, one might be able to use them for decoration

All the options above except for #2c maintain the locality of responsibility that makes objects so pleasant to reason about.

Sorry, but you are assuming your conclusion -- the locality of responsibility and its pleasantness in all cases, over against other costs, is precisely the issue.

Experience on the web includes enough cases where third parties have to mate two abstractions in unforeseen ways. We seem to agree on that much. But your 1-3 do not prove themselves superior to 4 (multimethods, cited below) without some argument that addresses all the costs, not just blame-related costs.

Were we to adopt multimethods, where atan2 somehow gets enhanced by all imported modules defining overloadings of atan2 in that scope, what is the value of the atan2 variable in that scope? If it is a function composed of all lexical contributors, what happens when this function is passed as a value and then used in a different scope?

What about a global atan2 (as would seem to be suggested by the Math.atan2 example you started with)? Would modules dynamically loaded into the same global environment be able to dynamically enhance the value already bound to the atan2 variable, changing the behavior of the atan2 value that may have already been passed to other scopes? Presumably this would fail if atan2 is frozen, as it should, since such mutation monkey patches the atan2 value.

Or would it bind a new value to the atan2 variable, where that new value has the new behavior? Presumably this would fail if atan2 is const, as it should.

2c has come to be regarded as bad practice. I suggest that one of the main reasons why is that it destroys this locality. w shouldn't be held responsible if it can't be responsible. One of the great things about Object.freeze is that w's provider can prevent 2c on w.

Assuming the conclusion ("One of the great things..."), I think -- although the argument seems incomplete ("one of", and no downside analysis).

Please reread the quote above. I agree that the argument I present here for Object.freeze is incomplete. Since the purpose of this thread is not to decide whether Object.freeze is or is not a good idea, I did not feel it necessary to enumerate all the arguments on either side of this question. But neither am I assuming my conclusion. I am simply enumerating one of these arguments -- that Object.freeze is good because it enables w's provider to be responsible for the behavior for which we'd like to hold it responsible. You may disagree with this argument, but I don't see how it assumes its conclusion.

Many JS users have already given voice (see ajaxian.com) to fear that freeze will be overused, making JS brittle and hard to extend, requiring tedious wrapping and delegating.

I couldn't quickly find these arguments. Could you provide some links? Thanks.

This is a variation on Java's overused final keyword.

Vastly underused IMO, due to syntactic overhead. In the early E language, it was easier to declare a variable let-like than const-like. One of the best changes we made was to reverse this. E variables are now const-like unless stated otherwise. Everyone, whether concerned with security or not, has found this to be a pleasant change to the language. When reading code and wondering about a variable, once you see that it's declared const-like, you can avoid many follow-on questions.

Object.freeze, like any tool, can be misused. There are good use-cases for freeze, and I bet a few bad ones,

Of course. Good things are good and bad things are bad ;).

but in any case with freeze in the language, monkey-patching is even harder to pull off over time.

As goto and pointer arithmetic became harder in earlier language evolution. I sure hope so. Especially for involuntary monkey patching, which could be seen as an attack from the victim's perspective.

That leaves forking or wrapping in practice. Why forking or wrapping should always win based on responsibility considerations, if one could use a Dylan or Cecil multimethod facility, is not at all clear. Forking has obvious maintenance and social costs. Wrapping has runtime and space costs as well as some lesser (but not always trivial) maintenance cost.

Forking is indeed quite costly and should be avoided when there's a less bad alternative. Wrapping (or decorating) may or may not be a good pattern to use depending on the particulars.

So why shouldn't we consider, just for the sake of argument, dispatch on the types of arguments to one of several functions bound to a given operator, in spite of the drawback (or I would argue, sometimes, because of the benefit) of the responsibility being spread among arguments -- just as it is for Math.atan2's x and y parameters?

We are considering it. That's what this discussion is about. But we need to separate two kinds of considerations:

  1. In a language in which all these abstraction mechanisms are present and used, which one should we use for a given problem? This will often be the kind of tradeoff you are explaining, where sometimes monkey patching or multimethods win. Indeed, the Cajita runtime library as implemented on ES3 internally monkey patches some of the ES3 primordials. Were I programming in Common Lisp and trying to make use of preexisting libraries, I'm sure I'd use multimethods and packages. OTOH, Common Lisp's bloat and incoherence are among the reasons I've always avoided it.

  2. In a language with an adequate set of good abstraction mechanisms and already burdened by a plethora of bad or broken abstraction mechanisms, when is it worth adding a new abstraction mechanism?

The bar on #2 must be much higher than #1. If the language in the absence of the proposed abstraction mechanism already has some pleasant formal properties that the addition of the new mechanism would destroy, then the loss of this property must be added as a cost in assessing #2. But not #1, since that battle is already lost.

The Cecil-style operator overloading argument, extended to this example, would enable a fourth option

  4. Allow module M2 to say how w should respond to foo.

I grant that #4 is not as bad as #2c. But does anyone care to argue that #4 would be a good thing in general?

Sure, and I've given references to some of the papers. There are others, about Dylan as well as Cecil. Here's a talk that considers "design patterns" (double dispatch with "reverse-foo" operators are pattern-y) to be workarounds for bugs in the programming language at hand:

www.norvig.com/design-patterns

Complete with multimethods!

Well, Peter's my boss, so I concede the case. Let's add multimethods. Just kidding!

Actually, thanks for the link. I hadn't seen it before, and it is a nice presentation.

If not, why would #4 be ok for operators but not method or function calls?

Straw man, happy to knock it down.

Huh?

I never said your item 4 wasn't ok for methods and function calls -- operators are the syntactic special form here, the generalization is either to methods (double-dispatched) or functions (generic functions, operator multimethods).

The ES4 proposal (proposals:generic_functions) for generic functions/methods indeed subsumed the operators proposal that preceded it. I'm not trying to revive it wholesale, please rest assured. But I am objecting to circular (or at best incomplete) arguments for dyadic operators via double dispatch of single-receiver methods.

As I've written before, if double dispatch is the best we can agree on for Harmony, then I'm in. But we should be able to agree on the drawbacks as well as the benefits, and not dismiss the former or oversell the latter.

I certainly agree that it's good to have a more complete enumeration of the pros and cons. I think we largely agree on many of these but disagree on the weights. JavaScript today has adequate support for modularity and abstraction, and is mostly understandable. ES5/strict even more so. Common Lisp and PL/1 never were understandable. And Common Lisp is highly immodular. I agree that ES6 should grow some compared to ES5, but I highly value retaining ES5/strict's modularity, simplicity, understandability.

# Christian Plesner Hansen (15 years ago)

Sorry about falling behind in a discussion I started. I'll try to catch up.

Are you referring to esdiscuss/2009-January/008535? It starts with:

Define a new nominal type named, let's say, "Operable". (I'm not stuck on this name. Perhaps "Numeric"?) We don't yet know what nominal type system Harmony will have, so for concreteness, and to separate issues, let's use "instanceof" as our nominal type test, where only functions whose

[...suggested restrictions on when this test applies, but all without presuming any "type"s beyond what ES5 already supports...]

Yes that's the one. The main problem I had was with the use of "instanceof". However, as I read the proposal again I've become unsure if I've read something into it that it doesn't say. The way I understood it you would implement the + operator for Point instances something like:

Point.prototype['+'] = function (that) {
  if (that instanceof Point) {
    return new Point(this.x + that.x, this.y + that.y);
  } else {
    return that['reverse+'](this);
  }
};

// Ditto 'reverse+'

Is that correct?

===. Neither do we want

Point(3, 5) === Point(3, 5)

to be false.

Don't we? I can see that we want Point(3, 5) == Point(3, 5) to be true but ===?

It seems to me that the Operable-test and value-type issues are orthogonal to the core suggestion of your email: whether operator dispatch is based on Smalltalk-like double dispatch, in which the left side is asymmetrically in charge, or on mutual agreement which is indeed symmetric. The first bullet could be changed to your proposal without affecting the rest of the value-type issue.

That's probably true, though I don't know how Operable solves the '+' problem so I couldn't say for sure.

# Christian Plesner Hansen (15 years ago)

Whether it should be extended to non-operator methods is something else we can debate, or defer. I claim there's nothing special about operators, and given other numeric types one might very well want atan2 multimethods.

I did briefly consider listing "generalizes to full multimethods" as one of the advantages but decided not to because I wasn't sure if that's a good or a bad thing...

# Mark S. Miller (15 years ago)

I certainly agree that it's good to have a more complete enumeration of the pros and cons. I think we largely agree on many of these but disagree on the weights. JavaScript today has adequate support for modularity and abstraction, and is mostly understandable. ES5/strict even more so. Common Lisp and PL/1 never were understandable. And Common Lisp is highly immodular. I agree that ES6 should grow some compared to ES5, but I highly value retaining ES5/strict's modularity, simplicity, understandability.

I withdraw the seeming comparison to Common Lisp. I was trying to make a more general point, but the implied comparison is not fair. Let's take Cecil instead, since it was designed by people with extraordinarily good taste (cc'ed), and since you recommended it as an example of multimethods done right. On your recommendation, I read it. Because of its pedigree, I really wanted to like it. In the end, I was reluctantly repelled by its complexity. If it is representative of the complexity costs of multimethods done right, I consider these costs to far outweigh the benefits. Other things being equal, I would choose not to program in Cecil.

# Mark S. Miller (15 years ago)

On Tue, Jun 30, 2009 at 1:37 PM, Christian Plesner Hansen <christian.plesner.hansen at gmail.com> wrote:

Sorry about falling behind in a discussion I started. I'll try to catch up.

Are you referring to esdiscuss/2009-January/008535? It starts with:

Define a new nominal type named, let's say, "Operable". (I'm not stuck on this name. Perhaps "Numeric"?) We don't yet know what nominal type system Harmony will have, so for concreteness, and to separate issues, let's use "instanceof" as our nominal type test, where only functions whose

[...suggested restrictions on when this test applies, but all without presuming any "type"s beyond what ES5 already supports...]

Yes that's the one. The main problem I had was with the use of "instanceof". However, as I read the proposal again I've become unsure if I've read something into it that it doesn't say. The way I understood it you would implement the + operator for Point instances something like:

Point.prototype['+'] = function (that) {
  if (that instanceof Point) {
    return new Point(this.x + that.x, this.y + that.y);
  } else {
    return that['reverse+'](this);
  }
};

// Ditto 'reverse+'

Is that correct?

Almost. It's actually a bit worse than that, since presumably, Points want to be addable with Numbers:

Point.prototype['+'] = function (that) {
  if (that instanceof Point) {
    return new Point(this.x + that.x, this.y + that.y);
  } else if (that instanceof Number) {
    // again ignoring the difference between numbers and Numbers
    return new Point(this.x + that, this.y + that);
  } else {
    return that['reverse+'](this);
  }
};

===. Neither do we want

Point(3, 5) === Point(3, 5)

to be false.

Don't we? I can see that we want Point(3, 5) == Point(3, 5) to be true but ===?

Perhaps Complex(3, 5) === Complex(3, 5) is a better example?

In any case, I could live with these being false in order to avoid adding value types to the language. On this one, I'm on the fence.

It seems to me that the Operable-test and value-type issues are orthogonal to the core suggestion of your email: whether operator dispatch is based on Smalltalk-like double dispatch, in which the left side is asymmetrically in charge, or on mutual agreement which is indeed symmetric. The first bullet could be changed to your proposal without affecting the rest of the value-type issue.

That's probably true, though I don't know how Operable solves the '+' problem so I couldn't say for sure.

Ignoring Operable, in your proposal, how would you handle the need to preserve the legacy + behavior on legacy kinds of objects? The only other possibility I foresee is duck typing on the presence of the 'this+' and/or '+this' property names. That would work.

# Mark S. Miller (15 years ago)

On Tue, Jun 30, 2009 at 1:51 PM, Christian Plesner Hansen <christian.plesner.hansen at gmail.com> wrote:

Whether it should be extended to non-operator methods is something else we can debate, or defer. I claim there's nothing special about operators, and given other numeric types one might very well want atan2 multimethods.

I did briefly consider listing "generalizes to full multimethods" as one of the advantages but decided not to because I wasn't sure if that's a good or a bad thing...

Do have a look at Cecil. Multimethods probably can't get much better than that.

# Brendan Eich (15 years ago)

On Jun 30, 2009, at 1:04 PM, Mark S. Miller wrote:

On Mon, Jun 29, 2009 at 10:26 PM, Brendan Eich <brendan at mozilla.com>
wrote: On Jun 29, 2009, at 11:55 AM, Mark S. Miller wrote:

Let's try a reductio ad absurdum.

This doesn't settle anything since there is no "for all" claim in
Chambers, et al.'s words cited above. You can't reduce to an
absurdity something that according to its own proponents does not
apply in all cases.

A fine counterargument if you can distinguish which cases it does
and doesn't apply to. The syntactic distinction "operators" seems
silly. I'm glad you seem to include Math.atan2(), as that gets us
beyond the silly syntactic distinction.

Did you check out any of the papers I linked? They talk about non-silly, not-only-about-syntax use-cases (~20% by function population, I
recall Chambers writing somewhere) that include (dyadic) operators,
math functions, event sourcing/sinking, a few other multi-party
situations with symmetric relations and no obvious primary receiver.

I don't think there was ever a silly syntactic distinction about
operators in anything I've written, recently or in past years, on this
list or elsewhere. Value types want operator and literal
extensibility. The operator part is one of N use-cases for
multimethods IMHO. That's all.

It seems to me you are the one making a universal claim, for OOP
methods as the one true way, which could be argued to lead to
absurd (or just awkward) conclusions such as double dispatch (hard
for V8, easy for TraceMonkey :-P) and "reverse+", "reverse-", etc.
operator-method bloat.

Contradicted by the next text of mine you quote:

But not by your double-dispatch proposal for operators. That's one way
to add extensible operators. The other is to add multimethods. I don't
know of a distinct and canonical third way.

Talking about good old functions doesn't help rule out multimethods,
indeed it raises them as an option since dyadic operators appear to be
functions, not receiver-targeted methods.

To be honest I wasn't sure what your point about good old bar(y, z)
was, except to say that bar was the locus of responsibility (Tucker
noted this callee-is-responsible case too). Again this does not argue
against multimethods.

Likewise blame systems are great (costly, PLT Scheme users typically
add contracts at module boundaries to reduce overhead, intra-module
calls can be fast at the cost of losing the blame-carrying wrappers),
but I don't see how they require w or bar to bear singular
responsibility.

Multimethods really are multiple functions inhabiting bar, dispatched
by a "most specific argument type" matching algorithm, which can fail
but when it succeeds, invokes a single function. That's the
responsible function, whatever bar multimethod it is. ;-)

I was not arguing that w.foo(x) is always better than bar(y,z).
Rather, I was saying that currently neither violate locality of
responsibility. In these two expressions, w and bar are the
respective responsible parties.

Ok, that's what I thought. Now consider y % z where % is defined as in
Christian's proposal, amended by Allen's suggestion that the spec not
rely on non-internal, prototype-delegated properties. I'll use
obj[[internalId]] to denote internal property access and ^^ for
intersection:

let lst :List = y[["this%"]] ^^ z[["%this"]];
if (lst.length == 0) throw TypeError("% not defined on operands");
if (lst.length != 1) throw TypeError("ambiguous % operation");
return lst[0](y, z);
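
In plain ES5 terms, with ordinary arrays standing in for the internal
lists and filter/indexOf standing in for ^^, the dispatch amounts to
something like this sketch (function and list names are illustrative
only, not a proposed API):

function dispatchMod(y, z) {
  var oa = y["this%"], ob = z["%this"];  // stand-ins for internal lookups
  if (!Array.isArray(oa) || !Array.isArray(ob))
    throw new TypeError("% not defined on operands");
  // ^^ intersection: keep the functions present in both lists
  var r = oa.filter(function (f) { return ob.indexOf(f) >= 0; });
  if (r.length === 0) throw new TypeError("% not defined on operands");
  if (r.length > 1) throw new TypeError("ambiguous % operation");
  return r[0](y, z);
}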

What party is responsible here, given that the only way any pair of
internal properties of the form y[["this%"]] and z[["%this"]] get
defined (on constructor prototypes, per Allen's followup) is by
Function.defineOperator being called with both y and z's
"types" (constructors).

So (to repeat a point from my last post) we don't actually need (or
want) internal instance properties to keep book -- we could use a
separate lexically scoped two-dimensional mapping for the % operator
indexed by "types" (see instanceof).

If the only observable difference between using internal instance
properties and using a separate lexical operator matrix is that you
claim Object.freeze(Point.prototype) should cause

Function.defineOperator('+', Point, Number, pointPlusNumber);

or similar calls passing Point fail, then I side with Allen where he
wrote:

"I wouldn't want to describe or specify these per object (or per
prototype) operator lists as properties. Too much baggage comes with
that concept. It'd just make them another internal characteristic of
objects. As already mentioned, if you think of them this way there
shouldn't be any interactions with Object.freeze or any other existing
semantics other than whatever we might choose to explicitly define."

But that's no surprise, because I'm in favor of letting Carol define
multimethods for types purveyed by Alice and Bob, where Alice and Bob
didn't foresee the combination Carol found useful.

I don't see how this is "irresponsible". If Alice and Bob freeze
prototypes, Carol could still hack wrappers, in the absence of
multimethods. If Alice and Bob don't want to provide their types to
Carol, they can hide them otherwise. Hiding != Freezing.

Were we to adopt multimethods, where atan2 somehow gets enhanced by
all imported modules defining overloadings of atan2 in that scope,
what is the value of the atan2 variable in that scope?

A multimethod.

If it is a function composed of all lexical contributors, what
happens when this function is passed as a value and then used in a
different scope?

Multimethods [*] are first-class values, sets of associated type-specific statically scoped functions. No dynamic scope, of course.

You define a multimethod, let's say with new syntax or new
Function.defineMultimethod or whatever strawman API you like (trade-offs about early error checking left for later discussion). So atan2
is not just a "good old function". It's something new, an overloaded
function that "has" multimethods (the ES4 proposal, FWIW, had new
syntax like |generic function atan2(x, y);| to get things started).

(I don't mean to belabor this point, but I do want to stress that
plain old functions can't be confused for multimethods and
unintentionally mixed together. JS replaces any previous binding when
processing a function declaration. This behavior would of course
continue, but if we add multimethods, then we have the choice to make
it an error to define a multimethod whose name is already bound to any
value, or to a plain old function, or whatever seems best.)

Anyway, code with access to the distinguished multimethod's lexical
binding then would add functions to it by denoting it in that scope or
using a reference to it passed to another function which could add
functions to the multimethod.

This is observably equivalent as far as I can tell to how Christian's
Function.defineOperator would seem to work -- except of course that
Function.defineOperator mutates internal "this+" and "+this"
properties given a first argument of '+', whereas a
Function.addMultimethod(atan2, Complex, Rational, function (c, r)
{...}) API would take an explicit reference to the multimethod being
augmented.

(EIBTI! :-P)

Finally, anyone with a reference (found in the lexical scope as the
basis case, or passed to other functions to introduce them to the
atan2 multimethod) can call atan2 on the several combinations of
argument types defined in it. Dispatch works as in Dylan or Cecil, if
we want to get fancy with subtypes; or simply as in Christian's
proposal (zero inheritance FTW ;-).

Short on time, so I should stop here and ask what seems unclear or
wrong.

/be

[*] The "method" name may mislead, but Java static methods don't have
a distinguished receiver, so we can cope. I'm using the "multimethod"
name instead of "generic method" or "generic function" -- generic
methods or generic functions cause confusion with "generics", and with
plain old JS functions that make few assumptions about argument types.

# Christian Plesner Hansen (15 years ago)

Costs that seem to leap out from Christian's mutual-asymmetric multimethod proposal:

  1. Two lookups, "this+" and "+this", required.
  2. Intersection.

(2) looks like a transformation or simplification of standard multimethod Class Precedence List usage to find the most specific method. Any system allowing multiple methods for a given operator will have something like it.

(1) seems avoidable, though -- assuming it is not a phantom cost. We have to deal with the symmetric vs. encapsulated issue, though. But notice that Christian's proposal does not use "this" in the method bodies!

The really nice thing about this from a cost standpoint is that it caches well. If changes to the operators understood by an object are reflected in the object's hidden class (to use v8 terminology) the same way changes to methods are, all the same caching techniques can be used. A global (hidden class, hidden class, operator)->function cache can be used to ensure that the full operator resolution only has to be done once for any operand type pair, after which the result can be fetched by a simple hash table lookup. Similarly, inline caching can ensure that the hash lookup only has to be done once for any monomorphic call site; megamorphic call sites, which I would expect were rare, would have to do a hash lookup but not full operator resolution.
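
To illustrate at the JavaScript level (purely illustrative -- a real implementation would key on hidden classes inside the VM, not on constructor names):

var opCache = {};  // "(type, type, operator) -> function" cache

function cachedOperatorLookup(op, a, b, resolve) {
  // constructor.name stands in for the operand's hidden class here
  var key = a.constructor.name + "|" + b.constructor.name + "|" + op;
  var f = opCache[key];
  if (f === undefined) {
    f = resolve(op, a, b);  // full operator resolution, done once per pair
    opCache[key] = f;
  }
  return f;
}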

By the way, did you say "mutual-asymmetric multimethod" :-)?

Function.defineOperator in the proposal binds both "this+" and "+this". I agree these should not be plain old properties. But then why spec them as named internal properties of any kind?

Suppose we had a generic function "+" (syntax TBD) to which methods could be added for particular non-conflicting pairs of operand types. Then the lookup would be for two types A and B, but this can be optimized under the hood. See, e.g.,

Methods live on objects and their prototypes. Operators should too. Using a special internal property rather than plain old properties is fine by me but they should be directly associated with the objects involved. I'd argue why that is so important but I'd like to be sure I understand what you're proposing as an alternative. As I understand it you would associate the operators with the constructor functions similar to Python's five minute multimethods. Is that correct?

# Christian Plesner Hansen (15 years ago)

Do have a look at Cecil. Multimethods probably can't get much better than that.

I'm not anti-multimethods. If I were doing a language from scratch it would have multimethods. That doesn't mean that adding them to JS is a good idea, or that mentioning them would further my case.

A side note: while Cecil's model is nice and clean I wouldn't say that it was the Platonic ideal of a multimethod model. For instance, I think its reliance on implementation/representation types for dispatch rather than interface could be improved upon. But that's a whole other discussion.

# Christian Plesner Hansen (15 years ago)

Almost. It's actually a bit worse than that, since presumably Points want to be addable with Numbers:

Point.prototype['+'] = function (that) {
  if (that instanceof Point) {
    return new Point(this.x + that.x, this.y + that.y);
  } else if (that instanceof Number) {
    // again ignoring the difference between numbers and Numbers
    return new Point(this.x + that, this.y + that);
  } else {
    return that['reverse+'](this);
  }
};

I find the use of instanceof problematic for several reasons.

The connection between constructor functions and instances is fragile. Once you've created an instance you're free to change the constructor without those changes being reflected in the instance. What it means to be a Point can change throughout the lifetime of a program.

Furthermore, instanceof is an implementation type query, not an interface or behavior type query. The limitations of implementation types compared with interface types are well understood.

Perhaps Complex(3, 5) === Complex(3, 5) is a better example?

I really don't see it. What is the problem with two complex numbers being distinct object instances and strict equals reflecting that fact? If you want structural equality use ==, that's what it's there for.

Ignoring Operable, in your proposal, how would you handle the need to preserve the legacy + behavior on legacy kinds of objects? The only other possibility I foresee is duck typing on the presence of the 'this+' and/or '+this' property names. That would work.

Duck typing is built into the proposal already. It is folded into steps 2 and 4 as a special case of bailing out if looking up the properties doesn't yield lists. If an internal property were used instead it would have to have a state that reflected "no operators are present", maybe the absence of the internal property, which would serve the same purpose. Instead of giving an error in steps 2 and 4 you would fall through to the legacy behavior if either side didn't respond to operators.
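
In the same style as before, the fallback might read like this (legacyAdd and dispatchFromLists are hypothetical helpers standing in for the current ToPrimitive-based '+' and the intersection dispatch, respectively):

function plusWithFallback(a, b) {
  var oa = a["this+"], ob = b["+this"];
  if (!Array.isArray(oa) || !Array.isArray(ob))
    return legacyAdd(a, b);  // either side lacks operators: legacy '+'
  return dispatchFromLists(oa, ob, a, b);  // intersect and invoke as before
}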

# Brendan Eich (15 years ago)

On Jul 1, 2009, at 12:48 AM, Christian Plesner Hansen wrote:

(1) seems avoidable, though -- assuming it is not a phantom cost.
We have to deal with the symmetric vs. encapsulated issue, though. But
notice that Christian's proposal does not use "this" in the method bodies!

The really nice thing about this from a cost standpoint is that it caches well. If changes to the operators understood by an object are reflected in the object's hidden class (to use v8 terminology) the same way changes to methods are, all the same caching techniques can be used.

Implementations can optimize whether Function.defineOperator binds
internal "this+" and "+this" or adds a multimethod.

A global (hidden class, hidden class, operator)->function cache can be used to ensure that the full operator resolution only has to be done once for any operand type pair, after which the result can be fetched by a simple hash table lookup.

This cuts both ways. A multimethod reifies that "global" (lexically
scoped, rather) cache.

Similarly, inline caching can ensure that the hash lookup only has to be done once for any monomorphic call site; megamorphic call sites, which I would expect were rare, would have to do a hash lookup but not full operator resolution.

By the way, did you say "mutual-asymmetric multimethod" :-)?

Sorry, I kept typing "asymmetric" when I meant "symmetric". Tired here
(new baby at home :-)).

Adding "mutual" didn't help, did it? I was trying to capture the
reciprocal "this+"/"+this" intersection requirement. But it still
seems to me any such internal properties should constitute an
unobservable specification/implementation device.

Below you allow for internal properties, so presumably ("this+" in a)
would not evaluate to true for (a + b) in your examples. So why
require"this+" and "+this" bindings at all? Caching under the hood is
still doable but it is an implementation detail.

Methods live on objects and their prototypes. Operators should too.

That's dogma, and it doesn't prove itself to unbelievers.

Why should operators be methods? Binary operators have no
distinguished receiver. Indeed your proposal has nice generic
functions for addition, no |this| at all. So much for "methods".

Saying that methods should "live" (be bound) somewhere when they do
not use |this| to denote the receiver in which they are bound
(directly or on the prototype chain) seems like even weaker dogma.
Mark's double-dispatch is consistent with OOP. Your proposal seems to
require reciprocal external property mutations on prototypes but only
so the dispatch algorithm can do its simplified CPL thing. Why?

Using a special internal property rather than plain old properties is fine by me but they should be directly associated with the objects involved.

Directly? You wrote "and their prototypes" above, and your
Function.defineOperator examples were passed constructors (from which
to mutate prototypes), not direct instances:

Function.defineOperator('+', Point, Number, pointPlusNumber);

I'd argue why that is so important but I'd like to be sure I understand what you're proposing as an alternative. As I understand it you would associate the operators with the constructor functions similar to python's five minute multimethods. Is that correct?

Yes.

# Brendan Eich (15 years ago)

On Jul 1, 2009, at 1:33 AM, Christian Plesner Hansen wrote:

Almost. It's actually a bit worse than that, since presumably
Points want to be addable with Numbers:

Point.prototype['+'] = function (that) {
  if (that instanceof Point) {
    return new Point(this.x + that.x, this.y + that.y);
  } else if (that instanceof Number) {
    // again ignoring the difference between numbers and Numbers
    return new Point(this.x + that, this.y + that);
  } else {
    return that['reverse+'](this);
  }
};
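
(The 'reverse+' other half is left implicit in the quoted code; spelled
out, it would look roughly like this sketch -- mine, not Mark's text:)

Point.prototype['reverse+'] = function (that) {
  // 'that' is the left operand whose '+' didn't recognize a Point
  if (typeof that === 'number' || that instanceof Number) {
    return new Point(that + this.x, that + this.y);
  }
  throw new TypeError("no '+' operator for these operands");
};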

BTW, Mark's code reminds me of a paper on tracing double-dispatch I've
mentioned in past talks:

www.ics.uci.edu/~franz/Site/pubs-pdf/ICS-TR-07-10.pdf

The approach of double-dispatching on what you know and bottoming out
in a reverse-foo call when you don't know the other operand's type
looks to me like an application of "when in doubt, use brute
force" (Thompson).

That can be an effective strategy, provided you're truly in doubt
about better ways to solve the problem. But it is verbose and probably
error prone. I much prefer something like your

function pointPlusNumber(a, b) { return new Point(a.x + b, a.y + b); }

Function.defineOperator('+', Point, Number, pointPlusNumber);

function numberPlusPoint(a, b) { return new Point(a + b.x, a + b.y); }

Function.defineOperator('+', Number, Point, numberPlusPoint);

function addPoints(a, b) { return new Point(a.x + b.x, a.y + b.y); }

Function.defineOperator('+', Point, Point, addPoints);

to the if/else mess cited above (plus the reverse+ other half).

How much sweeter could this be sugared?

function "+"(a :Point, b :Number) { return new Point(a.x + b, a.y + b); }

function "+"(a :Number, b :Point) { return new Point(a + b.x, a + b.y); }

function "+"(a :Point, b :Point) { return new Point(a.x + b.x, a.y + b.y); }

(Here the quoted operator names imply multimethod addition, where
previously I used |generic function| to Tucker's enthusiastic +∞).

A programming language should not necessarily require low-level coding
from its users just to conserve dispatch mechanism. Users are the
masters, we specifiers and implementors are the servants.

Yes, adding a novel dispatch mechanism adds complexity and cognitive
load for those learning the whole language. No, not everyone learns
the whole language at once. Complexity can be contained if the new
dispatch machinery is well-separated and provided it composes well.
Done right, it's not going to burn any n00bs.

Sure, we could leave out multimethods and use double dispatch. We
could leave out operators altogether and make 'em eat functions! But
the motivation here is usability, not theoretical computability.

I find the use of instanceof problematic for several reasons.

Yeah, that's why I used "types" in scare-quotes. But for now can we
defer the interface or behavior ("type" as a four-letter word) debate,
and try to agree on multimethods vs. double dispatch?

Perhaps Complex(3, 5) === Complex(3, 5) is a better example?

I really don't see it. What is the problem with two complex numbers being distinct object instances and strict equals reflecting that fact? If you want structural equality use ==, that's what it's there for.

No, because for two numbers a and b, a == b <=> a === b. In fact for
any two values, (typeof a == typeof b && a == b) <=> a === b. If we
want to support usable numeric and similar "scalar" type extensions,
we might want to preserve this logic.

# Christian Plesner Hansen (15 years ago)

This cuts both ways. A multimethod reifies that "global" (lexically scoped, rather) cache.

Sure, I'm not saying it's unique to my proposal. I mention it because it's an important aspect to keep in mind when considering the cost of implementing it in practice.

Sorry, I kept typing "asymmetric" when I meant "symmetric". Tired here (new baby at home :-)).

Congratulations! And congratulations on Firefox 3.5 too by the way!

Below you allow for internal properties, so presumably ("this+" in a) would not evaluate to true for (a + b) in your examples. So why require "this+"

I'm all for internal properties rather than "this+" and "+this".

Methods live on objects and their prototypes.  Operators should too.

Saying that methods should "live" (be bound) somewhere when they do not use |this| to denote the receiver in which they are bound (directly or on the prototype chain) seems like even weaker dogma. Mark's double-dispatch is consistent with OOP. Your proposal seems to require reciprocal external property mutations on prototypes but only so the dispatch algorithm can do its simplified CPL thing. Why?

All things being equal I'd probably rather have operator functions outside objects or prototypes. But things are not equal.

Relying on a connection between constructor functions and instances is fragile, as I mentioned in a previous message to Mark about instanceof. You can make changes to a constructor function that are not reflected in existing instances. You can make changes both to instances and constructors that disconnect the two completely. Constructor functions are implementation types (as opposed to interface or behavior types) which is inherently problematic. I wouldn't make this weak connection the cornerstone of operator dispatch.

The connection between prototypes and instances is a much more direct one. If you change a prototype that change is reflected in the instances (modulo shadowing). If I want my type B to have all the same behavior as A I can mix all the methods of A into B's prototype and I will in effect have made B a subtype of A. If operators reside in A's prototype, in whatever form, I can do the same for operators. Whether or not A's '+' operator uses 'this', it's still a part of A's behavior and the most consistent place to put it is where A's other behavior is, its prototype.
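
In ES5 terms the mixing I have in mind is just this (mixBehavior is a throwaway helper, and A and B are the types from the paragraph above):

function mixBehavior(target, source) {
  // copy A's behavior -- methods, and operator lists in whatever form
  // they take -- into B's prototype, without clobbering B's own names
  Object.getOwnPropertyNames(source).forEach(function (name) {
    if (!(name in target)) target[name] = source[name];
  });
}

mixBehavior(B.prototype, A.prototype);  // B now behaves as a subtype of A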

Using a special internal property rather than plain old properties is fine by me but they should be directly associated with the objects involved.

Directly? You wrote "and their prototypes" above, and your Function.defineOperator examples were passed constructors (from which to mutate prototypes), not direct instances:

Right, I could have been more explicit about that. I do mean including prototypes; they fall under "objects involved".

# P T Withington (15 years ago)

On 2009-07-01, at 03:48EDT, Christian Plesner Hansen wrote:

Methods live on objects and their prototypes.

Only if you co-opt the word "method" to mean that. I would claim this
is just shorthand for "instance method". There is also "class method"
or "static method". There are other definitions (see below).

On 2009-07-01, at 10:23EDT, Brendan Eich wrote:

How much sweeter could this be sugared?

function "+"(a :Point, b :Number) { return new Point(a.x + b, a.y + b); }

function "+"(a :Number, b :Point) { return new Point(a + b.x, a + b.y); }

function "+"(a :Point, b :Point) { return new Point(a.x + b.x, a.y + b.y); }

(Here the quoted operator names imply multimethod addition, where
previously I used |generic function| to Tucker's enthusiastic +∞).

Indeed.

Not to suggest paint, but I think the syntactic sugar of Dylan has a
nice color:

a + b is sugar for \+(a, b)

\+ is a generic function (of 2 arguments) to which you can add
"methods". Methods are just syntactic sugar for specifying the
branches of a dispatch algorithm (portal.acm.org/citation.cfm?id=236338.236343). [That is, since + is not a legal identifier, you have to escape
it to use it as an identifier, to name a generic function.]
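
A rough JS analogue of that arrangement, with a closure standing in
for the generic function and instanceof standing in for Dylan's type
tests (all names illustrative; real dispatch would pick the most
specific method rather than the first match):

function makeGeneric(name) {
  var methods = [];
  function generic(a, b) {
    for (var i = 0; i < methods.length; i++) {
      var m = methods[i];
      if (a instanceof m.left && b instanceof m.right) return m.fn(a, b);
    }
    throw new TypeError("no applicable method for " + name);
  }
  generic.defineMethod = function (left, right, fn) {
    methods.push({ left: left, right: right, fn: fn });
  };
  return generic;
}

var plus = makeGeneric("+");
plus.defineMethod(Point, Point, function (a, b) {
  return new Point(a.x + b.x, a.y + b.y);
});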

If "method" has to be used uniquely in Javascript as the word that
means a computation attached to an instance, maybe we just need a new
word to mean "dispatch element of a generic function". I would have
said "generic function method", which (ambiguously) gets abbreviated
to "method", and may be the source of objection?

# Brendan Eich (15 years ago)

On Jul 1, 2009, at 8:14 AM, Christian Plesner Hansen wrote:

This cuts both ways. A multimethod reifies that "global" (lexically
scoped, rather) cache.

Sure, I'm not saying it's unique to my proposal. I mention it because it's an important aspect to keep in mind when considering the cost of implementing it in practice.

All of us optimizing JS VM hackers agree: we will not fail to object
to over-specification that deoptimizes or rules out important
optimizations.
particular optimization either, since what is observable other than
performance won't differ between the overspecified standard and a
hypothetical "just right" spec.

Of course getting it "just right" is hard.

Sorry, I kept typing "asymmetric" when I meant "symmetric". Tired here (new baby at home :-)).

Congratulations! And congratulations on firefox 3.5 too by the way!

Thanks. Helps to keep things in perspective ;-).

I'm all for internal properties rather than "this+" and "+this".

How about an alternative spec that does not preclude hidden class
optimizations based on + overloadings whether by multimethods or "this+"/"+this"?

I suppose we can't defer the "type" issue, though...

Relying on a connection between constructor functions and instances is fragile, as I mentioned in a previous message to Mark about instanceof. You can make changes to a constructor function that is not reflected in existing instances.

Constructor properties don't show up in instances, so I'm not sure why
this is a problem.

You can make changes both to instances and constructors that disconnect the two completely.

Yes, with extensions such as writable __proto__ -- but that's
something we are trying to get rid of.

Likewise, for user-defined function foo, foo.prototype is writable --
but not so for built-in constructor functions, and not so for classes
as sugar (more below).

Point so far is not to disagree but to cite cases showing a mix of
higher- and lower-integrity connections between constructors and their
prototypes.

Constructor functions are implementation types (as opposed to interface or behavior types) which is inherently problematic. I wouldn't make this weak connection the cornerstone of operator dispatch.

I suppose Object.freeze could be seen as a "fix" here, but it is
advisory -- therefore not used when most needed.

The original "classes as sugar" sketch Mark did to general agreement
at Oslo included freezing the class object (constructor function)
and binding the class name as a const (neither writable nor
configurable in the binding object or frame).

So a higher-integrity "fix" would be to make classes as proposed for
Harmony (details still in flux, I think) the cornerstone for operator
dispatch. I think this is the plan for "value types", based on
discussion at the last TC39 meeting. Mark will let us know if he
disagrees.

The connection between prototypes and instances is a much more direct one. If you change a prototype that change is reflected in the instances (modulo shadowing). If I want my type B to have all the same behavior as A I can mix all the methods of A into B's prototype and I will in effect have made B a subtype of A. If operators reside in A's prototype, in whatever form, I can do the same for operators.

You could, but your proposal nicely avoided noisy ".prototype"
expressions all over, as Allen suggested in reply.

Adding Complex, Decimal, Rational, Point, etc. for most users implies,
nay cries out for being able to use these names without having to
suffix them with ".prototype". If the binding of the constructor name
in its object or frame and the binding of its 'prototype' property
are locked down due to how classes as sugar work, then we should be
able to rely on the constructor / prototype relationship.

Whether or not A's '+' operator uses 'this', it's still a part of A's behavior and the most consistent place to put it is where A's other behavior is, its prototype.

Could be, or one could use an optimized multimethod matrix, or some
other implementation approach. Again I do not see why the spec should
dictate observable prototype properties, internal or otherwise. If it
just uses this model but keeps it concealed, then I'm ok with it
provided we don't close the door on multimethods prematurely.

If we want multimethods, then we might prefer the spec to go a
different direction and avoid internal "this+"/"+this", etc.

Not sure this helps get an operators (and only operators) proposal
together. Good discussion, though!

# Maciej Stachowiak (15 years ago)

On Jun 28, 2009, at 7:05 AM, Christian Plesner Hansen wrote:

Following the decimal discussion where values came up I looked for information on that. The most substantial I could find was Mark's operator overloading strawman.

I think the current strawman has a number of weaknesses. It relies on the (hypothetical) type system which makes it fundamentally different from normal method dispatch. It uses double dispatch for all user-defined operators which makes optimization difficult. It is asymmetrical: the left hand side always controls dispatch.

I'd like to propose an alternative approach that avoids these problems. You could call it "symmetric" operator overloading. When executing the '+' operator in 'a + b' you do the following (ignoring inheritance for now):

1: Look up the property 'this+' in a, call the result 'oa'
2: If oa is not a list give an error, no '+' operator
3: Look up the property '+this' in b, call the result ob
4: If ob is not a list give an error, no '+' operator
5: Intersect the lists oa and ob, call the result r
6: If r is empty give an error, no '+' operator
7: If r has more than one element give an error, ambiguity
8: If r[0], call it f, is not a function give an error
9: Otherwise, evaluate f(a, b) and return the result.

The way to define a new operator is to add the same function to the 'this+' list of the left-hand side and the '+this' list on the right-hand side. This would probably be handled best by a simple utility function, say Function.defineOperator:

function pointPlusNumber(a, b) { return new Point(a.x + b, a.y + b); }

Function.defineOperator('+', Point, Number, pointPlusNumber);

Using this approach it's easy to extend symmetrically:

function numberPlusPoint(a, b) { return new Point(a + b.x, a + b.y); }

Function.defineOperator('+', Number, Point, numberPlusPoint);

In this case you need two different functions, one for Point+Number and one for Number+Point, but in many cases the same function can be used for both:

function addPoints(a, b) { return new Point(a.x + b.x, a.y + b.y); }

Function.defineOperator('+', Point, Point, addPoints);

This approach is completely symmetric in the two operands. Operator dispatch is fairly similar to ordinary method dispatch and doesn't rely on a type system. Programmers don't have to manually implement double dispatch. It may be a bit more work to do lookup but on the other hand it is straightforward to apply inline caching so often you'll only need to do that work once. It can be extended to deal with inheritance -- you might consider all operators up through the scope chain and pick the most specific one based on how far up the scope chain you had to look.

It may look a bit foreign but I think it has a lot of nice properties. Comments?

It seems like once you do the defineOperator calls you described
above, and assuming they work for primitive numbers and not just
Number objects, then adding two numbers would require two property
lookups, unless there is an exception for operating on two primitive
values.

- Maciej

# Christian Plesner Hansen (15 years ago)

I think we could get away with making an exception there. The spec algorithm for how to execute '+' would check if the operands were primitive numbers before attempting user-defined operators.

There are already cases where changes to Number.prototype don't propagate to primitive numbers, leading them to behave differently. For instance:

Number.prototype.valueOf = function () { return 4; };
print(1 + 1); // yields 2
print(1 + new Number(1)); // yields 5
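
The check itself is cheap -- something like this, where dispatchPlus stands in (hypothetically) for the user-defined operator resolution:

function addWithOperators(a, b) {
  if (typeof a === "number" && typeof b === "number")
    return a + b;  // primitive fast path, no property lookups
  return dispatchPlus(a, b);  // symmetric resolution as proposed
}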

# Christian Plesner Hansen (15 years ago)

Likewise, for user-defined function foo, foo.prototype is writable -- but not so for built-in constructor functions, and not so for classes as sugar (more below).

All JS code currently in existence is based on user-defined functions. For me that is the only case worth considering. That might explain the differences in our views.

I'm not a language revolutionary; I prefer gradual evolution.

The original "classes as sugar" sketch Mark did to general agreement at Oslo included freezing the class object (constructor function) and bind the class name as a const (neither writable nor configurable in the binding object or frame).

So a higher-integrity "fix" would be to make classes as proposed for Harmony (details still in flux, I think) the cornerstone for operator dispatch. I think this is the plan for "value types", based on discussion at the last TC39 meeting. Mark will let us know if he disagrees.

If you have the option to use a model that works with the existing language and that integrates well (that's subjective of course), why would you limit it to classes or values? The reliance of these new features on each other will just lead them to become a language-within-a-language, rather than an evolution of the existing language.

You could, but your proposal nicely avoided noisy ".prototype" expressions all over, as Allen suggested in reply.

That was a typo.

Adding Complex, Decimal, Rational, Point, etc. for most users implies, nay cries out for being able to use these names without having to suffix them with ".prototype". If the binding of the constructor name in its object or frame and the binding of its 'prototype' property are locked down due to how classes as sugar work, then we should be able to rely on the constructor / prototype relationship.

If people can live with defining their methods as

Foo.prototype.bar = function () { ... }

I think they can live with writing .prototype when defining operators. In fact I like the uniformity that you have to use the prototype in both cases. Note that library users won't see this, only the library implementer. In any case, what users will "cry out for" is speculation.

Whether or not A's '+' operator uses 'this', it's still a part of A's behavior and the most consistent place to put it is where A's other behavior is, its prototype.

Could be, or one could use an optimized multimethod matrix, or some other implementation approach. Again I do not see why the spec should dictate observable prototype properties, internal or otherwise. If it just uses this model but keeps it concealed, then I'm ok with it provided we don't close the door on multimethods prematurely.

JS allows introspection and self-modification and both are widely used and cited as advantages of the language. I would want to extend that to cover operators as well. Having an internal property with some associated utility methods on Object is a nice way to get that.

# Brendan Eich (15 years ago)

On Jul 3, 2009, at 1:21 AM, Christian Plesner Hansen wrote:

Likewise, for user-defined function foo, foo.prototype is writable
-- but not so for built-in constructor functions, and not so for classes
as sugar (more below).

All JS code currently in existence is based on user-defined functions. For me that is the only case worth considering. That might explain the differences in our views.

That doesn't square with your proposal:

function pointPlusNumber(a, b) { return new Point(a.x + b, a.y + b); }

Function.defineOperator('+', Point, Number, pointPlusNumber);

If you want to amend per Allen's suggestion by imposing fruitless,
verbose ".prototype" taxes on the last line, so it reads

Function.defineOperator('+', Point.prototype, Number.prototype,
pointPlusNumber);

(or with some other argument order), then what good have you done for
users? Unless Point and Number are frozen or otherwise configured in a
non-default way, the next line could change one or both .prototype
values and make later instances miss the operator.

Sure, that would be a case of "Doctor, it hurts when I do this" (so
don't do it), but the same argument applies to instanceof, yet one
writes (p instanceof Point), not (p instanceof Point.prototype).

You can't have it both ways. Either mutable .prototype (or mutable
function binding, e.g. Point in the global object, or whatever scope
it's bound in) is a problem that requires freezing or equivalent
lock-down, or it isn't. In any event, there's no reason to impose
the .prototype tax, and an instanceof-consistency (evolutionary :-P)
reason not to.

I'm not a language revolutionary, I prefer gradual evolution.

Your political rhetoric is off target. Classes as sugar are not
revolutionary, they build on evolutionary ES3 closures and ES5
Object.freeze and similar features by adding syntactic sugar. They are
mostly about "best practices" and "best usability" design defaults for
higher-integrity factories than functions called via new can ever be
by themselves (due to compatibility).

If you have the option to use a model that works with the existing language and that integrates well even (that's subjective of course) why would you limit it to classes or values?

Because we're talking about value types, with that (typeof a == typeof
b && a == b) <=> a === b implication extensible to user-defined value

types, along with numeric literal syntax.

Value types are new "UI". The benefits won't accrue to user-defined
functions as constructors without more opt-in API. And value types
deserve first-class syntax, just as classes as sugar do. C# has
structs, as Sam pointed out.

Without picking the exact syntax right now, the point remains that we
have not been proposing only operators, or even only operators for
user-defined constructor functions. We're talking about user-defined
value types, ideally with concise, convenient defining as well as
invoking syntax.

The reliance of these new features on each other will just lead them to become a language-within-a-language, rather than an evolution of the existing language.

There are lots of subsets in the language already; this is a teaching
and user preference outcome that we can't and shouldn't stop. Multiple
paradigms and well-ordered subsets are not a problem; they are a
success condition.

But adding operators without considering first-class value types (==/=== and literal syntax) looks like a mistake.

You could, but your proposal nicely avoided noisy ".prototype"
expressions all over, as Allen suggested in reply.

That was a typo.

Why require .prototype noise when instanceof does not? Talk about
evolution vs. revolution doesn't justify this break from instanceof's
reliance on users to keep their constructor.prototype values intact --
or not, as they please.

If people can live with defining their methods as

Foo.prototype.bar = function () { ... }

That's not the l33t way, of course:

Foo.prototype = { bar: function () {...} ... };

But this is noisy and error-prone still, and you are wrong that what
one "can live with" should be the last word. People "live with" all
sorts of problems in the world, including in JS as it is, and
certainly as it was before ES5 (which is definitely not the last word).

Classes as sugar proposes to sweeten such "method definitions", to
lighten the boilerplate tax and reduce opportunities for nesting/bracing/"this"-binding errors.

I think they can live with writing .prototype when defining operators.

Your bikeshed is ugly and overlarge. :-P

In fact I like the uniformity that you have to use the prototype in both cases.

No uniformity with instanceof -- is that incipient revolution?

Perhaps just inconsistency for the sake of low-level explicitness, to
be as fair as possible. Or possibly pedantry, but again: if
user-defined constructor .prototype references are used and neither
the constructor binding in its scope object nor the .prototype
property is non-writable and non-configurable, then what is the point?

If the "type" name and .prototype are locked down (as with classes
as sugar), then there's no good reason for .prototype, it's just a
"syn-"tax and an unjustified inconsistency with respect to instanceof.

Note that library users won't see this, only the library implementer. In any case, what users will "cry out for" is speculation.

We're talking about UI now, so user wishes count. And my bikeshed is
prettier and smaller -- I invite others (Tucker can recuse himself ;-)
to weigh in.

function "+"(a :Point, b :Number) { return new Point(a.x + b, a.y + b); }

function "+"(a :Number, b :Point) { return new Point(a + b.x, a + b.y); }

function "+"(a :Point, b :Point) { return new Point(a.x + b.x, a.y + b.y); }

I'm not sold on Dylan's \+ (backslashes are troublesome and
overloaded), but like quoting, it is a clean extension to the grammar.

Whether or not A's '+' operator uses 'this', it's still a part of
A's behavior and the most consistent place to put it is where A's other behavior is, its prototype.

Could be, or one could use an optimized multimethod matrix, or some
other implementation approach. Again I do not see why the spec should
dictate observable prototype properties, internal or otherwise. If it just
uses this model but keeps it concealed, then I'm ok with it provided we don't
close the door on multimethods prematurely.

JS allows introspection and self-modification and both are widely used and cited as advantages of the language.

Self-modification is widely criticized too, even in this thread. See
Allen's measured post on Smalltalk history. Monkey-patching bites
back; the ability to do it is a mixed blessing.

I chose to make most things mutable in Netscape 2 (even built-in
constructor.prototype, if memory serves) because I had too little time
to get the core language right, and I knew users would need to monkey-patch. That was then. As noted earlier in this thread, with ES5's
Object.freeze etc., it will become harder over time to count on the
ability to monkey-patch.

I would want to extend that to cover operators as well.

Why? I'm asking for specific use-cases, e.g. one-off or otherwise ad-hoc operator bindings.

You still haven't addressed the ==/=== relation and other "value type"
issues such as literal notation extensibility.

Having an internal property with some associated utility methods on Object is a nice way to get that.

The internal property is invisible to users of the language, and could
be replaced by other implementation techniques. That's why I question
it as a spec device.

You really did propose symmetric multimethods, but hooked them up
"backstage of the language" using "this+" and "+this", etc. At this
stage I contend we should focus on the fundamental design elements
including multimethods, and not on the ES1-5-style internal property
spec hacks.

And of course prettier, more usable bikesheds are fair game, if anyone
wants to comment. Operators and other value type extensibility are
motivated by usability problems with functions and methods and numeric
literals jammed into strings.

# Christian Plesner Hansen (15 years ago)

There are lots of subsets in the language already, this is a teaching and user preference outcome that we can't and shouldn't stop. Multiple paradigms and well-ordered subsets are not a problem, they are a success condition. But adding operators without considering first-class value types (==/=== and literal syntax) looks like a mistake.

This is the critical point. Do you want operator overloading to extend to classic objects (that is, instances of user-defined functions) or to be restricted to the values/classes/types subset? That's not so much a technical discussion as a question of design philosophy. I assumed that integration with the existing language was desirable. If not then the proposal is moot.

No, because for two numbers a and b, a == b <=> a === b. In fact for any two values, (typeof a == typeof b && a == b) <=> a === b. If we want to support usable numeric and similar "scalar" type extensions, we might want to preserve this logic.

(I forgot to respond to this from a previous message)

What does "usable" mean? If operators did work for classic objects why not use them for scalars? There's === but I don't know why you would want it to be anything other than object identity. Is anybody relying on (typeof a == typeof b && a == b) <=> a === b?

# Brendan Eich (15 years ago)

On Jul 3, 2009, at 5:29 AM, Christian Plesner Hansen wrote:

This is the critical point. Do you want operator overloading to extend to classic objects (that is, instances of user-defined functions) or to be restricted to the values/classes/types subset? That's not as much a technical discussion but a question of design philosophy. I assumed that integration with the existing language was desirable. If not then the proposal is moot.

Operators are currently hardcoded in the language, but the
specification could be recast using multimethods or double dispatch and
then extended to allow users to define operators on new "types" -- but
what would be the nature of those types?

Anything like a number (primitive, as you note the wrapping with
Number is an old hack) does not act like a reference (object, typeof x
== "object") type. It's a value type.

This quest for extensible value types was how TC39 generalized the
problem statement raised by decimal.

What does "usable" mean? If operators did work for classic objects why not use them for scalars?

"Usable" takes in operators for new value types, along with literal
notation. These are aspirations, not obviously out of reach.
Waldemar's original JS2/ES4 work from ~10 years ago had "units": www.mozilla.org/js/language/js20-2002-04/core/lexer.html#units

Since number, boolean, and string are not classic objects the
generalization from hardcoding does not proceed from "classic objects"
to novel scalar types, but rather from the non-null/void primitive types.

There's === but I don't know why you would want it to be anything other than object identity. Is anybody relying on (typeof a == typeof b && a == b) <=> a === b?

Lots of code on the web happens to depend on this because some folks
(Doug Crockford recommends this) use === while others use == but
"usually" with same-typed operands. I think I've even seen some code
mix == and === usage.

# Christian Plesner Hansen (15 years ago)

Operators are currently hardcoded in the language, but the specification could be recast using multimethods or double dispach and then extended to allow users to define operators on new "types" -- but what would be the nature of those types?

Anything like a number (primitive, as you note the wrapping with Number is an old hack) does not act like a reference (object, typeof x == "object") type. It's a value type.

This quest for extensible value types was how TC39 generalized the problem statement raised by decimal.

If TC39 has decided that value types are the way to go then I agree, operators have to be considered in that context. On the other hand if operator overloading did work for ordinary objects I would suggest reconsidering whether value types were the simplest way to implement use cases like decimal. I'm no fan of having numbers and booleans hardcoded but at least it's an inconsistency that JS shares with most other languages.

What does "usable" mean?  If operators did work for classic objects why not use them for scalars?

"Usable" takes in operators for new value types, along with literal notation. These are aspirations, not obviously out of reach. Waldemar's original JS2/ES4 work from ~10 years ago had "units": www.mozilla.org/js/language/js20-2002-04/core/lexer.html#units

Custom literals are a nasty problem. To work with decimal literals, for instance, the compiler can't be allowed to interpret numeric constants; that has to be delegated to the decimal library. Value types would probably make some aspects of it easier (though you have to watch out if you want to avoid one-pass semantics) but as far as I can see only the simple aspects.

Since number, boolean, and string are not classic objects the generalization from hardcoding does not proceed from "classic objects" to novel scalar types, rather from the non-null/void primitive types.

That's a matter of which angle you want to view it from. But I take the point that the same holds for my view.

There's === but I don't know why you would want it to be anything other than object identity.  Is anybody relying on (typeof a == typeof b && a == b) <=> a === b?

Lots of code on the web happens to depend on this because some folks (Doug Crockford recommends this) use === while others use == but "usually" with same-typed operands. I think I've even seen some code mix == and === usage.

I won't try to guess whether breaking this relation will cause problems in practice. If it's really one of the central reasons for having value types it would be worth looking into.

# Brendan Eich (15 years ago)

On Jul 5, 2009, at 7:52 AM, Christian Plesner Hansen wrote:

Custom literals is a nasty problem. To work with decimal literals, for instance, the compiler can't be allowed to interpret numeric constants; that has to be delegated to the decimal library.

To clarify, we wouldn't make 123 or 3.14 be extensible (at least not
without some "use decimal" pragma, which seems very difficult to
implement, since it would not affect lexical scope or other compile-time constructs, but rather have runtime effects). You'd have to write
1.1m or similar.
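
To fix ideas, the lexer would hand the raw digits to the library
instead of interpreting them -- something like this hypothetical
desugaring, assuming a Decimal value-type constructor:

// surface syntax:  var x = 1.1m;
// one plausible desugaring:
var x = Decimal("1.1");  // the library, not the lexer, reads the digits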

# Alex Russell (15 years ago)

On Jul 3, 2009, at 1:21 AM, Christian Plesner Hansen wrote:

Likewise, for user-defined function foo, foo.prototype is writable
-- but not so for built-in constructor functions, and not so for classes
as sugar (more below).

All JS code currently in existence is based on user-defined functions. For me that is the only case worth considering. That might explain the differences in our views.

I'm not a language revolutionary, I prefer gradual evolution.

The original "classes as sugar" sketch Mark did to general
agreement at Oslo included freezing the class object (constructor function) and
binding the class name as a const (neither writable nor configurable in the
binding object or frame).

This point disturbs me. Making classes frozen solves no existing JS
programmer problems, introduces new restrictions with no mind paid to
the current (useful) patterns WRT the prototype chain, and
introduces the need for a const that only seems there to make some
weird security use-cases work. And I say that with all sympathy to
weird language features and the security concerns in question.

Why should this stuff be the default? It's time to admit that built-ins in JS are weird and that as the weirdos, they deserve to work
harder to make things happen to/for them, particularly since language
implementer code size is incomparably cheap compared to script author
code size.

Questions of "integrity" here to my mind should be justified by why
they'll be good for ALL code, not just abuse of built-ins, and then
weighed against other possible solutions. Freezing classes seems
premature to me.

# Mark S. Miller (15 years ago)

On Mon, Jul 6, 2009 at 6:10 PM, Alex Russell<alex at dojotoolkit.org> wrote:

The original "classes as sugar" sketch Mark did to general agreement at Oslo included freezing the class object (constructor function) and bind the class name as a const (neither writable nor configurable in the binding object or frame).

This point disturbs me. Making classes frozen solves no existing JS programmer problems, introduces new restrictions with no mind paid to the current (useful) patterns WRT the prototype chain, and introduces the need for a const that only seems there to make some weird security use-cases work. [...] Questions of "integrity" here to my mind should be justified by why they'll be good for ALL code, not just abuse of built-ins, and then weighed against other possible solutions. Freezing classes seems premature to me.

For low integrity patterns (or "loosey goosey" as Allen sometimes says), JavaScript has always been and will remain a fine language. In ES5 for the first time, it is possible to engage in high integrity patterns; but it is not easy. We don't need syntactic sugar to provide yet another way to express what can already be expressed easily. It is when something valuable becomes possible, but can't be made easy without syntactic sugar -- like high integrity programming in ES5 -- that syntactic sugar may be justified.
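
To make "possible, but can't be made easy" concrete, here is roughly what a high integrity point abstraction looks like in raw ES5, with no sugar at all (a sketch; the names are illustrative):

function makePoint(x, y) {
  // encapsulated state via closure; tamper-proof interface via freeze
  return Object.freeze({
    getX: function () { return x; },
    getY: function () { return y; },
    add: function (p) { return makePoint(x + p.getX(), y + p.getY()); }
  });
}

var p = makePoint(1, 2);  // p's interface can no longer be altered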

So no, it doesn't need to be justified for all code. It just needs to be justified by code that isn't well served by ES5 as it is. No one is proposing removing JavaScript's support for making low integrity programming easy.

As for whether high integrity is only of interest to security weirdos, I'll just remind everyone that the notion of objects [Simula, Smalltalk, Actors] -- or similarly abstract data types [Clu] -- was conceived from the beginning as: reusable packages of encapsulated state together with the code responsible for maintaining that state's invariants and presenting an abstract interface. That interface quickly came to be characterized in terms of pre-conditions and post-conditions [Liskov].

Virtually none of the systems or literature that brought about this revolution was concerned with security per se. These notions were created for the sake of modularity, abstraction, and flexible composability. None of these concerns have gone away; hence the frequent complaint that JavaScript today provides inadequate support for serious large scale programming -- a need which classes-as-sugar hopes to address.

Finally, I make no apology for being a security weirdo or for wanting to see JavaScript become more securable. JavaScript is uniquely positioned as the fastest growing portion of many organizations' attack surface; and often also their weakest link. This is a role for which it was not designed and for which it has been ill suited. By the same token, JavaScript is also uniquely positioned to capitalize on prior work on programming language security. Its continued evolution should indeed be shaped by the unique niche it occupies.

# Brendan Eich (15 years ago)

On Jul 6, 2009, at 6:10 PM, Alex Russell wrote:

This point disturbs me. Making classes frozen solves no existing JS
programmer problems, introduces new restrictions with no mind paid
to the current (useful) patterns of WRT the prototype chain, and
introduces the need for a const that only seems there to make some
weird security use-cases work. And I say that with all sympathy to
weird language features and the security concerns in question.

Why should this stuff be the default?

What Mark said, I agree completely with his post: "this stuff" is
not "the default" -- function as constructor is the default, and no
one is salting that syntax. So why are you against sugar for high-integrity programmers? It's not as if those weirdos will take over the
world, right?

If you're worried that the 'class' keyword will hypnotize the masses
into locking things down overmuch, then you don't trust the masses
very much. They'll mostly avoid new stuff, and any who get burned by
inability to monkey-patch (which is not an unmixed blessing) will
retreat, and probably start a "don't use 'class'" movement.

This is not 'final' in Java (which pace Mark, I believe was overused,
including in the standard library, but which also has different enough
semantics that I wouldn't drag it in here as a precedent). We're
talking about sugar for ES3 and ES5 facilities already implemented, or
nearly implemented. There was no such underlying metaprogramming API
prefiguring final in Java, AFAIK.

There should be sugar for high- and low-integrity programmers, and
then they can get back to their standoff. :-/

# David-Sarah Hopwood (15 years ago)

Brendan Eich wrote:

On Jul 6, 2009, at 6:10 PM, Alex Russell wrote:

This point disturbs me. Making classes frozen solves no existing JS programmer problems, introduces new restrictions with no mind paid to the current (useful) patterns WRT the prototype chain, and introduces the need for a const that only seems there to make some weird security use-cases work. And I say that with all sympathy to weird language features and the security concerns in question.

Why should this stuff be the default?

What Mark said, I agree completely with his post: "this stuff" is not "the default" -- function as constructor is the default, and no one is salting that syntax. So why are you against sugar for high-integrity programmers? It's not as if those weirdos will take over the world, right?

We security weirdos fully intend to take over the world. You have been warned.

# Mike Wilson (15 years ago)

Brendan Eich wrote:

If you're worried that the 'class' keyword will hypnotize the masses into locking things down overmuch, then you don't trust the masses very much. They'll mostly avoid new stuff, and any who get burned by inability to monkey-patch (which is not an unmixed blessing) will retreat, and probably start a "don't use 'class'" movement.

This is not 'final' in Java (which pace Mark, I believe was overused, including in the standard library, but which also has different enough semantics that I wouldn't drag it in here as a precedent). We're talking about sugar for ES3 and ES5 facilities already implemented, or nearly implemented.

Will class inheritance then desugar to classical prototype/constructor assignments?

Best, Mike Wilson

# Brendan Eich (15 years ago)

On Jul 21, 2009, at 6:07 PM, Mike Wilson wrote:

Brendan Eich wrote:

If you're worried that the 'class' keyword will hypnotize the masses into locking things down overmuch, then you don't trust the masses very much. They'll mostly avoid new stuff, and any who get burned by inability to monkey- patch (which is not an unmixed blessing) will retreat, and probably start a "don't use 'class'" movement.

This is not 'final' in Java (which pace Mark, I believe was overused, including in the standard library, but which also has different enough semantics that I wouldn't drag it in here as a precedent. We're talking about sugar for ES3 and ES5 facilities already implemented, or nearly implemented.

Will class inheritance then desugar to classical prototype/constructor assignments?

The current class proposal,

strawman:classes

has zero inheritance.

# Mike Wilson (15 years ago)

Brendan Eich wrote:

On Jul 21, 2009, at 6:07 PM, Mike Wilson wrote:

Will class inheritance then desugar to classical prototype/constructor assignments?

The current class proposal,

strawman:classes

has zero inheritance.

Ah right, thanks for the pointer. Do you expect it to stay this way, and are there good reasons for not providing inheritance?

Best, Mike

# Christian Plesner Hansen (15 years ago)

The current class proposal,

strawman:classes

has zero inheritance.

I was under the impression that the plan was to develop some form of inheritance model too. The strawman says that there is no inheritance to keep the design simple. Are you sure you're not throwing the baby out with the bathwater here?

# Alex Russell (15 years ago)


On Jul 22, 2009, at 3:10 AM, Christian Plesner Hansen wrote:

The current class proposal,

strawman:classes

has zero inheritance.

I was under the impression that the plan was to develop some form of inheritance model too. The strawman says that there is no inheritance to keep the design simple. Are you sure you're not throwing the baby out with the bathwater here?

Lots of babies have already been thrown out in this proposal
(euphemistically named "Resolved Design Choices"):

  • private by default favors "high integrity" uses of the language,
    punishing the common case and current practice
  • closed classes ignore both the lessons of Ruby and the way that
    prototypes are currently extended in, say, jQuery (see the sketch after this list)
  • the word "prototype" doesn't occur in the proposal anywhere,
    meaning that it's silent on the one existing and useful form of object/factory-function delegation.
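
For concreteness, here is a minimal sketch of the extension pattern that closed classes would rule out: the jQuery plugin idiom, where jQuery.fn is just an alias for jQuery.prototype. The 'highlight' plugin itself is hypothetical:

// A made-up plugin added by assigning to the shared prototype:
jQuery.fn.highlight = function (color) {
  // 'this' is the jQuery collection; return it to keep calls chainable
  return this.css('background-color', color || 'yellow');
};

// usage: jQuery('p').highlight();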

My sense of it is that this proposal is driven by a desire to avoid
hierarchy-as-composition problems (hoisting of methods to high
positions in hierarchies, etc.) -- which is a well-founded fear.
However, it avoids it not by adding a composition mechanism like
traits, but instead seeks to turn classes into factories of objects
that can't be augmented to fix the errors of initial designs, can't be
easily composed to form new types, and don't fit well with how current
JS programmers think about their language.

In short, it helps Harmony classes make all of the same mistakes that
DOM host objects currently plague developers with.



# Mark S. Miller (15 years ago)

On Wed, Jul 22, 2009 at 3:10 AM, Christian Plesner Hansen <christian.plesner.hansen at gmail.com> wrote:

The current class proposal,

strawman:classes

has zero inheritance.

I was under the impression that the plan was to develop some form of inheritance model too.  The strawman says that there is no inheritance to keep the design simple.  Are you sure you're not throwing the baby out with the bathwater here?

I have been taking the thread starting at esdiscuss/2009-March/009115, together with its references to relevant earlier threads, as the strawman of interest. Several people have encouraged me to update the wiki. My apologies for not having done so.

The proposal in that latest thread itself does not support inheritance, which seems consistent with the general sense of the committee. However, if we do decide to support inheritance, the earlier thread at esdiscuss/2008-August/006941 shows how to support single inheritance and linearized multiple inheritance in the same framework.

Alex mentions traits. Traits seem like a more structured form of behavior sharing than classical inheritance. Lately Alex and I have been reading prog.vub.ac.be/Publications/2009/vub-prog-tr-09-04.pdf.

Later today I will post some initial thoughts on how this could also be supported within the same general framework.

The design of well-structured inheritance-like behavior sharing machinery is far from settled. As with data types, the ideal outcome would be for ES6 itself to support all of these well as patterns, without building any of them in, so that our users and library authors can try multiple experiments.
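
As a taste of what "patterns, not built-ins" could look like, here is a minimal sketch of trait-style composition as a plain ES5 function. It is my own illustration, not the paper's library; the conflict check is what distinguishes traits from naive mixin copying:

function composeTraits(t1, t2) {
  var result = {};
  Object.keys(t1).forEach(function (name) {
    result[name] = t1[name];
  });
  Object.keys(t2).forEach(function (name) {
    // traits make name clashes an explicit error rather than
    // silently letting one provider win, as mixin copying would
    if (Object.prototype.hasOwnProperty.call(result, name) &&
        result[name] !== t2[name]) {
      throw new Error('trait conflict: ' + name);
    }
    result[name] = t2[name];
  });
  return result;
}

Two traits sharing a method name then fail loudly at composition time instead of silently shadowing each other.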

# Brendan Eich (15 years ago)

On Jul 22, 2009, at 3:10 AM, Christian Plesner Hansen wrote:

The current class proposal,

strawman:classes

has zero inheritance.

I was under the impression that the plan was to develop some form of inheritance model too. The strawman says that there is no inheritance to keep the design simple. Are you sure you're not throwing the baby out with the bathwater here?

Nothing is thrown out by starting simple and then considering our
options.

We all know many ways (too many!) to support inheritance. Would it
help to pick a winner prematurely, compared to giving people sugar for
high-integrity factories, which they must write the long way in ES5,
or not at all in current JS?
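
For reference, a minimal sketch (mine, not the strawman's exact desugaring) of such a high-integrity factory written the long way in ES5:

function makePoint(x, y) {
  // x and y live only in the closure, so they are private;
  // freezing the record prevents later monkey-patching
  return Object.freeze({
    getX: function () { return x; },
    getY: function () { return y; },
    add: function (other) {
      return makePoint(x + other.getX(), y + other.getY());
    }
  });
}

Nothing here is reachable or patchable from outside the frozen record; the sugar would just spare you writing it all by hand.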

# Mike Wilson (15 years ago)

[Still staying in this thread although it's a bad match for the subject line, as I'd like to continue the discussion of open vs closed classes]

Brendan Eich wrote:

On Jul 22, 2009, at 3:10 AM, Christian Plesner Hansen wrote:

The current class proposal,

strawman:classes

has zero inheritance.

I was under the impression that the plan was to develop some form of inheritance model too. The strawman says that there is no inheritance to keep the design simple. Are you sure you're not throwing the baby out with the bathwater here?

Nothing is thrown out by starting simple and then considering our
options.

I agree a lot with Alex about wanting the ability to create non-frozen classes with the new class mechanism (acknowledging that I don't have the whole picture as you in the committee do).

Once the new class mechanism was deployed I was hoping to be able to use it for any class-oriented problem, and not having to fall back on old-style prototypes/constructors for certain kinds of classes.

With the current (small) feature list of classes I guess it's hard to provide any hard arguments against keeping old-style constructors for open classes, and it is elegant to keep the purpose of existing language mechanisms. Though, to me it looks like there is a risk that this design will not fit so well when the feature list is later expanded with, e.g., inheritance or class introspection, and that will leave open classes in the backwater, with developers facing questions like:

  • Ah, you want to use the new non-clunky inheritance mechanism that doesn't force you to actually set up the inheritance through a constructor call to the superclass? Sorry, only available in frozen classes.
  • Or, you want the new inheritance mechanism that chains constructor calls at instance creation? Only available in frozen classes.
  • Or, you want to introspect the class to see what members instances will get, without creating and introspecting an actual instance? Only in frozen classes. (See the sketch after this list.)
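
To illustrate that last point with made-up names: with old-style constructors, per-instance members only come into existence when the constructor actually runs, so there is nothing to inspect up front.

function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

Object.keys(Point.prototype); // ['norm'] -- methods are visible...
Object.keys(new Point(0, 0)); // ['x', 'y'] -- ...but members need an instance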

Of course I am only speculating about potential future additions here, but it is easy to fear that the old-style constructors will have a hard time keeping up with a class mechanism that actually has a class declaration to build features upon.

Best Mike