Protocol library as alternative to refinements (Russell Leggett)
On Mon, Oct 21, 2013 at 3:17 PM, Benjamin (Inglor) Gruenbaum <inglor at gmail.com> wrote:
what's "the default" in #4? The protocol's default? What's the behavior if no matching method is found?
The default is something I go into in a little bit of detail further down:
Collections.defaults({
  each(iterator, context){
    //breaker is a sentinel value used to stop iteration early
    if (this.length === +this.length) {
      for (var i = 0, length = this.length; i < length; i++) {
        //notice we also get to use :: for a simple call replacement
        if (context::iterator(this[i], i, this) === breaker) return;
      }
    } else {
      var keys = this.keys();
      for (var i = 0, length = keys.length; i < length; i++) {
        if (context::iterator(this[keys[i]], keys[i], this) === breaker) return;
      }
    }
  },
This is defining a sort of default implementation of the method for the protocol, meaning that a type does not have to provide its own type-specific implementation of it in order to use the protocol.
Also, can anyone explain why this solves the performance problem scoped object extensions have? It still seems like it would have to check the environment for protocols and then check the methods on all available protocols and do type matching to the type of the method.
Collections.extend(Array) seems awfully similar to an array extension; how does the :: operator resolve the need for expensive lookup? Can you explain that to me?
Yes - the reason is that no new scopes or environments have been created. The protocol and its methods are simply variables - objects like anything else. Getting a protocol's method is not really any different from:
import {map} from 'UnderscOOre';
//basically the same as
let {map} = Array.prototype;
//this is effectively the same whether you got it from the protocol
//or the function pulled off of Array.prototype
arr::map( x => x+2);
The magic is in what happens when you call one of these protocol methods. The simplest to understand/naive approach would basically be that inside of a protocol, for each protocol method, you hold all the implementations in a map going from type to implementation. The actual method used would inspect |this|, follow the algorithm, and use the maps to figure out which implementation to use. This would have a penalty - but it would be limited to the protocol method calls. Scoped extensions would apply a penalty to all function calls everywhere. If protocols were natively supported, or at least some kind of hook for single dispatch on type existed, I'm pretty sure you could get function calls that were at least as fast as normal prototype-based methods. Even if you didn't get native support, I have a feeling that something more clever than the naive approach could be used to hit the sweet spot and get some polymorphic inline caching, but maybe not.
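To make that naive approach a little more concrete, here is a rough sketch - names made up for illustration, not how the gist actually does it - of a single protocol method dispatching through a per-method map from constructor to implementation:

// One registry per protocol method, e.g. populated by Collections.extend(Array, {...})
const mapImpls = new Map();   // constructor -> implementation of "map"
let defaultMap = null;        // optional default implementation

function map(...args) {
  // Walk the receiver's prototype chain looking for a registered implementation.
  let proto = Object.getPrototypeOf(this);
  while (proto !== null) {
    const impl = mapImpls.get(proto.constructor);
    if (impl) return impl.apply(this, args);
    proto = Object.getPrototypeOf(proto);
  }
  if (defaultMap) return defaultMap.apply(this, args);
  throw new TypeError("map: no protocol implementation for this receiver");
}

// arr::map(f) is then just map.call(arr, f) - the lookup cost is paid only on
// protocol calls, not on ordinary property access or other function calls.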
Great, thanks for the clarifications. A few more scenarios:
----- Case 1:
I have an object O of type Foo. A protocol P.
- O has the structure {x:5, foo:7};
- The protocol implements foo, but not specifically for Foo (only via .defaults).
What happens? Does P.foo get invoked, or do we get an error because O.foo is not a function?
---- Case 2:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, foo:7};
- The protocol implements foo for Bar specifically (but not for Foo).
What happens? (Similar to case above?)
---- Case 3:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, y:7};
- Bar has a method foo
- The protocol implements foo for Bar specifically (but not for Foo).
What happens? Does it invoke Bar.foo or P.foo?
---- Case 4:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, y:7};
- Bar has a method foo
- The protocol implements foo for Bar specifically (but not for Foo).
What happens? Does it invoke Bar.foo or P.foo?
---- Case 5:
I have an object O of type Foo, I import two Protocols that implement a method Foo at the same specificity level - what happens?
I'll preface this by saying that I haven't made a formal proposal and this isn't an actual library. You're doing a good job of spotting some undefined behavior which would probably be better defined by trying it out. I'll give my opinions on them, but it could all use a good test drive before coming down on these.
To help the discussion I'll paste the basic algorithm I outlined:
- if the receiver has an own property of the same name, use that
- if the receiver's type matches an extension of the protocol, use the protocol method
- if the receiver has a method of the same name somewhere up the prototype chain, use that
- use the default if available
----- Case 1:
I have an object O of type Foo. A protocol P.
- O has the structure {x:5, foo:7};
- The protocol implements foo, but not specifically for Foo (only via .defaults). What happens? Does P.foo get invoked, or do we get an error because O.foo is not a function?
I can see most of your examples involve the interaction between the protocol method and a method supplied on the object itself. Clojure protocols don't have this type of interaction. I tossed them in because they seemed like they would work well with some of the examples brought up in the other thread. They definitely complicate things... I guess I would say that I would have this throw an error. Logically to me, either an error should be thrown because it would try to invoke foo on the object, or we should give up on the idea of falling back on the object. Skipping foo on the object because it's not a function seems too magical.
So here, O has an own property, so rule #1 applies and it blows up.
---- Case 2:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, foo:7};
- The protocol implements foo for Bar specifically (but not for Foo). What happens? (Similar to case above?)
Same as above - error (or drop the feature)
---- Case 3:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, y:7};
- Bar has a method foo
- The protocol implements foo for Bar specifically (but not for Foo). What happens? Does it invoke Bar.foo or P.foo?
Ah, yes, I had thought about this a bit, but it had never made it into my gist at all. Let me revise my algorithm:
4. Check up the type chain for a match
5. If the method is defined on the protocol default, use that
6. Otherwise error
Walking through Case 3:
- false, continue
- false, continue
- false, continue
- true - use Bar.foo
---- Case 4:
I have an object O of type Foo, Foo.prototype = new Bar. A protocol P.
- O has the structure {x:5, y:7};
- Bar has a method foo
- The protocol implements foo for Bar specifically (but not for Foo). What happens? Does it invoke Bar.foo or P.foo?
This looks the same as 3 to me, so Bar.foo
---- Case 5:
I have an object O of type Foo, I import two Protocols that implement a method Foo at the same specificity level - what happens?
This is an easy one and comes down to the benefits of making use of normal variables and lexical scoping. Remember that protocols don't affect the types at all, and the protocols exist completely independently of each other. If you imported two protocols A and B that each have a method 'foo' defined for type Foo, the only thing you would have to be careful of is redefining your variables accidentally.
import {foo} from 'A';
import {foo} from 'B';
This is going to be a problem no matter what you're doing. You are importing two things with the same name, trying to introduce two variables with the same name. There's no magic for protocols here - you can just alias your imports to avoid name clashes.
import {foo as fooA} from 'A';
import {foo as fooB} from 'B';
o::fooA();
o::fooB();
Alternatively, if you have the protocol itself, you can access the methods directly and do:
o::A.methods.foo();
o::B.methods.foo();
Which isn't as nice, but is an option.
On Tue, Oct 22, 2013 at 12:15 AM, Russell Leggett <russell.leggett at gmail.com> wrote:
I'll preface this by saying that I haven't made a formal proposal and
this isn't an actual library. You're doing a good job of spotting some undefined behavior which would probably be better defined by trying it out. I'll give my opinions on them, but it could all use a good test drive before coming down on these.
This is exactly what I'm trying to do - figuring out how it acts in different scenarios. I know better solutions come from actual usage, but at least I want to understand a behavior we can build from. I like this idea and it seems like something that can solve issues I have when coding.
I can see most of your examples involve the interaction between the
protocol method and a method supplied on the object itself... They definitely complicate things... I guess I would say that I would have this throw an error. ... Skipping foo on the object because its not a function seems too magical.
Skipping foo on the object because it is not a function is too magical in my opinion too. Working out how instance methods work in this scenario seems like quite the task to me. The step of putting the protocol between the object and the prototype sounds pretty hard to get right in particular. I don't like thinking of reading from the prototype (as in the case of fetching a method) any differently from reading from the own object. After all sharing functionality is at the very core of prototypical inheritance.
---- Case 3: Ah, yes, I had thought about this a bit, but it had never made it into my
gist at all. Let me revise my algorithm:
- true - use Bar.foo
This makes some sense. About scope resolution more generally I just want to make a note that in C# extension methods, the extension method is always the last candidate. It would check anywhere in the inheritance chain before attempting to evaluate the extension method. For example:
public class Bar
{
    public int GetFive() { return 5; }
}
public class Foo : Bar {}
public static class FooExt
{
    public static int GetFive(this Foo bar)
    {
        return 5555;
    }
    public static string ToString(this Foo bar)
    {
        return "Hi";
    }
}
static void Main(string[] args)
{
    Console.WriteLine((new Foo()).GetFive()); // this prints 5 - the instance method wins
    Console.WriteLine(new Foo());             // Uses the implementation of Object.ToString
}
This is a major difference between this proposal and C# extension methods. However, I'm not sure it's bad. Would you mind making an argument for prioritizing the protocol method over an object method?
>> I have an object O of type Foo, I import two `Protocol`s that implement a method `Foo` at the same specificity level - what happens?
> This is an easy one and comes down to the benefits of making use of normal variables and lexical scoping. Remember that protocols don't affect the types at all, and the protocols exist completely independently of each other.
Beautiful. This bit works out quite nicely.
I can see most of your examples involve the interaction between the protocol method and a method supplied on the object itself... They definitely complicate things... I guess I would say that I would have this throw an error. ... Skipping foo on the object because its not a function seems too magical.
Skipping foo on the object because it is not a function is too magical in my opinion too. Working out how instance methods work in this scenario seems like quite the task to me. The step of putting the protocol between the object and the prototype sounds pretty hard to get right in particular. I don't like thinking of reading from the prototype (as in the case of fetching a method) any differently from reading from the own object. After all sharing functionality is at the very core of prototypical inheritance.
Yeah, I'm not exactly married to it, but I liked the idea of being able to override at the instance level, therefore giving own properties highest priority.
---- Case 3: Ah, yes, I had thought about this a bit, but it had never made it into my gist at all. Let me revise my algorithm:
- true - use Bar.foo
This makes some sense. About scope resolution more generally I just want to make a note that in C# extension methods, the extension method is always the last candidate. It would check anywhere in the inheritance chain before attempting to evaluate the extension method. For example:
[C# example as above] This is a major difference between this proposal and C# extension methods. However, I'm not sure it's bad. Would you mind making an argument for prioritizing the protocol method over an object method?
Well I would start by pointing out that the two really are different mechanisms with different strengths and weaknesses. Protocols are more than a means of extension - they are also polymorphic and can really take advantage of the equivalent of interfaces across data types in addition to their extension. In a way, they are more similar to Haskell typeclasses than extension methods. There is also the obvious difference that protocols can be efficiently implemented in a dynamic language where extension methods cannot :)
So anyway, the reason for prioritizing protocols is that there is potential value in the protocol itself. The protocol is an interface - a contract which can be polymorphic over different types. The same way that two protocols can have methods with the same name but different semantics, it would make sense that a protocol could be defined and need to be applied to a type that already has the method. There is value in letting the protocol version override the type's built-in version. Because there is no ambiguity in the intent of using the protocol instead of the prototype, I think the protocol should win. C# uses uniform syntax, so the intent cannot be known - was the extension method intended, or the type's method? Even interfaces in the Java/C# worlds can't handle that type of clashing. If two interfaces have methods with the same name and signature, you can only have a single implementation. In those languages it is rarely a problem because of overloading, but with JavaScript it's just one name, one method.
I'm not opposed to simplifying the algorithm, and perhaps protocol methods should always take a backseat similar to C#, but that just doesn't seem right to me. If I create a protocol with a map method and I define it for Array, I would expect that to take priority, no? You're directly calling arr::map, not arr.map - maybe it's a ParallelCollection protocol and you want to override map to be parallel with web workers or something.
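As a sketch of that last point - ParallelCollection is completely made up, and I'm assuming the Protocol constructor form I sketch further down - deliberately shadowing Array.prototype.map through a protocol would look something like:

const ParallelCollection = new Protocol("map");

ParallelCollection.extend(Array, {
  map(fn) {
    // Imagine this farms the work out to web workers; a serial
    // stand-in keeps the sketch self-contained.
    return Array.prototype.map.call(this, fn);
  }
});

const {map} = ParallelCollection.methods;
[1, 2, 3]::map(x => x * 2); // the protocol's map wins over Array.prototype.map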
First off, this whole concept is brilliant. Thanks for sharing, Russell. Just a few comments inline...
On Tue, Oct 22, 2013 at 2:13 AM, Russell Leggett <russell.leggett at gmail.com> wrote:
I can see most of your examples involve the interaction between the protocol method and a method supplied on the object itself... They definitely complicate things... I guess I would say that I would have this throw an error. ... Skipping foo on the object because its not a function seems too magical.
Skipping foo on the object because it is not a function is too magical in my opinion too. Working out how instance methods work in this scenario seems like quite the task to me. The step of putting the protocol between the object and the prototype sounds pretty hard to get right in particular. I don't like thinking of reading from the prototype (as in the case of fetching a method) any differently from reading from the own object. After all sharing functionality is at the very core of prototypical inheritance.
Yeah, I'm not exactly married to it, but I liked the idea of being able to override at the instance level, therefore giving own properties highest priority.
I think the idea of overriding at the instance level is appealing too, but doesn't dispatching on a string key open the door to the kinds of naming conflicts the concept so elegantly solves otherwise? One possible solution might be for the protocol to also expose a symbol for each method that could be used to override at the instance level.
I'd go further and suggest that it should walk the prototype too, since adding a specific symbol from a protocol would be a very intentional action. This might be a bit of a perf burden in naive implementations but it seems like the right thing -- there's something a little off about intervening between instance and prototype lookup. The whole point of the prototype is that it's a default logic -- attempting to short circuit this just smells wrong.
---- Case 3: Ah, yes, I had thought about this a bit, but it had never made it into my gist at all. Let me revise my algorithm:
- true - use Bar.foo
This makes some sense. About scope resolution more generally I just want to make a note that in C# extension methods, the extension method is always the last candidate. It would check anywhere in the inheritance chain before attempting to evaluate the extension method. For example:
[C# example as above] This is a major difference between this proposal and C# extension methods. However, I'm not sure it's bad. Would you mind making an argument for prioritizing the protocol method over an object method?
Well I would start by pointing out that the two really are different mechanisms with different strengths and weaknesses. Protocols are more than a means of extension - they are also polymorphic and can really take advantage of the equivalent of interfaces across data types in addition to their extension. In a way, they are more similar to Haskell typeclasses than extension methods. There is also the obvious difference that protocols can be efficiently implemented in a dynamic language where extension methods cannot :)
So anyway, the reason why prioritizing protocols is because there is potential value in the protocol itself. The protocol is an interface - a contract which can be polymorphic over different types. The same way that two protocols can have methods with the same name but different semantics, it would make sense that a protocol could be defined, and need to be applied to a type that already has the method. There is value for the protocol version to override the type's built in version. Because there is no ambiguity in the intent of using the protocol instead of the prototype, I think protocol should win. C# uses uniform syntax so the intent cannot be known if the extension method was intended vs the type's method. Even interfaces in the Java/C# worlds can't handle that type of clashing. If two interfaces have methods with the same name and signature, you can only have a single implementation. In those languages it is rarely a problem because of the ability for overloading, but with JavaScript its just one name, one method.
I'm not opposed to simplifying the algorithm, and perhaps protocol methods should always take a backseat similar to C#, but that just doesn't seem right to me. If I create a protocol with a map method and I define it for Array, I would expect that to take priority, no? You're directly calling arr::map not arr.map - maybe its a ParallelCollection protocol and you want to override map to be parallel with web workers or something.
I agree that arr.map is questionable for protocol walking, but it's just as questionable on the instance because of possible name conflicts!
If override used symbols, on the other hand, something like arr[myProtocol.symbols.map] makes it clear the intent was to override arr::map (assuming map is derived from myProtocol in this scope, obviously). This also seems to beg for a prototype walk for myProtocol.symbols.map, finally falling back on the default implementation given for the protocol.
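A quick sketch of what I mean - assuming the protocol exposes one unique symbol per method as myProtocol.symbols, which is just my guess at an API:

const {map} = myProtocol.methods;
const arr = [1, 2, 3];

// Instance-level override keyed by the protocol's own symbol - there is no way
// to accidentally clobber some other protocol's "map".
arr[myProtocol.symbols.map] = function (fn) {
  console.log("custom map just for this instance");
  return Array.prototype.map.call(this, fn);
};

arr::map(x => x + 1); // dispatch finds the symbol-keyed override first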
This bridges the FP and OOP worlds beautifully -- it's a clean and pleasant syntax for intensional semantics.
Russell, thanks for the reply, it clarified a lot. I just wanted to mention that I did not bring up C# extension methods to suggest this behavior for protocols but just to illustrate how a different system for addressing a similar problem (in a different environment) does it. I do not think Protocols should behave this way :)
On Tue, Oct 22, 2013 at 9:13 AM, Russell Leggett <russell.leggett at gmail.com> wrote:
I don't like thinking of reading from the prototype (as in the case of
fetching a method) any differently from reading from the own object. After all sharing functionality is at the very core of prototypical inheritance.
Yeah, I'm not exactly married to it, but I liked the idea of being able to
override at the instance level, therefore giving own properties highest priority.
I think there is a very specific problem Protocols attempt to solve. I think that problem is sharing and adding functionality to a type in a scoped manner. Our other mechanisms for sharing functionality, prototypical inheritance and mixins do not scope well. What protocols give us is the ability to share functionality without having to modify the object. I think the syntax is also really nice and clever. (so far I'm not saying anything new)
The way method resolution on the object itself works right now in protocols sounds pretty complicated though. What if we make it a part of the protocol instead? What if protocols had a way to define preferring the object implementation? Don't we already have that in Protocols?
MyProtocol.extend(Array, {map:Array.prototype.map}); // sorry if that's incorrect syntax
Doesn't this let me define that when encountering an array I prefer its own implementation over the protocol's? What does it really cost us to drop checking the object and its prototype, and just always use the protocol method?
If anything, I'd use the object's own properties and prototype chain as a last fallback after protocol options have been exhausted. I don't really see an obvious use case where I want to polymorphically prefer the object's implementations over the protocol's, and if I do, I can just use Protocol.extend to specify that in that case.
The only other thing that bothers me in the spec is that you specify by classes, which sounds a bit problematic to me. I don't see an obvious behavioral alternative though (how do I say "Array-like"?), and having a default implementation kind of addresses that.
On Tue, Oct 22, 2013 at 5:35 PM, Dean Landolt <dean at deanlandolt.com> wrote:
I think the idea of overriding at the instance level is appealing too,
but doesn't dispatching on a string key open the door to the kinds of naming conflicts the concept so elegantly solves otherwise?
I agree that arr.map is questionable for protocol walking, but it's just
as questionable on the instance because of possible name conflicts!
What naming conflicts do you see here?
On Tue, Oct 22, 2013 at 11:26 AM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
Russell, thanks for the reply, it clarified a lot. I just wanted to mention that I did not bring up C# extension methods to suggest this behavior for protocols but just to illustrate how a different system for addressing a similar problem (in a different environment) does it. I do not think Protocols should behave this way :)
On Tue, Oct 22, 2013 at 9:13 AM, Russell Leggett < russell.leggett at gmail.com> wrote:
I don't like thinking of reading from the prototype (as in the case of fetching a method) any differently from reading from the own object. After all sharing functionality is at the very core of prototypical inheritance. Yeah, I'm not exactly married to it, but I liked the idea of being able to override at the instance level, therefore giving own properties highest priority.
I think there is a very specific problem Protocols attempt to solve. I think that problem is sharing and adding functionality to a type in a scoped manner. Our other mechanisms for sharing functionality, prototypical inheritance and mixins do not scope well. What protocols give us is the ability to share functionality without having to modify the object. I think the syntax is also really nice and clever. (so far I'm not saying anything new)
The way method resolution on the object itself works right now in protocols sounds pretty complicated though. What if we make it a part of the protocol instead? What if protocols had a way to define preferring the object implementation? Don't we already have that in Protocols?
MyProtocol.extend(Array, {map:Array.prototype.map}); // sorry if that's incorrect syntax
Doesn't this let me define that when encountering an array I prefer its own implementation over the protocol's? What does it really cost us to drop checking the object and its prototype rather than using the protocol method always?
If anything, I'd use the object own properties and prototype chain as a last fallback after protocol options have been exhausted. I don't really see an obvious use case where I want to polymorphically prefer the object implementations over the protocols and if I do, I can just use Protocol.extend to specify that in that case.
The only other thing that bothers me in the spec is that you specify by classes which sounds a bit problematic to me (I don't see an obvious behavioral way though (how do I say "Array like"?), and having a default implementation kind of addresses that).
On Tue, Oct 22, 2013 at 5:35 PM, Dean Landolt <dean at deanlandolt.com> wrote:
I think the idea of overriding at the instance level is appealing too, but doesn't dispatching on a string key open the door to the kinds of naming conflicts the concept so elegantly solves otherwise? I agree that arr.map is questionable for protocol walking, but it's just as questionable on the instance because of possible name conflicts!
What naming conflicts do you see here?
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
Dean Landolt <dean at deanlandolt.com> wrote:
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
I'm not sure I understand the example. What does a Cowboy's draw method do?
Is it a specification of the Canvas protocol's draw? (In that case .extending the protocol seems to solve it.) If you have a more concrete use case that would really help.
I don't see how this is any different from other variables and general naming conflict issues when destructuring.
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
Yes, I'm liking this idea. Protocol always first - override through a symbol. Honestly, the more I think about it, the more I think overriding won't happen much and therefore isn't a huge problem, so making it more specific through a symbol is not a bad idea.
Last question - what about the priority of the defaults? Are they still prioritized over the prototype? I was worried at first about unintentional clobbering the other way, but then realized that it's easy to check for the method in the default if you want to prioritize the prototype over the default.
Cowboy.defaults({
  draw(){
    if(typeof this.draw === "function"){
      this.draw();
    }else{
      //something here
    }
  }
});
On Tue, Oct 22, 2013 at 12:19 PM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
Dean Landolt <dean at deanlandolt.com> wrote:
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
I'm not sure I understand the example. What does a Cowboy's draw method do? Is it a specification of the Canvas protocol's draw?
They are two entirely different protocols.
(In that case .extending the protocol seems to solve it.) If you have a more concrete use case that would really help.
Picture any two protocols that share a method name. Think of another protocol with a "map" method that means something entirely different from Collections.methods.map. The specifics aren't important -- just the idea that these are two independent protocols. You don't want to override a method of one and accidentally override the other.
I don't see how this is any different from other variables and general naming conflict issues when destructuring.
The difference is that protocols purport to solve this confusion problem -- it's one of their primary motivations.
On Tue, Oct 22, 2013 at 12:44 PM, Russell Leggett <russell.leggett at gmail.com> wrote:
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
Yes, I'm liking this idea. Protocol always first - override through a symbol. Honestly, the more I think about it, the more I think overriding won't happen much and therefore isn't a huge problem, so making it more specific through a symbol is not a bad idea.
Last question - what about the priority of the defaults? Are they still prioritized over prototype? I was worried at first about unintentional clobbering the other way, but then realized that its easy to check for the method in the default if you want to prioritize the prototype over the default.
Cowboy.defaults({
  draw(){
    if(typeof this.draw === "function"){
      this.draw();
    }else{
      //something here
    }
  }
});
This is an interesting point -- the implementation could choose whether or not to dispatch to an instance, and how. At this point I wouldn't call them "defaults" since they'd always be run, and would be responsible for their own dispatching. I still think dispatching on strings would defeat one of the biggest advantages of protocols, but this approach is flexible enough to allow that. Also, it doesn't try to intercede between own and prototype lookup, which is much nicer.
On Tue, Oct 22, 2013 at 12:53 PM, Dean Landolt <dean at deanlandolt.com> wrote:
On Tue, Oct 22, 2013 at 12:44 PM, Russell Leggett < russell.leggett at gmail.com> wrote:
Say you have an object for which you want to implement the Cowboy and Canvas protocols (to borrow /be's favorite example). Both implement a "draw" method, but when you try to import from both protocols you'll naturally have to rename one or both. Now say you want to override Cowboy's draw method on an instance? You'll end up clobbering the Canvas protocol's draw method with the obviously wrong function. Not cool. This can be easily corrected with Symbols.
Yes, I'm liking this idea. Protocol always first - override through a symbol. Honestly, the more I think about it, the more I think overriding won't happen much and therefore isn't a huge problem, so making it more specific through a symbol is not a bad idea.
Last question - what about the priority of the defaults? Are they still prioritized over prototype? I was worried at first about unintentional clobbering the other way, but then realized that its easy to check for the method in the default if you want to prioritize the prototype over the default.
Cowboy.defaults({
  draw(){
    if(typeof this.draw === "function"){
      this.draw();
    }else{
      //something here
    }
  }
});
This is an interesting point -- the implementation could choose whether or not to dispatch to an instance, and how. At this point I wouldn't call them "defaults" since they'd always be run, and would be responsible for their own dispatching. I still think dispatching on strings would defeat one of the biggest advantages of protocols, but this approach is flexible enough to allow that. Also, it doesn't try to intercede between own and prototype lookup, which is much nicer.
The reason they are called defaults is that they are a fallback for when the protocol isn't specified for a specific type. Say I have two types A and B, and a protocol P. P has a method "foo".
P.defaults({
  foo: function(){
    console.log("default foo");
  }
});
P.extend(A, {
  foo: function(){
    console.log("A's foo");
  }
});
let {foo} = P.methods;
let a = new A();
a::foo(); //outputs 'A's foo'
let b = new B();
b::foo(); //outputs 'default foo'
Basically, the default is useful for:
- generic methods that only depend on other protocol methods (sketched below)
- methods that want to do introspection on the object manually instead of dispatching on type
- cases with a sensible backup
Just to be clear, default implementations are totally optional. A protocol can have zero or more of them. I had originally been thinking that the default should be last, allowing for a prototype to go first, but if the default goes first, then it has the opportunity to defer to the prototype or ignore the prototype.
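For example, here is a sketch of the first bullet - reusing the Collections protocol from the top of the thread and assuming its each method is in scope - a default map built only out of the protocol's own each:

const {each} = Collections.methods;

Collections.defaults({
  // Generic default built entirely out of another protocol method:
  // any type that implements each gets map for free.
  map(fn) {
    const result = [];
    this::each((value, key) => result.push(fn(value, key, this)));
    return result;
  }
});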
Revised algorithm:
- If receiver has protocol method symbol as a property, use that as override.
- Try to use protocol methods - start by checking receiver type mapping, then check up type hierarchy for any matches, and finally if no matches, use the default if defined.
- Finally, if no matches and no default, check prototype for method of same name.
Does that sound better?
On Tue, Oct 22, 2013 at 8:10 PM, Russell Leggett <russell.leggett at gmail.com> wrote:
Revised algorithm:
- If receiver has protocol method symbol as a property, use that as
override.
- Try to use protocol methods - start by checking receiver type mapping,
then check up type hierarchy for any matches, and finally if no matches, use the default if defined.
- Finally, if no matches and no default, check prototype for method of
same name.
Does that sound better?
Much :)
On Tue, Oct 22, 2013 at 2:34 PM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
On Tue, Oct 22, 2013 at 8:10 PM, Russell Leggett < russell.leggett at gmail.com> wrote:
Revised algorithm:
- If receiver has protocol method symbol as a property, use that as override.
- Try to use protocol methods - start by checking receiver type mapping, then check up type hierarchy for any matches, and finally if no matches, use the default if defined.
- Finally, if no matches and no default, check prototype for method of same name. Does that sound better?
Much :)
Actually, let me revise 3: 3. Finally, if no matches and no default, attempt to call a method of the same name (not affected by variable name or aliasing) on the receiver.
Something like this:
const P = new Protocol("foo");
// naive, non-native implementation of P.methods.foo would be something like
function foo(...args){
  if(P.symbols.foo in this){
    //P.symbols holds a symbol for each method
    return this[P.symbols.foo](...args);
  }else if(P.contains(Object.getPrototypeOf(this))){
    //contains and lookup might be backed by a weakmap or something,
    //but it would also need to go up the type chain
    let myFoo = P.lookup(Object.getPrototypeOf(this)).foo;
    return this::myFoo(...args);
  }else if(P.getDefaults().hasOwnProperty("foo")){
    let defaultFoo = P.getDefaults().foo;
    return this::defaultFoo(...args);
  }else{
    return this.foo(...args);
  }
}
If this seems acceptable, I'll update the gist.
On Tue, Oct 22, 2013 at 4:07 PM, Russell Leggett <russell.leggett at gmail.com> wrote:
On Tue, Oct 22, 2013 at 2:34 PM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
On Tue, Oct 22, 2013 at 8:10 PM, Russell Leggett < russell.leggett at gmail.com> wrote:
Revised algorithm:
- If receiver has protocol method symbol as a property, use that as override.
- Try to use protocol methods - start by checking receiver type mapping, then check up type hierarchy for any matches, and finally if no matches, use the default if defined.
- Finally, if no matches and no default, check prototype for method of same name. Does that sound better?
Much :)
Actually, let me revise 3: 3. Finally, if no matches and no default, attempt to call a method of the same name (not affected by variable name or aliasing) on the receiver.
Something like this:
const P = new Protocol("foo");
// naive, non-native implementation of P.methods.foo as given above
If this seems acceptable, I'll update the gist.
This seems sensible, though it's a bit more flexibility than I'd prefer. What's not completely clear to me is whether this dispatching is defined by the protocol method implementation or whether it's something that's standardized? If the latter, I'd be concerned by all this flexibility. If it's the former (and you can just grab a dispatch implementation from some module) I guess it doesn't matter.
So *if* the implementation controls the dispatching, the point I was trying to make about the "default" method not really being default is just that it could just as well be inlined. Sure, it'd be nicer if it was defined separately so they can be extracted and used independently, but assuming custom dispatch then defaults are really up to your dispatch algorithm, right? Do I have this about right?
On Tue, Oct 22, 2013 at 4:50 PM, Dean Landolt <dean at deanlandolt.com> wrote:
On Tue, Oct 22, 2013 at 4:07 PM, Russell Leggett < russell.leggett at gmail.com> wrote:
On Tue, Oct 22, 2013 at 2:34 PM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
On Tue, Oct 22, 2013 at 8:10 PM, Russell Leggett < russell.leggett at gmail.com> wrote:
Revised algorithm:
- If receiver has protocol method symbol as a property, use that as override.
- Try to use protocol methods - start by checking receiver type mapping, then check up type hierarchy for any matches, and finally if no matches, use the default if defined.
- Finally, if no matches and no default, check prototype for method of same name. Does that sound better?
Much :)
Actually, let me revise 3: 3. Finally, if no matches and no default, attempt to call a method of the same name (not affected by variable name or aliasing) on the receiver.
Something like this:
const P = new Protocol("foo");
// naive, non-native implementation of P.methods.foo as given above
If this seems acceptable, I'll update the gist.
This seems sensible, though it's a bit more flexibility than I'd prefer. What's not completely clear to me is whether this dispatching is defined by the protocol method implementation or whether it's something that's standardized? If the latter, I'd be concerned by all this flexibility. If it's the former (and you can just grab a dispatch implementation from some module) I guess it doesn't matter.
So *if* the implementation controls the dispatching, the point I was trying to make about the "default" method not really being default is just that it could just as well be inlined. Sure, it'd be nicer if it was defined separately so they can be extracted and used independently, but assuming custom dispatch then defaults are really up to your dispatch algorithm, right? Do I have this about right?
I'm afraid that I'm a little lost by your confusion, so I'll just do the best I can to answer your questions. First, the foo function I wrote out was simply a bit of code to specify the algorithm I was thinking of in more concrete terms. And again, it's pretty naive - I was shooting for simpler semantics rather than maximum efficiency. So, to try and clarify a little further:
const P = new Protocol("foo");
I picture the protocol constructor taking a variable number of string arguments, or possibly an object similar to Object.create - I went with strings for now because I couldn't think of what else you would need other than names. The resulting object would be an instance of Protocol. For each string argument you provided the constructor, the protocol would generate a special protocol method, and also an associated symbol. When you are calling the foo method in this case, it has to call an actual method first (unless there's native support) and that method has to do the dispatch to your supplied protocol method, the default method, or attempt to call the method on the instance. I was expecting the protocol library to completely handle this dispatch, so that foo function I created above would basically be internal to the protocol, not something written by the user of the library. Not sure if that clears anything up.
To answer your question more directly, the dispatching would be internal to protocols. If protocols were done as a library, it would be dictated by the library author, if it was native, then it would follow a spec and be implemented by the browser.
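To pin that down a bit, here is a minimal sketch of what the library side could look like - just an illustration of the semantics above with made-up internals, wiring the generated methods to the naive dispatch from the revised algorithm:

class Protocol {
  constructor(...names) {
    this.methods = {};          // name -> generated dispatching method
    this.symbols = {};          // name -> unique symbol for instance-level overrides
    this.impls = new WeakMap(); // prototype -> { name: implementation, ... }
    this.defaultImpls = {};
    const protocol = this;
    for (const name of names) {
      const symbol = Symbol(name);
      this.symbols[name] = symbol;
      // The generated protocol method does the dispatch described above.
      this.methods[name] = function (...args) {
        if (symbol in this) return this[symbol](...args);   // 1. symbol override ("in" walks the prototype too)
        let proto = Object.getPrototypeOf(this);
        while (proto !== null) {                             // 2. walk the type chain
          const impls = protocol.impls.get(proto);
          if (impls && impls[name]) return impls[name].apply(this, args);
          proto = Object.getPrototypeOf(proto);
        }
        const dflt = protocol.defaultImpls[name];            // 3. protocol default
        if (dflt) return dflt.apply(this, args);
        return this[name](...args);                          // 4. fall back to the receiver's own method
      };
    }
  }
  extend(type, impls) { this.impls.set(type.prototype, impls); return this; }
  defaults(impls) { Object.assign(this.defaultImpls, impls); return this; }
}

// Usage, as in the examples in this thread:
const P = new Protocol("foo");
const {foo} = P.methods;
P.extend(Array, { foo() { return "array foo"; } });
[1, 2]::foo(); // "array foo" - with ::, this is just foo.call([1, 2])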
Yes, this looks solid and definitely like something I'll use. I'll try to go through use cases and find problems during the weekend.
What do you think would be the fastest way to get a prototype working to play with?
On Wed, Oct 23, 2013 at 2:57 PM, Benjamin (Inglor) Gruenbaum < inglor at gmail.com> wrote:
Yes, this looks solid and definitely like something I'll use. I'll try to go through use cases and find problems during the weekend.
What do you think would be the fastest way to get a prototype working to play with?
Well, the library itself could be implemented now, but without "::" you wouldn't get the nice syntax. I've been looking into Traceur a bit to see how hard it would be to add "::" - it's some fairly simple syntax sugar.
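For reference, the sugar itself is tiny - roughly (this is just how the operator is used in this thread, not a spec), a transform would rewrite:

arr::map(x => x + 2);        // into: map.call(arr, x => x + 2)
context::iterator(v, i, o);  // into: iterator.call(context, v, i, o)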
I thought you may find it interesting that a while back I wrote a library to fulfill my desire for Clojure-like protocols & wrote a blog post about it: jeditoolkit.com/2012/03/21/protocol-based-polymorphism.html#post, Gozala/protocol
In the process of using it I discovered that it was not very javascripty & did another take on this in the form of a different library that lets you create functions and provide their polymorphic implementations: Gozala/method
I & my team members found this really useful in a real project; it let us separate contracts between components that share some interface but are not necessarily related. Initially the library generated unique names for each function that you create, although I hope to use private symbols once they're available:
var each = method()
In a nutshell it would do something along these lines:
const method = () => {
  const id = Math.random().toString(32).substr(2)
  const f = (self, ...args) => self[id](self, ...args)
  f.toString = () => id
  return f
}
Although the dispatch is a little more complicated in order to incorporate host objects and default implementations. The library also does not mutate built-ins; instead it keeps internal hashes for them. Anyhow, this enables defining implementations for this function per type:
MyIterable.prototype[each] = (myIterable, f) => ...
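Using just the sketch above (not necessarily the exact API of Gozala/method), a complete round trip would be something like:

const each = method()          // from the sketch above

function Range(n) { this.n = n }

// The computed key coerces `each` to its unique id string via toString.
Range.prototype[each] = (range, f) => {
  for (let i = 0; i < range.n; i++) f(i)
}

each(new Range(3), console.log) // logs 0, 1, 2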
Unfortunately later I had to move towards the method("your-own-unique-name") approach (although the old one is still there), because users on node.js would constantly run into issues because of npm's nutty habit of duplicating dependencies. That caused copies of the same library to have each functions with different identifiers :( I'm afraid we'll face the same issues with private symbols as well.
A while ago I also proposed that TC39 consider a small change to private symbols such that created symbols would actually be invokable:
import Symbol from "@symbol";
const each = new Symbol("each", true);
each(iterator, x => x + 1)
Where the above line would desugar to:
iterator[each](iterator, x => x + 1)
-- Irakli Gozalishvili Web: www.jeditoolkit.com
Russell Leggett <russell.leggett at gmail.com> wrote:
Very interesting.
what's "the default" in #4? The protocol's default? What's the behavior if no matching method is found?
Also, can anyone explain why this solves the performance problem scoped object extensions have? It still seems like it would have to check the environment for protocols and then check the methods on all available protocols and do type matching to the type of the method.
Collections.extend(Array) seems awfully similar to an array extension; how does the :: operator resolve the need for expensive lookup, can you explain that to me?