Direct proxies update
On 24/11/2011 10:36, Tom Van Cutsem wrote:
Hi,
As a follow-up to last week's TC39 meeting, I rearranged things on the wiki to reflect our current thinking on proxies. The previous Proxy API is now superseded by the direct proxies API harmony:direct_proxies. "var proxy = Proxy(target, handler); It is not necessary to use |new| to create new proxy objects." Just to be sure: does this mean that "Proxy(target, handler)" and "new Proxy(target, handler)" produce the same result, like Array, for instance?
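To illustrate what I mean (a minimal sketch; the empty handler is just a placeholder):

var target = {};
var handler = {}; // no traps overridden: everything forwards to target
var p1 = Proxy(target, handler);     // called as a function
var p2 = new Proxy(target, handler); // called as a constructor
// the question: are p1 and p2 equivalent, the way Array(3) and
// new Array(3) produce the same kind of object?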
freeze/seal/preventExtensions: For this particular case, I do not want to capture which function was called, but rather the programmer's intention to set [[Extensible]] to false. If the proxy API does not provide a single trap for this particular operation, I have no way to create this single trap myself. This can be annoying if another built-in sets [[Extensible]] to false. Even if my handler is a proxy, I can't know whether the new trap is going to set [[Extensible]] to false.
enumerate/keys/getOwnPropertyNames: These could be merged as well (maybe not enumerate, which traverses the prototype?) into one trap with an argument. Or maybe two traps: one dedicated to enumeration of own properties, one for proto-climbing properties.
I think a mention should be made of for-of loops (harmony:iterators#iteration_via_proxies) in the proxy strawman, just to keep it in mind (and in the reflect API?). Also, on the iterator page, the iterate trap should default to the [[Iterate]] internal method of the target object rather than the enumerate trap of the proxy.
"Non-interceptable operations" => I think that some are lacking:
- Object.prototype.toString.call(proxy) (which reads the [[Class]])
- "" + proxy (which reads the [DefaultValue])
- (all operations which include a call to ToPrimitive)
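For instance (a sketch with a hypothetical logging handler, to show what "non-interceptable" means here):

var handler = {
  get: function(target, name, receiver) {
    console.log('get trap: ' + name);
    return Reflect.get(target, name, receiver);
  }
};
var proxy = Proxy({}, handler);
Object.prototype.toString.call(proxy); // reads [[Class]] directly; no trap fires
// "" + proxy goes through ToPrimitive/[[DefaultValue]]; there is no dedicated
// trap for that operation as a whole, only the ordinary gets of valueOf/toString.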
I put my own notes on the discussion of direct proxies at the meeting on the old strawman page: strawman:direct_proxies#feedback_and_discussion.
Work in progress:
- Definition of a built-in handler that enables proxy handlers to still inherit all derived trap implementations, as suggested at the meeting: harmony:virtual_object_api
In the non-normative implementation, there is no import of the @reflect module (but it's used). Though the implementation is non-normative, I'd like to provide some feedback on it:
- "has" default derived trap: => I think the last line should be Reflect.has(proto, name)
- "enumerate" default derived trap: "// FIXME: filter duplicates from enumerableProps" => I have seen additions to Math, String.prototype and Number discussed, but nothing about Array.prototype. In a thread, Dmitry suggested an Array.prototype.unique method. It could be handy here, for the non-native implementation but for native implementations as well (see the sketch below). For inherited properties, why not call Reflect.enumerate(proto)?
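As a sketch of what such a helper could do here (a plain function rather than a prototype extension; the name and usage are assumptions, not part of any proposal):

function unique(names) {
  // keep the first occurrence of each name, drop later duplicates
  var seen = Object.create(null);
  return names.filter(function (name) {
    if (seen[name]) { return false; }
    seen[name] = true;
    return true;
  });
}
// e.g. in the "enumerate" derived trap:
// return unique(ownNames.concat(Reflect.enumerate(proto)));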
- Definition of a standard "@reflect" module: harmony:reflect_api One observation I made while working on this module is that it's probably a bad idea to use keywords as trap names (delete, new), since such names cannot be straightforwardly imported/exported from modules.
How so?
We should probably consider using the names 'deleteProperty' and 'construct' instead.
If that's really the case, I would prefer changing identifiers that can be imported/exported from modules rather than changing names of the Reflect API.
Once again, great work :-)
On Thu, Nov 24, 2011 at 9:45 AM, David Bruant <bruant.d at gmail.com> wrote:
- Definition of a standard "@reflect" module: harmony:reflect_api One observation I made while working on this module is that it's probably a bad idea to use keywords as trap names (delete, new), since such names cannot be straightforwardly imported/exported from modules.
How so?
You can't do the following:
import {new, delete} from "@reflect";
because you can't bind 'new' and 'delete'. Even if this were allowed, then 'new(...)' would still be a syntax error.
We should probably consider using the names 'deleteProperty' and 'construct' instead.
If that's really the case, I would prefer changing identifiers that can be imported/exported from modules rather than changing names of the Reflect API.
It's really not about modules -- these names are reserved, and can't be made into bindings.
On 24/11/2011 16:04, Sam Tobin-Hochstadt wrote:
On Thu, Nov 24, 2011 at 9:45 AM, David Bruant <bruant.d at gmail.com> wrote:
- Definition of a standard "@reflect" module: harmony:reflect_api One observation I made while working on this module is that it's probably a bad idea to use keywords as trap names (delete, new), since such names cannot be straightforwardly imported/exported from modules.
How so? You can't do the following:
import {new, delete} from "@reflect";
because you can't bind 'new' and 'delete'. Even if this were allowed, then 'new(...)' would still be a syntax error.
Oh ok... It actually is more an issue of destructuring than of modules themselves. Interestingly, it means that as soon as the module syntax is out there, there will be pretty much no way to add a new reserved keyword (ever?), because someone may be using the identifier, and adding the reserved keyword would break the module import.
We should probably consider using the names 'deleteProperty' and 'construct' instead.
So this sounds like a good idea to work around the issue.
If that's really the case, I would prefer changing identifiers that can be imported/exported from modules rather than changing names of the Reflect API. It's really not about modules -- these names are reserved, and can't be made into bindings. "import Reflect from "@reflect"" would work, though, right? The difference is interesting.
On Nov 24, 2011, at 16:37, David Bruant wrote:
We should probably consider using the names 'deleteProperty' and 'construct' instead. So this sounds like a good idea to work around the issue.
Other possibility: a prefix, e.g. op_delete and op_new, or opDelete and opNew.
On Thu, Nov 24, 2011 at 10:37 AM, David Bruant <bruant.d at gmail.com> wrote:
On 24/11/2011 16:04, Sam Tobin-Hochstadt wrote:
On Thu, Nov 24, 2011 at 9:45 AM, David Bruant <bruant.d at gmail.com> wrote:
- Definition of a standard "@reflect" module: harmony:reflect_api One observation I made while working on this module is that it's probably a bad idea to use keywords as trap names (delete, new), since such names cannot be straightforwardly imported/exported from modules.
How so? You can't do the following:
import {new, delete} from "@reflect";
because you can't bind 'new' and 'delete'. Even if this were allowed, then 'new(...)' would still be a syntax error. Oh ok... It actually is more an issue of destructuring than of modules themselves. Interestingly, it means that as soon as the module syntax is out there, there will be pretty much no way to add a new reserved keyword (ever?), because someone may be using the identifier, and adding the reserved keyword would break the module import.
This is already the case in ES5 -- someone might be using 'module' as a variable name, as in:
var module = 7;
and thus making 'module' a reserved word would break this code. We're dealing with this for ES.next by a combination of an opt-in and contextual reserved words.
On Nov 24, 2011, at 7:37 AM, David Bruant wrote:
On 24/11/2011 16:04, Sam Tobin-Hochstadt wrote:
You can't do the following:
import {new, delete} from "@reflect";
because you can't bind 'new' and 'delete'. Even if this were allowed, then 'new(...)' would still be a syntax error. Oh ok... It actually is more an issue of destructuring than of modules themselves.
Sort of. It's not even really technically a problem with destructuring; we could allow that, but it would be useless, because you'd never be able to refer to them.
Interestingly, it means that as soon as we have the module syntax out there, there will be pretty much no way to add a new reserved keyword (ever?), because someone may be using the identifier and adding the reserved keyword would break the module import.
This has nothing to do with modules. Adding a reserved word is always backwards-incompatible because someone could already be using it as a variable. Modules don't change this situation at all.
"import Reflect from "@reflect""
Almost.
module Reflect from "@reflect";
You only use "import" to pull out exports from inside a module. (We've been experimenting with alternative syntaxes, btw. I'll report back on that soon.)
On Nov 24, 2011, at 1:36 AM, Tom Van Cutsem wrote:
...
- Definition of a standard "@reflect" module: harmony:reflect_api One observation I made while working on this module is that it's probably a bad idea to use keywords as trap names (delete, new), since such names cannot be straightforwardly imported/exported from modules. We should probably consider using the names 'deleteProperty' and 'construct' instead.
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
At allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design.
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I haven't pushed for adopting mirrors into ES.next because I thought we already had too much on the table. However, if we are going to create new reflection APIs then I think we should carefully consider the pros and cons of the mirrors style.
2011/11/24 David Bruant <bruant.d at gmail.com>
"var proxy = Proxy(target, handler); It is not necessary to use new to create new proxy objects." Just to be sure, it means that both "Proxy(target, handler)" and "new Proxy(target, handler)" produce the same result like Array for instance ?
Indeed.
freeze/seal/preventExtensions: For this particular case, I do not want to capture which function was called, but rather the programmer's intention to set [[Extensible]] to false. If the proxy API does not provide a single trap for this particular operation, I have no way to create this single trap myself. This can be annoying if another built-in sets [[Extensible]] to false. Even if my handler is a proxy, I can't know whether the new trap is going to set [[Extensible]] to false.
Not sure what you're after here. During the meeting, we reverted from protect(op) to splitting back into three traps as that allows freeze and seal to be turned into derived traps (in the virtual handler API). Also, AFAICT there is no built-in other than Object.preventExtensions/freeze/seal that sets [[Extensible]] to false. When using the VirtualHandler, you only need to override preventExtensions as freeze and seal depend on it.
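To illustrate the "derived" part, roughly (a sketch only; it ignores accessor properties and assumes these trap signatures):

// inside a VirtualHandler-style handler:
freeze: function(target) {
  var names = this.getOwnPropertyNames(target);
  for (var i = 0; i < names.length; i++) {
    // make each own property non-configurable and non-writable
    this.defineProperty(target, names[i],
                        { configurable: false, writable: false });
  }
  // the only [[Extensible]]-affecting step goes through preventExtensions,
  // so overriding that single trap is enough
  return this.preventExtensions(target);
}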
enumerate/keys/getOwnPropertyNames: These could be merged as well (maybe not enumerate, which traverses the prototype?) into one trap with an argument. Or maybe two traps: one dedicated to enumeration of own properties, one for proto-climbing properties.
Our choice was to move away from grouping traps in this way. It's more consistent (no irregularities in the API), and in cases where you do want to distinguish between the two, it's a bit cleaner to separate it out into two methods rather than switching on the string.
I think a mention should be made of for-of loops (harmony:iterators#iteration_via_proxies) in the proxy strawman, just to keep it in mind (and in the reflect API?). Also, on the iterator page, the iterate trap should default to the [[Iterate]] internal method of the target object rather than the enumerate trap of the proxy.
Indeed, I should hook up proxies to iterators.
"Non-interceptable operations" => I think that some are lacking:
- Object.prototype.toString.call(proxy) (which reads the [[Class]])
- "" + proxy (which reads the [DefaultValue])
- (all operations which include a call to ToPrimitive)
Indeed, but it wasn't my intention to be exhaustive.
I put my own notes on the discussion of direct proxies at the meeting on the old strawman page: <strawman:direct_proxies#feedback_and_discussion>.
Work in progress:
- Definition of a built-in handler that enables proxy handlers to still inherit all derived trap implementations, as suggested at the meeting: <harmony:virtual_object_api>
In the non-normative implementation, there is no import of the @reflect module (but it's used).
Actually, the goal is for VirtualHandler and that implementation to be part of the @reflect module itself.
Though the implementation is non-normative, I'd like to provide some feedback on it:
- "has" default derived trap: => I think the last line should be Reflect.has(proto, name)
Fixed. Good catch!
- "enumerate" default derived trap: " // FIXME: filter duplicates from enumerableProps" => I have seen discussed additions to Math, String.prototype, Number, but not about Array.prototype. In a thread, Dmitry suggested an Array.prototype.unique method. This one could be handy here. For the non-native implementation but also for native implementations as well. For inherited properties, why not calling Reflect.enumerate(proto) ?
Right again. The difference is important: if proto is itself a proxy, its enumerate trap will be invoked rather than its getOwnPropertyNames trap.
This is a nice pattern: get, set, enumerate and has (all the traps that can be invoked on proxies-as-prototypes) apply the corresponding Reflect operation on the proxy's own prototype if the operation needs to proceed up the prototype chain.
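Spelled out for "has" (a sketch of the pattern; it assumes the handler also implements the fundamental getOwnPropertyDescriptor trap):

// inside a VirtualHandler-style handler:
has: function(target, name) {
  var desc = this.getOwnPropertyDescriptor(target, name);
  if (desc !== undefined) { return true; }
  var proto = Reflect.getPrototypeOf(target);
  // Reflect.has on the prototype: if proto is itself a proxy,
  // this invokes its own has trap
  return proto !== null && Reflect.has(proto, name);
}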
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
At allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design.
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name)
In the common case of forwarding an intercepted operation to a target object, a mirror API requires the allocation of a mirror on the target, just to be able to invoke the proper method, only for that mirror to be discarded right away.
I don't see mirrors as being in conflict with this API though. Mirrors can be perfectly layered on top.
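For example, something like (a hypothetical sketch; Mirror.on is a made-up name):

var Mirror = {
  on: function(object) {
    // an OO facade whose methods delegate to the functional Reflect API
    return {
      has: function(name) { return Reflect.has(object, name); },
      get: function(name) { return Reflect.get(object, name); }
      // ...and so on for the other Reflect operations
    };
  }
};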
I haven't pushed for adopting mirrors into ES.next because I thought we already had too much on the table. However, if we are going to create new reflection APIs then I think we should carefully consider the pros and cons of the mirrors style.
I don't understand why you think of the @reflect module as a "new" reflection API: all of the functionality in it (save for the VirtualHandler) was already present in the original Proxy proposal, where most of the Reflect.* methods were methods on the default ForwardingHandler. Putting them in a separate @reflect module seems the right thing to do now that we have a module system.
I'm sympathetic to mirrors, but I don't think it's an either/or story. A mirror-based API can be layered on top of the standard @reflect module. I'm not sure it needs to be standardized now though: the current API provides the minimum required functionality with minimum overhead.
On 24/11/2011 22:11, Tom Van Cutsem wrote:
2011/11/24 David Bruant <bruant.d at gmail.com>
freeze/seal/preventExtensions: For this particular case, I do not want to capture which function was called, but rather the programmer's intention to set [[Extensible]] to false. If the proxy API does not provide a single trap for this particular operation, I have no way to create this single trap myself. This can be annoying if another built-in sets [[Extensible]] to false. Even if my handler is a proxy, I can't know whether the new trap is going to set [[Extensible]] to false.
Not sure what you're after here. During the meeting, we reverted from protect(op) to splitting back into three traps as that allows freeze and seal to be turned into derived traps (in the virtual handler API). Also, AFAICT there is no built-in other than Object.preventExtensions/freeze/seal that sets [[Extensible]] to false. When using the VirtualHandler, you only need to override preventExtensions as freeze and seal depend on it.
enumerate/keys/getOwnPropertyNames: These could be merged as well (maybe not enumerate, which traverses the prototype?) into one trap with an argument. Or maybe two traps: one dedicated to enumeration of own properties, one for proto-climbing properties.
Our choice was to move away from grouping traps in this way. It's more consistent (no irregularities in the API), and in cases where you do want to distinguish between the two, it's a bit cleaner to separate it out into two methods rather than switching on the string.
I've been thinking about it more, and the fact that derived traps are reintroduced makes splitting the traps (both for [[Extensible]]:false and for enumeration) a good choice. I hadn't fully digested all the new things, but it all makes sense now.
I put my own notes on the discussion of direct proxies at the meeting on the old strawman page: <http://wiki.ecmascript.org/doku.php?id=strawman:direct_proxies#feedback_and_discussion>. Work in progress: - Definition of a built-in handler that enables proxy handlers to still inherit all derived trap implementations, as suggested at the meeting: <http://wiki.ecmascript.org/doku.php?id=harmony:virtual_object_api>
In the non-normative implementation, there is no import of the @reflect module (but it's used).
Actually, the goal is for VirtualHandler and that implementation to be part of the @reflect module itself.
Ooooh, right!
(...)
This is a nice pattern: get, set, enumerate and has (all the traps that can be invoked on proxies-as-prototypes) apply the corresponding Reflect operation on the proxy's own prototype if the operation needs to proceed up the prototype chain.
Exactly!
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
At https://github.com/allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design. At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name)
I have been thinking about this a lot and I don't find any advantage to "Mirror.on(object).op(...rest)" over "Reflect.op(object, ...rest)"... for local objects. After reading bracha.org/mirrors.pdf, I have realized that the mirror API aims at more than reflection on local objects: it aims at providing a uniform API for reflecting on other sorts of objects too, including, for instance, remote objects.
Unfortunately, I am not sure I can go further, because I haven't found a definition of what a remote object is and don't really know how reflecting on them differs from reflecting local objects. Among the questions:
- What is a remote object?
- How does it differ from a local object?
- Do you need a local object to "emulate" a remote object?
- Does reflecting on remote objects impose synchrony (waiting for the remote object to "respond" before returning the answer)?
In the common case of forwarding an intercepted operation to a target object, a mirror API requires the allocation of a mirror on the target, just to be able to invoke the proper method, only for that mirror to be discarded right away.
I agree this sounds like an overhead if the goal is to restrict the API to local objects, but if we want to use the same API for promises, it may be necessary.
I don't see mirrors as being in conflict with this API though. Mirrors can be perfectly layered on top.
I haven't pushed for adopting mirrors into ES.next because I thought we already had too much on the table. However, if we are going to create new reflection APIs then I think we should carefully consider the pros and cons of the mirrors style.
I don't understand why you think of the @reflect module as a "new" reflection API: all of the functionality in it (save for the VirtualHandler) was already present in the original Proxy proposal, where most of the Reflect.* methods were methods on the default ForwardingHandler. Putting them in a separate @reflect module seems the right thing to do now that we have a module system.
I think the difference is that the ForwardingHandler was limited by the restrictions imposed on proxies. The fact that the API is separated into a module decouples it from this limitation, enabling operations like Reflect.getPrototypeOf(object) or even Reflect.getClassString(object), which could not have been considered for the ForwardingHandler.
I'm sympathetic to mirrors, but I don't think it's an either/or story. A mirror-based API can be layered on top of the standard @reflect module. I'm not sure it needs to be standardized now though: the current API provides the minimum required functionality with minimum overhead.
Would it apply to promises the same way as well?
On 26/11/2011 01:52, David Bruant wrote:
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. (...)
I realized what that sentence meant yesterday, very late. And I also realized that everything Tom said was legitimate. A Mirror-style API ("an object-oriented API") can be built on top of the Reflect API ("a functional API", as I understand it). The opposite is true as well, but comes with an overhead. Maybe in the future it will be possible to optimize expressions like "Mirror.on(object).has('bla')" (used to implement Reflect.has(object, 'bla') if the Mirror-style API is the base), but it will always require some additional analysis. The opposite direction does not have this problem.
Consequently, regarding the built-in implementation, I would favor a functional API as well, unless the mirror API has advantages I am oblivious to.
On Nov 25, 2011, at 8:29 AM, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability then just writing proxy handlers I'd like us to consider a Mirrors style API. Otherwise I'm a concern will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
I don't think too many ways to do the same thing is desirable. We already have a number of reflection functions hung off of Object. Your proposal replicates most of those and adds others as functions in the @reflect module. Such duplication is probably unavoidable if we want to transition from the Object-based APIs. But if we also added a mirrors-based API that duplicates some of the same functionality, we would have three different ways to do some things: one "old way" and two "new ways". That seems like too many.
At allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design.
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name)
In the common case of forwarding an intercepted operation to a target object, a mirror API requires the allocation of a mirror on the target, just to be able to invoke the proper method, only for that mirror to be discarded right away.
Yes, I thought about this. One way to avoid the per call allocation is for a proxy to keep as part of its state an appropriate mirror instance on the target object. Proxies that need to do mirror based reflection would create the mirror when the target is set. Proxies that don't reflect don't need to capture such a mirror.
I don't see mirrors as being in conflict with this API though. Mirrors can be perfectly layered on top.
I haven't pushed for adopting mirrors into ES.next because I thought we already had too much on the table. However, if we are going to create new reflection APIs then I think we should carefully consider the pros and cons of the mirrors style.
I don't understand why you think of the @reflect module as a "new" reflection API: all of the functionality in it (save for the VirtualHandler) was already present in the original Proxy proposal, where most of the Reflect.* methods were methods on the default ForwardingHandler. Putting them in a separate @reflect module seems the right thing to do now that we have a module system.
The difference is that they are now being exposed as general-purpose reflection functions rather than being isolated as methods that are part of the Proxy subsystem.
I'm sympathetic to mirrors, but I don't think it's an either/or story. A mirror-based API can be layered on top of the standard @reflect module. I'm not sure it needs to be standardized now though: the current API provides the minimum required functionality with minimum overhead.
I think we should try to minimize redundancy in our API design. Too many ways to do the same thing causes confusion.
On Nov 26, 2011, at 11:52 AM, David Bruant wrote:
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
At allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design.
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name) I have been thinking about this a lot and I don't find any advantage to "Mirror.on(object).op(...rest)" over "Reflect.op(object, ...rest)"... for local objects. After reading bracha.org/mirrors.pdf, I have realized that the mirror API aims at more than reflection on local objects: it aims at providing a uniform API for reflecting on other sorts of objects too, including, for instance, remote objects.
Unfortunately, I am not sure I can go further, because I haven't found a definition of what a remote object is and don't really know how reflecting on them differs from reflecting local objects. Among the questions:
- What is a remote object?
- How does it differ from a local object?
- Do you need a local object to "emulate" a remote object?
- Does reflecting on remote objects impose synchrony (waiting for the remote object to "respond" before returning the answer)?
Did you look at my blog posts and the jsmirrors code? It includes an example of using a common mirror API to access both local objects and a serialized external object representation. Such a representation can easily be used to access live remote objects. In fact, on my to-do list is to extend jsmirrors to do so, for accessing objects in web workers.
On Nov 26, 2011, at 3:55 AM, David Bruant wrote:
On 26/11/2011 01:52, David Bruant wrote:
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com> At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. (...) I realized what that sentence meant yesterday, very late. And I also realized that everything Tom said was legitimate. A Mirror-style API ("an object-oriented API") can be built on top of the Reflect API ("a functional API", as I understand it). The opposite is true as well, but comes with an overhead. Maybe in the future it will be possible to optimize expressions like "Mirror.on(object).has('bla')" (used to implement Reflect.has(object, 'bla') if the Mirror-style API is the base), but it will always require some additional analysis. The opposite direction does not have this problem.
Consequently, regarding the built-in implementation, I would favor a functional API as well, unless the mirror API has advantages I am oblivious to.
I'm with you. JS has first class functions and objects, it is not an OOP-only or OOP-first language. The (dead? nearly) hand of Java weighed heavily on some parts, and methods make sense in many cases, but the cost of temporary objects shouldn't be imposed if a functional API at the lowest level suffices.
2011/11/28 Allen Wirfs-Brock <allen at wirfs-brock.com>
I don't think too many ways to do the same thing is desirable. We already have a number of reflection functions hung off of Object. Your proposal replicates most of those and adds others as functions in the @reflect module. Such duplication is probably unavoidable if we want to transition from the Object-based APIs. But if we also added a mirrors-based API that duplicates some of the same functionality, we would have three different ways to do some things: one "old way" and two "new ways". That seems like too many.
The duplication of existing Object.* reflection methods is unfortunate, but a direct consequence of evolutionary growth. I don't have any solutions for avoiding it.
In the common case of forwarding an intercepted operation to a target object, a mirror API requires the allocation of a mirror on the target, just to be able to invoke the proper method, only for that mirror to be discarded right away.
Yes, I thought about this. One way to avoid the per call allocation is for a proxy to keep as part of its state an appropriate mirror instance on the target object. Proxies that need to do mirror based reflection would create the mirror when the target is set. Proxies that don't reflect don't need to capture such a mirror.
That would work, although how does the proxy know which "mirror factory" to use? (if it uses the "default" one, there's no polymorphism and you might as well use the Reflect.* API)
I guess one could pass the proxy a mirror to the target, rather than a direct reference to the target itself. It still isn't as 'lean' though: Proxy(target,handler) vs. Proxy(Mirror.on(target), handler)
On 28/11/2011 01:07, Allen Wirfs-Brock wrote:
On Nov 26, 2011, at 11:52 AM, David Bruant wrote:
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
At https://github.com/allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design. At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name) I have been thinking about this a lot and I don't find any advantage to "Mirror.on(object).op(...rest)" over "Reflect.op(object, ...rest)"... for local objects. After reading bracha.org/mirrors.pdf, I have realized that the mirror API aims at more than reflection on local objects: it aims at providing a uniform API for reflecting on other sorts of objects too, including, for instance, remote objects.
Unfortunately, I am not sure I can go further, because I haven't found a definition of what a remote object is and don't really know how reflecting on them differs from reflecting local objects. Among the questions:
- What is a remote object?
- How does it differ from a local object?
- Do you need a local object to "emulate" a remote object?
- Does reflecting on remote objects impose synchrony (waiting for the remote object to "respond" before returning the answer)?
Did you look at my blog posts and the jsmirrors code? It includes an example of using a common mirror API to access both local objects and a serialized external object representation. Such a representation can easily be used to access live remote objects.
I agree, but I don't understand how you can use the same API for both local and remote objects. Or maybe you don't (retrieve the representation asynchronously and create the mirror once the representation has arrived)?
In fact, on my to-do list is to extend jsmirrors to do so, for accessing objects in web workers.
I'm looking forward to seeing your implementation.
On Nov 28, 2011, at 7:04 PM, Tom Van Cutsem wrote:
2011/11/28 Allen Wirfs-Brock <allen at wirfs-brock.com> I don't think too many ways to do the same thing is desirable. We already have a number of reflection functions hung off of Object. Your proposal replicates most of those and adds others as functions in the @reflect module. Such duplication is probably unavoidable if we want to transition from the Object-based APIs. But if we also added a mirrors-based API that duplicates some of the same functionality, we would have three different ways to do some things: one "old way" and two "new ways". That seems like too many.
The duplication of existing Object.* reflection methods is unfortunate, but a direct consequence of evolutionary growth. I don't have any solutions for avoiding it.
I agree. However, it would be desirable to minimize the number of additional layers of evolutionary growth.
In the common case of forwarding an intercepted operation to a target object, a mirror API requires the allocation of a mirror on the target, just to be able to invoke the proper method, only for that mirror to be discarded right away.
Yes, I thought about this. One way to avoid the per call allocation is for a proxy to keep as part of its state an appropriate mirror instance on the target object. Proxies that need to do mirror based reflection would create the mirror when the target is set. Proxies that don't reflect don't need to capture such a mirror.
That would work, although how does the proxy know which "mirror factory" to use? (if it uses the "default" one, there's no polymorphism and you might as well use the Reflect.* API)
The same way it knows which functions it needs to call. It is either hard coded or parameterized. For example:
Proxy(target, new WhateverHandler(target))
if WhateverHandler needs to create and retain a particular kind of mirror, it does so:
function WhateverHandler(target) {
this.mirror = NativeReflectionMirror.on(target);
}
or the responsibility for knowing what kind of mirror to use might reside in the Proxy factory: Proxy(surrogateTarget, new HandlerForMirror(RemoteMirror.for(remoteID)))
I guess one could pass the proxy a mirror to the target, rather than a direct reference to the target itself. It still isn't as 'lean' though: Proxy(target,handler) vs. Proxy(Mirror.on(target), handler)
Because of the invariant validation mechanism, the target needs to be a direct reference to a native object rather than a mirror.
If I understand the general usage model, then in the general case it may not be as lean as Proxy(target,handler) regardless of whether Mirrors are involved. If the handler needs to retain any per proxy instance state then a new handler is going to have to be instantiated for each Proxy instance in order to capture that state. Proxy(target, new MyHandler(args))
On Nov 29, 2011, at 2:50 AM, David Bruant wrote:
On 28/11/2011 01:07, Allen Wirfs-Brock wrote:
On Nov 26, 2011, at 11:52 AM, David Bruant wrote:
On 24/11/2011 22:29, Tom Van Cutsem wrote:
2011/11/24 Allen Wirfs-Brock <allen at wirfs-brock.com>
If we are going to have a @reflection module that is of broader applicability than just writing proxy handlers, I'd like us to consider a Mirrors-style API. Otherwise I'm concerned we will continue to have a proliferation of reflection APIs as we move beyond Proxies into other use cases.
I'm not sure I understand. Additional reflection functionality can easily be added to the @reflect module. It need not be exclusive to Proxies.
At allenwb/jsmirrors is a first cut of a mirrors API that I threw together earlier this year for JavaScript. I don't hold it up as a finished product but it could be a starting point for this sort of design.
At the core is a root question: whether we want to expose a functional or an object-oriented API for reflection functionality. These are two different styles, each of which is probably favored by a different subset of our user community. I suspect that everyone knows which sub-community I align with. The main argument for the OO style is that it allows creation of client code that can be oblivious to the underlying implementation of the API. This allows for more flexible client code that has greater potential for reuse.
I'm sympathetic to mirror-based APIs myself. However, note that a mirror-based API would require an extra allocation as opposed to the proposed API:
// Proposed API: Reflect.has(object, name)
// Mirror-style API: Mirror.on(object).has(name) I have been thinking about this a lot and I don't find any advantage to "Mirror.on(object).op(...rest)" over "Reflect.op(object, ...rest)"... for local objects. After reading bracha.org/mirrors.pdf, I have realized that the mirror API aims at more than reflection on local objects: it aims at providing a uniform API for reflecting on other sorts of objects too, including, for instance, remote objects.
Unfortunately, I am not sure I can go further, because I haven't found a definition of what a remote object is and don't really know how reflecting on them differs from reflecting local objects. Among the questions:
- What is a remote object?
In reflection discussions, it often means an object that exists in an isolated object domain, distinct from the object domain that needs to examine/manipulate the object. For example, think of objects within a web worker. Typically we are talking about objects of the same language in both domains.
The canonical example is a tool such as an object inspector that you would like to be able to target at either local objects (objects in the same object domain as the inspector implementation) or "remote" objects.
- How does it differ from a local object?
There is no direct way to reference a remote object (it is in a different object domain) so any primitive reflection APIs that are based upon local object references can't manipulate a remote object.
- Do you need a local object to "emulate" a remote object?
That is essentially what a local mirror on a remote object is. It makes a remote object accessible using the same reflection API that is used to access a local object. It accomplishes such uniform reference by adding a level of indirection to all reflection APIs, even local direct object accesses.
- Does reflecting on remote objects impose synchrony (waiting for the remote object to "respond" before returning the answer)?
Did you look at my blog posts and the jsmirrors code? It includes an example of using a common mirror API to access both local objects and a serialized external object representation. Such a representation can easily be used to access live remote objects. I agree, but I don't understand how you can use the same API for both local and remote objects. Or maybe you don't (retrieve the representation asynchronously and create the mirror once the representation has arrived)?
I'm guessing (let me know if I'm wrong) that what you are missing, and what leads to the above question, is that "object reference" arguments to mirror methods (and return values) are themselves mirrors rather than direct object references. So, assume we have executed something like:
let mr = Mirror.on(someObject); //someObject is a regular object reference
then, assuming that object mirrors have a "get" method for accessing a property value, an expression like:
let m2 = mr.get('propName');
//m2 is a mirror on the value of someObject.propName
let m3 = m2.get('anotherName');
//m3 is a mirror on someObject.propName.anotherName
If instead we had initialized mr as:
let mr = RemoteJSMirror.on(remoteObjectURL);
or
let mr = JSONSerializationMirror.on(jsonString, rootID);
the above code sequence for setting m2 and m3 would still be valid. The only difference is that the values of m2 and m3 would be either RemoteObjectMirror or JSONSerializedObjectMirror instances instead of NativeObjectMirror instances (all just made-up names).
The only issue that I know of relates to the fact that access to remote object state probably requires the use of async APIs and callbacks, while all of the Mirror APIs I have ever seen are most naturally expressed in a synchronous style. I'm not sure what the best resolution of that issue will be. It certainly would be possible to make an async mirrors API that also works with local objects. Anybody writing code that needed to work with any kind of mirror would want to use that async API. However, that would be awkward and expensive for cases where you know that synchronous access is possible, such as probably most Proxy uses of reflection. Perhaps for those cases a simpler synchronous mirrors API is needed. Unfortunately, that would leave us with two variants of the mirrors API.
BTW, it occurs to me that there probably is a similar issue if anyone tries to use Proxy to actually create a remote object access facade. Proxy handler calls are synchronous, but what if a trap handler (say a get) needs to make an async call to obtain the state needed to produce the value returned by the trap...
2011/11/29 Allen Wirfs-Brock <allen at wirfs-brock.com>
Because of the invariant validation mechanism, the target needs to be a direct reference to a native object rather than a mirror.
Indeed.
If I understand the general usage model, then in the general case it may not be as lean as Proxy(target,handler) regardless of whether Mirrors are involved. If the handler needs to retain any per proxy instance state then a new handler is going to have to be instantiated for each Proxy instance in order to capture that state. Proxy(target, new MyHandler(args))
Indeed, although it remains possible using the current API to write stateless handlers that maintain their state in a WeakMap keyed by their target (or the handlers themselves).
There will indeed be cases where handlers want to maintain other state directly. Still, there will also be cases where the handler does not need to retain extra state, and it's a good thing that in those cases, no additional object allocation is necessary.
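A sketch of that stateless-handler pattern (all names here are made up):

var state = new WeakMap(); // per-proxy state, keyed by target
var handler = {
  get: function(target, name, receiver) {
    state.get(target).gets++; // the shared handler looks its state up here
    return Reflect.get(target, name, receiver);
  }
};
function makeCountingProxy(target) {
  state.set(target, { gets: 0 });
  return Proxy(target, handler); // one handler object shared by all proxies
}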
2011/11/29 Allen Wirfs-Brock <allen at wirfs-brock.com>
The only issue that I know of relates to the fact that access to remote object state probably requires the use of async APIs and callbacks, while all of the Mirror APIs I have ever seen are most naturally expressed in a synchronous style. I'm not sure what the best resolution of that issue will be. It certainly would be possible to make an async mirrors API that also works with local objects. Anybody writing code that needed to work with any kind of mirror would want to use that async API. However, that would be awkward and expensive for cases where you know that synchronous access is possible, such as probably most Proxy uses of reflection. Perhaps for those cases a simpler synchronous mirrors API is needed. Unfortunately, that would leave us with two variants of the mirrors API.
I agree with your analysis. Having both sync and async variants of the mirrors API is unfortunate, but I think it's absolutely necessary if you want mirrors to work with remote objects. I think most on this list will agree that hiding asynchrony is to be avoided. But perhaps the sync and async mirror APIs could be developed such that there is a trivial (perhaps mechanical) conversion between the two. (If we had promises, the async API could be the sync API with some (return) types lifted from T to Promise<T>.)
My position at this time is that Mirrors are promising, but the details still need to be worked out (especially re. remote objects) and it's a pretty big design exercise. Therefore, this may be a task better left to userland libraries in the short term.
BTW, it occurs to me that there probably is a similar issue if anyone tries to use Proxy to actually create a remote object access facade. Proxy handler calls are synchronous, but what if a trap handler (say a get) needs to make an async call to obtain the state needed to produce the value returned by the trap...
Yes, two relevant points with respect to this:
- Mark and I tried to express something like this: an "eventual" reference to a local object, enforcing asynchronous access to its target. The example is on the old harmony:proxies page: <harmony:proxies#an_eventual_reference_proxy>
This code depends on promises (the "get" trap returns a promise for the result) and basically enables only synchronous access to properties known to be non-configurable or non-writable. It's a good example of a piece of code that exploits the new ES5 "invariants" to be able to safely cache immutable properties.
Promises are really the abstraction you want for bridging synchronous APIs with asynchronous operation.
- When layering remote objects over a restful API, ideally you want obj[name] to perform an HTTP GET and obj.name(a,b,c) to perform an HTTP POST. Proxies cannot distinguish property access from method invocation (which is just get+apply), so they currently can't perform the above mapping.
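A sketch of why (hypothetical handler, just to show the trap cannot tell the two apart):

var handler = {
  get: function(target, name, receiver) {
    // this trap fires identically for both uses below, so there is no
    // point at which we could decide between HTTP GET and HTTP POST
    console.log('get trap: ' + name);
    return function() { return 'result'; };
  }
};
var proxy = Proxy({}, handler);
proxy.name;       // property access: the get trap, nothing more
proxy.name(1, 2); // method call: the same get trap, then an ordinary call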
On 29/11/2011 12:00, Tom Van Cutsem wrote:
2011/11/29 Allen Wirfs-Brock <allen at wirfs-brock.com>
Thanks for your explanations in the other message.
The only issue that I know of relates to the fact that access to remote object state probably requires the use of async APIs and callbacks, while all of the Mirror APIs I have ever seen are most naturally expressed in a synchronous style. I'm not sure what the best resolution of that issue will be. It certainly would be possible to make an async mirrors API that also works with local objects. Anybody writing code that needed to work with any kind of mirror would want to use that async API. However, that would be awkward and expensive for cases where you know that synchronous access is possible, such as probably most Proxy uses of reflection.
I arrived at the same sort of reflection when I thought about how node.js models I/O. The initial intention (which is still strongly there, but a bit changed) is that when you manipulate something in memory, you do something synchronous ("var a = f();"), but when doing I/O, you do something asynchronous ("f(function(a){});"). This is a very elegant model, but what about when the database is in memory?
I think that we (node, the Mirror API, a lot of people in the JavaScript community) are trying to achieve two goals that do not really go well together:
- uniform API
- support for both synchronous and asynchronous
It seems to me that the second goal imposes programming styles that impact APIs (returning a function, or passing the result of a call as an argument to a given callback).
I can't think of any language that is able to do both.
Perhaps for those cases a simpler synchronous mirrors API is needed. Unfortunately, that would leave us with two variants of the mirrors API.
I agree with your analysis. Having both sync and async variants of the mirrors API is unfortunate, but I think it's absolutely necessary if you want mirrors to work with remote objects. I think most on this list will agree that hiding asynchrony is to be avoided.
I agree. Since it does not seem possible to do both, and since we don't want to impose synchrony, having two APIs seems to be the way to go, unfortunately (unless my analysis is wrong or someone comes up with a genius idea to support a uniform API for sync and async).
But perhaps the sync and async mirror APIs could be developed such that there is a trivial (perhaps mechanical) conversion between the two. (If we had promises, the async API could be the sync API with some (return) types lifted from T to Promise<T>.)
My position at this time is that Mirrors are promising, but the details still need to be worked out (especially re. remote objects) and it's a pretty big design exercise. Therefore, this may be a task better left to userland libraries in the short term.
BTW, it occurs to me that there probably is a similar issue if anyone tries to use Proxy to actually create a remote object access facade. Proxy handler calls are synchronous, but what if a trap handler (say a get) needs to make an async call to obtain the state needed to produce the value returned by the trap...
Yes, two relevant points with respect to this:
- Mark and I tried to express something like this: an "eventual" reference to a local object, enforcing asynchronous access to its target. The example is on the old harmony:proxies page: harmony:proxies#an_eventual_reference_proxy
This code depends on promises (the "get" trap returns a promise for the result) and basically enables only synchronous access to properties known to be non-configurable or non-writable. It's a good example of a piece of code that exploits the new ES5 "invariants" to be able to safely cache immutable properties.
Promises are really the abstraction you want for bridging synchronous APIs with asynchronous operation.
Promises sound promising, but I'm not sure they will entirely fill the gap. If we're trying to write a function f which works both with local and remote objects, we end up with this:
function f(o){
  var a = o.bla;
  // In the synchronous case, a is the value.
  // If o is a localFarReferenceMaker, a is a promise.
  // In order to continue for both cases, one would have to do something like:
  if(isPromise(a)){
    a.when(function(res){
      // do something with the value in res
    });
  }
  else{
    // do the exact same thing with the value in a
  }
}
Obviously, an idea is to factor out the common behavior:
function f(o){
  var a = o.bla;
  // In the synchronous case, a is the value.
  // If o is a localFarReferenceMaker, a is a promise.
  function g(res){
    // do something with the value in res
  }
  if(isPromise(a)){
    a.when(g);
  }
  else{
    g(a);
  }
}
And we get to some canonical case where I don't know how to unify "a.when(g)" and "g(a)". Maybe something with generators? Maybe a new form of function?
Maybe the goal of trying to have a function which works with both local and remote values is vain (in general, or in JavaScript in particular)?
Hence the "bang" ("!") syntax, I guess. I think proxies can distinguish, but at some cost: "obj.name" returns a function proxy with a promise as target (no big deal :-p ). This sends neither GET nor POST but waits for the end of the "event loop turn". If the proxy function has been called send a POST (as many times as the function proxy has been called with respective arguments each time). If the event loop turn is over and the function has not been called, then perform a GET and the (function proxy) promise stops being callable. The point is that at the end of the "event loop turn", you know what the intention of the author is but you haven't started fulfilling promises, so it may be worth waiting to know whether to do a GET or a POST.
It comes with the cost of waiting for the end of the "event loop turn", which may never come... but promises won't be fulfilled anyway if it doesn't come, so whether you did a GET, a POST or nothing makes a difference from a network perspective, but not from a JavaScript program perspective. And since the idea is fresh, I may be ignoring some program-correctness cost.
.. The initial intention (which is still strongly there but a bit changed) is that when you manipulate something in memory, you do something synchronous ("var a = f();"), but when doing I/O, you do something asynchronous ("f(function(a){});"). This is a very elegant model, but what about when the database is in memory? .. I think that we (node, the Mirror API, a lot of people in the JavaScript community) are trying to achieve two goals that do not really go well together:
- uniform API
- support for both synchronous and asynchronous
It seems to me that the second goal imposes programming styles that impact APIs (returning a function, or passing the result of a call as an argument to a given callback).
I can't think of any language that is able to do both.
Just wanted to answer this comment: it is possible to smooth over the differences between these two styles, by introducing generality that is only needed in one of the two. Several languages support syntax that makes the general case easier to write (links below), and I think that ES should have such support, too.
Starting with the synchronous code pattern
var a = f(); ..code..
assuming 'a' is for local use in '..code..', this can be rewritten to
( function(a) { ..code.. } )( f() );
Now, to add a little flexibility, imagine a function 'then', to pass 'f's result to 'a' in the simplest case ("call f, then pass its result to a"), or to add additional steps if necessary (eg, error handling)
then( function(a) { ..code.. } )( f() );
The unnamed function is now a callback parameter, and it is customary to pass such callbacks as the last argument, so let's switch to
then( f() )( function(a) { ..code.. } );
and if we want to cater for different kinds of 'then' (error handling, asynch code, ..), we might want to turn 'then' into a method of 'f's result
f().then( function(a) { ..code.. } );
which is close to the second style you mentioned, but with an important refinement: instead of every operation taking a callback, every operation returns an object that conforms to a simple interface (having a method 'then' which takes a callback). Promises just happen to be 'then'-ables that work asynchronously.
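For illustration, a minimal sketch of such a synchronous 'then'-able (not from any spec, just the interface described above):

function syncThenable(value) {
  return {
    then: function (callback) {
      // call back immediately and wrap the result so calls keep chaining
      return syncThenable(callback(value));
    }
  };
}

// The consumer code has the same shape whether f() returns this or a promise:
// syncThenable(f()).then( function(a) { ..code.. } );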
So, if we had syntactic sugar for transforming, say,
let a <- f(); ..code.. // (1)
into (modulo 'this'/'arguments', as usual)
f().then( function(a) { ..code.. } );
Then we could write code (1) that looks similar (but not equal!) to normal synchronous imperative code, no matter whether the 'then' method in question directly passes the result to the callback, or whether it registers the callback with a result that happens to be a promise.
Having this kind of syntactic sugar for working with 'then'-ables is worth it (you'll also want to support the special case where 'f's result value is ignored), because the 'then'-able pattern works for many more cases than just synch vs async, and because the bulk of the work goes into the libraries (which are easier to evolve and extend), not into the language (which just makes the libraries possible/syntactically usable).
(for instance, since you have full access to the callback and the intermediate results, you can build in error handling, or even backtracking search/parsing, and many of the individual solutions compose to form new 'then'-ables, eg. asynch+error handling).
Not surprisingly, none of this is new, so ES could borrow proven ideas from other languages!-) Monad comprehensions and do-notation in Haskell [1], computation expressions and asynch workflows in F# [2], the proposed Erlando parse-transformers for Erlang [3], ..
[1] www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-470003.14 [2] msdn.microsoft.com/en-us/library/dd233182.aspx [3] www.rabbitmq.com/blog/2011/05/17/can-you-hear-the-drums-erlando
Don Syme's blog entry shows how the F# feature is remarkably similar in style to ES.next quasis, just over statement blocks instead of strings (one has a bit of syntax and some code that helps interpret that syntax). An alternative view is as proxies for blocks, rather than objects (one defines traps for each piece of block syntax).
On the application to remote/local object: it is often considered bad style to make remote work look like local work (completely different considerations and characteristics apply regarding latencies, communication and remote computation failures, etc.), and we can consider it dangerous to make the complex case (asynch/remote) look exactly like the simple case (synch/local).
We can, however, make the simple case look like the complex case, eg, we can make local work look like remote work (local just happens to be a fast and reliable remote worker that never encounters many of the failure modes we need to handle in general), or we can make synch code look like asynch code (that just happens to be executed without delays).
We can then think about making the complex case easier to write, so much so that using the complex code pattern for simple use cases is no longer considered ridiculous - the code for the complex case is almost as easy to write and read as the code for the simple case.
As a welcome extra benefit, as the complex code becomes easier to write, forgetting proper error handling becomes less likely, and error handling patterns can be factored out instead of being repeated and oversimplified everywhere.
Claus clausreinke.github.com
2011/11/29 David Bruant <bruant.d at gmail.com>
Promises sound promising, but I'm not sure they will entirely fill the gap. If we're trying to write a function f which works both with local and remote objects, we end up with this:
function f(o){
  var a = o.bla;
  // In the synchronous case, a is the value.
  // If o is a localFarReferenceMaker, a is a promise.
  // In order to continue for both cases, one would have to do something like:
  if(isPromise(a)){
    a.when(function(res){
      // do something with the value in res
    });
  } else {
    // do the exact same thing with the value in a
  }
}
No, it need not be this complicated. Using the strawman Q API (strawman:concurrency):
function f(o) {
  var a = o.bla;
  Q(a).when(function(res) {
    // do something with the value in res
  });
}
Q(x) is a no-op if x is already a promise. If x is a local value, Q(x) turns it into a local promise for that value. The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
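In other words, the coercion rule could be sketched like this (isPromise and fulfilledPromise are assumed helpers, not part of the strawman text):

function Q(x) {
  // no-op on promises; otherwise wrap the local value
  // in an already-fulfilled promise for that value
  return isPromise(x) ? x : fulfilledPromise(x);
}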
<snip>
And we get to some canonical case where I don't know how to unify "a.when(g)" and "g(a)". Maybe something with generators? Maybe a new form of function?
Generators can help in avoiding inversion of control. That's dealt with by the Q.async part of the concurrency strawman < strawman:concurrency#q.async>.
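Roughly, the pattern would look like this (a sketch only: generator syntax was still in flux at the time, and the Q.async behavior shown, driving a generator by resuming it with each resolved value, follows the strawman's description):

var f = Q.async(function* (o) {
  // 'yield' suspends the generator; Q.async resumes it with the resolved
  // value, so the code reads top-to-bottom instead of nesting callbacks.
  var a = yield Q(o).get("bla");
  // do something with the value in a
});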
Maybe the goal of trying to have a function which works with both local and remote values is futile (in general, or in JavaScript in particular)?
Not if one is careful to design the remote API such that it also operates sensibly on local values.
- When layering remote objects over a RESTful API, ideally you want obj[name] to perform an HTTP GET and obj.name(a,b,c) to perform an HTTP POST. Proxies cannot distinguish property access from method invocation (which is just get+apply), so they currently can't perform the above mapping.
Hence the "bang" ("!") syntax, I guess. I think proxies can distinguish, but at some cost: "obj.name" returns a function proxy with a promise as target (no big deal :-p ). This sends neither GET nor POST but waits for the end of the "event loop turn". If the proxy function has been called send a POST (as many times as the function proxy has been called with respective arguments each time). If the event loop turn is over and the function has not been called, then perform a GET and the (function proxy) promise stops being callable. The point is that at the end of the "event loop turn", you know what the intention of the author is but you haven't started fulfilling promises, so it may be worth waiting to know whether to do a GET or a POST.
It comes with the cost of waiting for the end of the "event loop turn", which may never come... but promises won't be fulfilled anyway if it doesn't come, so the choice of a GET, a POST or nothing makes a difference from a network perspective, but not from a JavaScript program perspective. And since the idea is fresh, I may be ignoring some program-correctness cost.
This replaces the need for a primitive to distinguish "get" from "invoke" with the need for a primitive to execute code at the end of the current event loop turn. Also, they are not equivalent: |var f = o.x; f();| would perform an HTTP GET "x" under the original semantics, and an HTTP POST "x" given your proposal.
Anyhow, the discussion about proxies being able to distinguish GET from POST is moot in light of the concurrency strawman, since it proposes not to overload "." for remote access, instead introducing distinct Q.get/Q.post operations (or "!" syntactic sugar).
Le 29/11/2011 18:40, Tom Van Cutsem a écrit :
2011/11/29 David Bruant <bruant.d at gmail.com <mailto:bruant.d at gmail.com>>
Promises sound promising, but I'm not sure they will entirely fill the gap. If we're trying to write a function f which works both with local and remote objects, we end up with this:
-----
function f(o){
  var a = o.bla;
  // In the synchronous case, a is the value.
  // If o is a localFarReferenceMaker, a is a promise.
  // In order to continue for both cases, one would have to do something like:
  if(isPromise(a)){
    a.when(function(res){
      // do something with the value in res
    });
  } else {
    // do the exact same thing with the value in a
  }
}
-----
No, it need not be this complicated. Using the strawman Q API (strawman:concurrency):
function f(o) {
  var a = o.bla;
  Q(a).when(function(res) {
    // do something with the value in res
  });
}
Q(x) is a no-op if x is already a promise. If x is a local value, Q(x) turns it into a local promise for that value. The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
Oh ok, interesting. ... but does that mean that as soon as we bring concurrency (and asynchronicity) to ECMAScript, every API manipulating objects (or potentially any remote value)?
(...) Also, they are not equivalent: |var f = o.x; f();| would perform an HTTP GET "x" under the original semantics, and an HTTP POST "x" given your proposal. ... you're right :
On Tue, Nov 29, 2011 at 10:01 AM, David Bruant <bruant.d at gmail.com> wrote:
Le 29/11/2011 18:40, Tom Van Cutsem a écrit :
[...]
The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
Oh ok, interesting. ... but does that mean that as soon as we bring concurrency (and asynchronicity) to ECMAScript, every API manipulating objects (or potentially any remote value)?
Hi David, could you complete your question? Thanks.
Le 29/11/2011 19:05, Mark S. Miller a écrit :
On Tue, Nov 29, 2011 at 10:01 AM, David Bruant <bruant.d at gmail.com <mailto:bruant.d at gmail.com>> wrote:
Le 29/11/2011 18:40, Tom Van Cutsem a écrit :
[...]
The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
Oh ok, interesting. ... but does that mean that as soon as we bring concurrency (and asynchronicity) to ECMAScript, every API manipulating objects (or potentially any remote value)
should be designed in the async style (additional callback argument instead of return value)
?
Hi David, could you complete your question? Thanks.
sorry.
I think that the answer to my question is to keep designing APIs as they have been, but to return a promise in the asynchronous case; the API client will then use the pattern Tom showed ('Q(a).when(function(val){})'). The Reflection API could do that (that's actually what Tom suggested at some point), and a proxy reflecting a remote object could also return promises.
Promises and the unifying Q(a).when seem to be what saves us from designing two APIs. Looking forward to seeing this in ECMAScript.
Very much like what Tom said about Mirror.on(obj).has, maybe instantiating a promise for a local value could be avoided in the local case. What about 'Q.when(a, function(val){});' or 'When(a, function(val){})', in which a is either a promise or a local value, and this acts like we'd expect 'Q(a).when(function(val){})' to?
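A sketch of that functional form (isPromise and enqueueTurn are assumed helpers; calling f in a later turn keeps the local case as asynchronous-looking as the promise case):

function when(a, f) {
  if (isPromise(a)) {
    a.when(f); // promise: use it directly, no wrapper allocated
  } else {
    enqueueTurn(function () { f(a); }); // local value: call f next turn, no promise allocated
  }
}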
On Tue, Nov 29, 2011 at 11:03 AM, David Bruant <bruant.d at gmail.com> wrote:
Le 29/11/2011 19:05, Mark S. Miller a écrit :
On Tue, Nov 29, 2011 at 10:01 AM, David Bruant <bruant.d at gmail.com> wrote:
Le 29/11/2011 18:40, Tom Van Cutsem a écrit :
[...]
The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
Oh ok, interesting. ... but does that mean that as soon as we bring concurrency (and asynchronicity) to ECMAScript, every API manipulating objects (or potentially any remote value)
should be designed in the async style (additional callback argument instead of return value)
?
Hi David, could you complete your question? Thanks.
sorry.
I think that the answer to my question is to keep designing APIs as they have been, but to return a promise in the asynchronous case; the API client will then use the pattern Tom showed ('Q(a).when(function(val){})').
Yes. Or 'Q(a).get("foo")' or 'Q(a).send("foo", b, c)' or their respective sugared forms 'a ! foo' or 'a ! foo(b, c)', depending on what you want to do with 'a'. Note that if 'a' designates a remote object, in 'Q(a).when(function(val){...})', 'val' will still be bound to a far reference, which is still a form of promise whose "." accesses the promise API rather than the API of the remote target object. If you invoke the designated object's API simply with "!", that works whether 'a' is a non-promise, a promise for a local object, or a promise for a remote object. In all cases, the value of the infix "!" expression is reliably a promise.
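Spelled out with the operations Mark lists (the trailing comments show the proposed "!" sugar):

var p1 = Q(a).get("foo");         // eventual property read:      a ! foo
var p2 = Q(a).send("foo", b, c);  // eventual method invocation:  a ! foo(b, c)
// Either way the result is reliably a promise, whether 'a' was a local
// value, a promise for a local object, or a far reference to a remote object.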
The Reflection API could do that (that's actually what Tom suggested at some point) and a proxy reflecting a remote object could also return promises.
I don't understand.
Promises and the unifying Q(a).when seem to be what saves us from designing two APIs. Looking forward to seeing this in ECMAScript.
Me too! Except for the infix "!" sugar, all this can be accomplished today by using a Q library, such as Kris Kowal's.
Very much like what Tom said about Mirror.on(obj).has, maybe instantiating a promise for a local value could be avoided in the local case. What about 'Q.when(a, function(val){});' or 'When(a, function(val){})', in which a is either a promise or a local value, and this acts like we'd expect 'Q(a).when(function(val){})' to?
Are you just concerned with avoiding an extra allocation, or am I missing some other issue here?
Le 29/11/2011 21:24, Mark S. Miller a écrit :
On Tue, Nov 29, 2011 at 11:03 AM, David Bruant <bruant.d at gmail.com <mailto:bruant.d at gmail.com>> wrote:
Le 29/11/2011 19:05, Mark S. Miller a écrit :
On Tue, Nov 29, 2011 at 10:01 AM, David Bruant <bruant.d at gmail.com <mailto:bruant.d at gmail.com>> wrote: Le 29/11/2011 18:40, Tom Van Cutsem a écrit : [...]
The general rule here is: if your code needs to handle both local and remote values, deal with the remote/async case only. The local case should be a subset of the remote case.
Oh ok, interesting. ... but does that mean that as soon as we bring concurrency (and asynchronicity) to ECMAScript, every API manipulating objects (or potentially any remote value)
should be designed in the async style (additional callback argument instead of return value)
? Hi David, could you complete your question? Thanks.
sorry. I think that the answer to my question is to keep designing APIs as they have been, but to return a promise in the asynchronous case; the API client will then use the pattern Tom showed ('Q(a).when(function(val){})').
Yes. Or 'Q(a).get("foo")' or 'Q(a).send("foo", b, c)' or their respective sugared forms 'a ! foo' or 'a ! foo(b, c)', depending on what you want to do with 'a'. Note that if 'a' designates a remote object, in 'Q(a).when(function(val){...})', 'val' will still be bound to a far reference, which is still a form of promise whose "." accesses the promise API rather than the API of the remote target object. If you invoke the designated object's API simply with "!", that works whether 'a' is a non-promise, a promise for a local object, or a promise for a remote object. In all cases, the value of the infix "!" expression is reliably a promise.
The Reflection API could do that (that's actually what Tom suggested at some point) and a proxy reflecting a remote object could also return promises.
I don't understand.
In order to support reflection of both local and remote objects, the Reflection API could return promises: "Reflection.has(o, 'a')" would return a boolean if o is local, or a promise for a boolean if o is remote.
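For instance (an illustration of the suggestion; isRemote is a hypothetical test, not a proposed API):

var h = Reflection.has(o, 'a');
// local o:  h is a boolean, usable immediately
// remote o: h is a promise for a boolean
Q(h).when(function (b) {
  // works uniformly in both cases
});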
For the second part, I was saying that harmony:proxies#an_eventual_reference_proxy could be reimplemented to return promises instead of using setTimeout(0). But I'm a bit confused by this example, because some things are async (defineProperty, delete, etc.) while others are synchronous (getOwnPropertyNames, has, etc.). Shouldn't everything return promises?
Very much like what Tom said about Mirror.on(obj).has, maybe instantiating a promise for a local value could be avoided in the local case. What about 'Q.when(a, function(val){});' or 'When(a, function(val){})', in which a is either a promise or a local value, and this acts like we'd expect 'Q(a).when(function(val){})' to?
Are you just concerned with avoiding an extra allocation, or am I missing some other issue here?
Avoiding an extra allocation is the only worry for this last point. Very much like Tom worried about mirror allocation at esdiscuss/2011-November/018734
Digression about memory in JS implementations: I've been following the MemShrink effort in Firefox. Data structures have been shrunk and fragmentation has been reduced, making better use of memory, but I have seen much less work toward reducing the number of allocations. This is certainly because determining whether an allocation is required is usually complicated. I don't know what the exact status of implementations is, but what happens in current JS engines when the expression '[].forEach.call' is met? Is the allocation of an array actually performed? Hopefully not, but I would not be surprised if it was.
Back to promises, it seems that Q(p).when(f) may become a common programming pattern to express "if p is a local value, call f at the next turn with p as argument; if p is a promise, call f with its resolution when resolved". If it becomes so, it means that Q(p) will generate a promise to throw away in the local-value case. As usual in JavaScript, static analysis can't eliminate the allocation, because Q(p) could return anything (since Q could be overridden or come from who-knows-where). On the other hand, with a functional API like 'when(p, f)', we avoid the allocation by design and are able to express the exact same thing.
Taken from a different perspective, if we start designing APIs which return either a local value or a promise for a value, maybe the promise API should work with both (instead of being forced to turn everything into a promise before using the API as it is now). p.when is the only part of the API that would be affected, I think.
Looking through Promise methods (strawman:concurrency#promise_methods), I realize that these (besides p.when and p.end) could just be replaced by the Reflection API being adapted to work with promises.
On Nov 30, 2011, at 8:15 AM, David Bruant wrote:
Avoiding an extra allocation is the only worry for this last point. Very much like Tom worried about mirror allocation at esdiscuss/2011-November/018734
Digression about memory in JS implementations: I've been following the MemShrink effort in Firefox. Data structures have been shrunk and fragmentation has been reduced, making better use of memory, but I have seen much less work toward reducing the number of allocations. This is certainly because determining whether an allocation is required is usually complicated.
A sign that your garbage collector isn't good enough: People are writing style guides that tell developers that they should avoid allocating objects.
Objects serve as one of our primary abstraction mechanisms (the other is functions, and function closures have similar allocation issues). Anytime you tell programmers not to allocate, you take away their ability to use abstraction to deal with complexity.
A good GC should (and can) make allocation and reclamation of highly ephemeral objects so cheap that developers simply shouldn't worry about it. This is not to say that there are no situations where excessive allocations may cause performance issues, but such situations should be outliers that only need to be dealt with when they are actually identified as being a bottleneck. To over-simplify: a good bump allocator makes object creation nearly as efficient as assigning to local variables, and a good multi-generation ephemeral collector has a GC cost that is proportional to the number of retained objects, not the number of allocated objects. Objects that are created and discarded within the span of a single ephemeral collection cycle should have a very low cost. This has all been demonstrated in high-perf memory managers for Smalltalk and Lisp.
I don't know what the exact status of implementations is, but what happens in current JS engines when the expression '[].forEach.call' is met? Is the allocation of an array actually performed? Hopefully not, but I would not be surprised if it was.
I suspect they don't optimize this although arguably they should. However, if you buy my argument then it really doesn't make much difference. Implementations should put the effort into building better GCs.
... Looking through Promise methods (strawman:concurrency#promise_methods), I realize that these (besides p.when and p.end) could just be replaced by the Reflection API being adapted to work with promises.
Stated slightly differently, a promise can be thought of as a specific kind of object mirror.
Le 29/11/2011 23:07, Allen Wirfs-Brock a écrit :
On Nov 30, 2011, at 8:15 AM, David Bruant wrote:
Avoiding an extra allocation is the only worry for this last point. Very much like Tom worried about mirror allocation at esdiscuss/2011-November/018734
Digression about memory in JS implementations: I've been following the MemShrink effort in Firefox. Data structures have been shrunk and fragmentation has been reduced, making better use of memory, but I have seen much less work toward reducing the number of allocations. This is certainly because determining whether an allocation is required is usually complicated.
A sign that your garbage collector isn't good enough: People are writing style guides that tell developers that they should avoid allocating objects.
Objects serve as one of our primary abstraction mechanisms (the other is functions, and function closures have similar allocation issues). Anytime you tell programmers not to allocate, you take away their ability to use abstraction to deal with complexity.
I agree with you, with some restrictions.
- For a native API, the cost of a function closure is zero (since the function does not need a scope to capture variables)
- Objects are an interesting abstraction as long as they have state. For the specific example of the Reflection API, the stateless API that Tom started seems to prove that a reflection API does not need state. In that case, why bother allocating objects? That's the same reason why math functions are properties of the Math object and not "math objects". However, having an object-oriented DOM makes a lot of sense to me since objects have state (children, node type, etc.). I'm not sure we could easily and conveniently turn the DOM into a set of stateless functions.
A good GC should (and can) make allocation and reclamation of highly ephemeral objects so cheap that developers simply shouldn't worry about it.
I agree on the reclamation part, but I don't understand what a GC can do about allocation of ephemeral (or not) objects.
This is not to say that there are no situations where excessive allocations may cause performance issues, but such situations should be outliers that only need to be dealt with when they are actually identified as being a bottleneck. To over-simplify: a good bump allocator makes object creation nearly as efficient as assigning to local variables, and a good multi-generation ephemeral collector has a GC cost that is proportional to the number of retained objects, not the number of allocated objects. Objects that are created and discarded within the span of a single ephemeral collection cycle should have a very low cost. This has all been demonstrated in high-perf memory managers for Smalltalk and Lisp.
If a garbage collection is triggered when a generation is full, then your GC cost remains proportional to your number of allocations.
If a garbage collection is triggered at constant intervals, then it probably runs too often for nothing (or for too little).
I don't know what the exact status of implementations is, but what happens in current JS engines when the expression '[].forEach.call' is met? Is the allocation of an array actually performed? Hopefully not, but I would not be surprised if it was.
I suspect they don't optimize this although arguably they should. However, if you buy my argument then it really doesn't make much difference. Implementations should put the effort into building better GCs.
For this particular case where the object is not ephemeral but completely useless, a GC will still cost you something (even if very small), while static analysis can tell you not to allocate at all. I'm not talking about a smaller cost of allocation+discard, but about nullifying it with a constant (and small) amount of static analysis time.
var a = [1];
function f(e, i){ a[i] = Math.random(); }

while(true){
  [].forEach.call(a, f);
}
Without static analysis, the first array is allocated and this will eventually trigger the GC. With static analysis, the GC has no reason to run: the first array does not need to be allocated, since its reference is never used anywhere after the retrieval of forEach (which is looked up directly on Array.prototype if the implementation conforms to ES5.1).
I'll take actual garbage as a metaphor: I am pro recycling (garbage collection), but rather than recycle, I prefer to avoid buying things with excessive packaging in the first place. This way I produce less garbage (less allocation). Maybe we should apply the basics of ecology to memory management? ;-)
I agree with you that abstractions are a good thing, and I won't compromise them where they are necessary. But that should not be an excuse to allocate for no reason, even if it's cheap. And while garbage collection should be improved, if we can find cheap ways to allocate less (at the engine or programmer level), we should apply them.
... Looking through Promise methods (strawman:concurrency#promise_methods), I realize that these (besides p.when and p.end) could just be replaced by the Reflection API being adapted to work with promises.
Stated slightly differently, a promise can be thought of as a specific kind of object mirror.
Interesting :-)
On Nov 30, 2011, at 10:24 AM, David Bruant wrote:
Le 29/11/2011 23:07, Allen Wirfs-Brock a écrit :
... Objects serve as one of our primary abstraction mechanisms (the other is functions, and function closures have similar allocation issues). Anytime you tell programmers not to allocate, you take away their ability to use abstraction to deal with complexity. I agree with you, with some restrictions.
- For a native API, the cost of a function closure is zero (since the function does not need a scope to capture variables)
- Objects are an interesting abstraction as long as they have state. For the specific example of the Reflection API, the stateless API that Tom started seems to prove that a reflection API does not need state. In that case, why bother allocating objects?
The state is explicitly passed as arguments. Most important is the first argument, which identifies the object. The client must keep track of this state and explicitly associate it with each call. Clients have been known to make mistakes and pass the wrong object to such methods. One of the things that an object-based API does is make the association between that state and the functions implicit, by encapsulating the state and the functions together as an object and automatically associating them during method calls. This makes it easy for clients to do things that are hard given the other approach. For example, it allows a client to be written that is capable of transparently dealing with different implementations of a common API. In an earlier message I described the example of an "inspector" client that is able to display information about objects without knowing where or how the object is implemented. A different reason for using objects in a reflection API is so you can easily attenuate authority. For example, for many clients it may be sufficient to provide them with non-mutating mirrors that only allow inspection. They do this by excluding all mutation methods from the mirror objects.
That's the same reason why math functions are properties of the Math object and not "math objects".
Which works fine as long as you only have one kind of number. But if you add multiple numeric data types then you are either going to have to have additional Math objects (ArbitraryPrecisionMath, DecimalFloatMath, etc), have generic functions (a dual of objects), or turn them into methods.
However, having an object-oriented DOM makes a lot of sense to me since objects have state (children, node type, etc.). I'm not sure we could easily and conveniently turn the DOM into a set of stateless functions.
The same way you do it in C or Pascal or assembly languages. You have state (often structs) and functions, and try to make sure you always call the appropriate functions with the right kind of state. That's what objects do for you: they automate the necessary housekeeping.
A good GC should (and can) make allocation and reclamation of highly ephemeral objects so cheap that developers simply shouldn't worry about it. I agree on the reclamation part, but I don't understand what a GC can do about allocation of ephemeral (or not) objects.
A good bump allocator simply has a linear memory area where objects are all allocated simply by "bumping" the pointer to the next available slot. If you need to allocate a three-slot object, you just increment the allocation pointer by (3+h)*slotSize, fill in the object slots, and finally compare against an upper bound. This is actually quite similar to how local variables are allocated on the stack. h is the overhead needed to form an "object header" so the slots can be processed as an object. Header size depends on trade-offs in the overall design: 2 is a pretty good value, 1 is possible, 3 or more suggests that there may be room to tighten up the design. For JS, you have to assume that you are on a code path that is hot enough that the implementation has actually been able to assign a "shape" to the object being allocated (in this case, knows that it has 3 slots, etc.). (If you aren't on such a hot path, why do you care?)
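As a toy illustration of the scheme (JavaScript standing in for engine internals; heap, state and ephemeralCollect are made up for the sketch, with h = 2 header slots):

var HEADER_SLOTS = 2; // 'h' in the description above

function bumpAllocate(heap, state, slotCount) {
  var needed = slotCount + HEADER_SLOTS;
  if (state.top + needed > heap.length) {
    ephemeralCollect(heap, state); // hypothetical: evacuate survivors, reset state.top
  }
  var base = state.top;
  state.top += needed; // "bump" the pointer: the whole allocation is one add
  heap[base] = slotCount; // header word: size
  heap[base + 1] = 0; // header word: shape/class pointer (stubbed)
  return base + HEADER_SLOTS; // index of the first payload slot
}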
This is not to say that there are no situations where excessive allocations may cause performance issues, but such situations should be outliers that only need to be dealt with when they are actually identified as being a bottleneck. To over-simplify: a good bump allocator makes object creation nearly as efficient as assigning to local variables, and a good multi-generation ephemeral collector has a GC cost that is proportional to the number of retained objects, not the number of allocated objects. Objects that are created and discarded within the span of a single ephemeral collection cycle should have a very low cost. This has all been demonstrated in high-perf memory managers for Smalltalk and Lisp. If a garbage collection is triggered when a generation is full, then your GC cost remains proportional to your number of allocations.
Typically, an ephemeral GC would be triggered when the bump pointer exceeds the limit (perhaps after doing so, and switching to a new allocation zone, several times).
However, GC cost isn't usually proportional to the number of allocations. Programs typically reach a steady state where the number of ephemeral objects that survive stabilizes at some level (actually, most programs shift over time between several steady-state phases). When a program is in such a steady state, once you exceed a base threshold, changing the frequency of GC doesn't really change how many ephemeral objects will survive a collection. The execution time of a copying collector is proportional to the number of surviving objects (garbage objects are just left behind, untouched). So the size of the allocation zone determines how frequently a GC is done, but the actual cost of a GC is some fixed overhead to enter/leave the GC plus the cost of scavenging the surviving objects. Bigger allocation zones mean less total GC overhead, but individual GCs cost about the same, no matter how frequently they are performed or how many objects are allocated between them.
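To make that cost model concrete with made-up numbers: with a 1 MB allocation zone and a steady state where roughly 5% of ephemeral objects survive, each scavenge copies about 50 KB, whether the zone was filled by a thousand large allocations or a million small ones. Halving the zone doubles the collection frequency but roughly halves the work per collection, so total GC cost still tracks survivors plus the fixed per-collection overhead, not allocations.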
If a garbage collection is triggered at constant intervals, then it probably runs too often for nothing (or for too little).
I don't know what the exact status of implementations is, but what happens in current JS engines when the expression '[].forEach.call' is met? Is the allocation of an array actually performed? Hopefully not, but I would not be surprised if it was.
I suspect they don't optimize this although arguably they should. However, if you buy my argument then it really doesn't make much difference. Implementations should put the effort into building better GCs. For this particular case where the object is not ephemeral but completely useless, a GC will still cost you something (even if very small), while static analysis can tell you not to allocate at all. I'm not talking about a smaller cost of allocation+discard, but about nullifying it with a constant (and small) amount of static analysis time.
var a = [1];
function f(e, i){ a[i] = Math.random(); }

while(true){
  [].forEach.call(a, f);
}
Without static analysis, the first array is allocated and this will eventually trigger the GC. With static analysis, the GC has no reason to run: the first array does not need to be allocated, since its reference is never used anywhere after the retrieval of forEach (which is looked up directly on Array.prototype if the implementation conforms to ES5.1).
So, lift the [].forEach out of the loop. Ideally, implementations will do this for you. But I don't see how this advances any useful discussion about the utility of objects. In fact, this loop, with a good GC, should have very fast GCs when they are triggered, because it isn't allocating anything that remains alive beyond a single iteration of the loop. When the allocation zone fills up, the GC starts up, traces roots, finds only a single object that needs to survive that cycle, copies it, and resets.
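That is (the lift Allen suggests: one array allocation in total instead of one per iteration):

var forEach = [].forEach; // allocated once, outside the loop
var a = [1];
function f(e, i){ a[i] = Math.random(); }

while(true){
  forEach.call(a, f);
}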
I'll take actual garbage as a metaphor: I am pro recycling (garbage collection), but rather than recycle, I prefer to avoid buying things with excessive packaging in the first place. This way I produce less garbage (less allocation). Maybe we should apply the basics of ecology to memory management? ;-)
You also have to trade off the runtime cost of doing the data collection and analysis that enables you to eliminate the allocation. It isn't clear that it will always be cheaper than just letting a good GC do its job.
I agree with you that abstractions are a good thing, and I won't compromise them where they are necessary. But that should not be an excuse to allocate for no reason, even if it's cheap. And while garbage collection should be improved, if we can find cheap ways to allocate less (at the engine or programmer level), we should apply them.
The starting point of this discussion is that I contend that there are good reasons to want to abstract over reflection functions using object-based mirrors. The objects serve a useful purpose.
Le 30/11/2011 06:56, Allen Wirfs-Brock a écrit :
On Nov 30, 2011, at 10:24 AM, David Bruant wrote:
Le 29/11/2011 23:07, Allen Wirfs-Brock a écrit :
... Objects serve as one of our primary abstraction mechanisms (the other is functions, and function closures have similar allocation issues). Anytime you tell programmers not to allocate, you take away their ability to use abstraction to deal with complexity. I agree with you, with some restrictions.
- For a native API, the cost of a function closure is zero (since the function does not need a scope to capture variables)
- Objects are an interesting abstraction as long as they have state. For the specific example of the Reflection API, the stateless API that Tom started seems to prove that a reflection API does not need state. In that case, why bother allocating objects? The state is explicitly passed as arguments. Most important is the first argument, which identifies the object. The client must keep track of this state and explicitly associate it with each call.
Indeed. I realized after posting that what I said was stupid.
Clients have been known to make mistakes and pass the wrong object to such methods.
Was this a motivation for the creation of object-oriented languages?
This is an interesting argument. I think a particular case where such an error happens is when you have methods like appendChild(a, b). It may indeed be confusing, while a.appendChild(b) makes it clearer that (hopefully) b is appended to a.
Back to the design of a Reflection API, I think I agree that 'mirror.on(a).hasPrototype(b)' may be clearer than 'Reflect.hasPrototype(a, b)', if that's what you're advocating.
One of the things that an object-based API does is make the association between that state and the functions implicit, by encapsulating the state and the functions together as an object and automatically associating them during method calls. This makes it easy for clients to do things that are hard given the other approach. For example, it allows a client to be written that is capable of transparently dealing with different implementations of a common API. In an earlier message I described the example of an "inspector" client that is able to display information about objects without knowing where or how the object is implemented. A different reason for using objects in a reflection API is so you can easily attenuate authority. For example, for many clients it may be sufficient to provide them with non-mutating mirrors that only allow inspection. They do this by excluding all mutation methods from the mirror objects.
I think what I am missing is an understanding of how this is better than creating your own abstraction and whitelisting the methods you want to use from a functional API. Also, it's just as easy to attenuate a functional Reflection API by excluding the methods you do not want. In either case, the person who wants to attenuate authority over the reflection API has to take action, and it is not clear that the object-oriented API makes this task easier.
A good GC should (and can) make allocation and reclamation of highly ephemeral objects so cheap that developers simply shouldn't worry about it. I agree on the reclamation part, but I don't understand what a GC can do about allocation of ephemeral (or not) objects. A good bump allocator
I thought it was an expression, not a sort of allocator...
simply has a linear memory area where objects are all allocated simply by "bumping" the pointer to the next available slot. If you need to allocate a three-slot object, you just increment the allocation pointer by (3+h)*slotSize, fill in the object slots, and finally compare against an upper bound. This is actually quite similar to how local variables are allocated on the stack. h is the overhead needed to form an "object header" so the slots can be processed as an object. Header size depends on trade-offs in the overall design: 2 is a pretty good value, 1 is possible, 3 or more suggests that there may be room to tighten up the design. For JS, you have to assume that you are on a code path that is hot enough that the implementation has actually been able to assign a "shape" to the object being allocated (in this case, knows that it has 3 slots, etc.). (If you aren't on such a hot path, why do you care?)
This is not to say that there are no situations where excessive allocations may cause performance issues, but such situations should be outliers that only need to be dealt with when they are actually identified as being a bottleneck. To over-simplify: a good bump allocator makes object creation nearly as efficient as assigning to local variables, and a good multi-generation ephemeral collector has a GC cost that is proportional to the number of retained objects, not the number of allocated objects. Objects that are created and discarded within the span of a single ephemeral collection cycle should have a very low cost. This has all been demonstrated in high-perf memory managers for Smalltalk and Lisp. If a garbage collection is triggered when a generation is full, then your GC cost remains proportional to your number of allocations. Typically, an ephemeral GC would be triggered when the bump pointer exceeds the limit (perhaps after doing so, and switching to a new allocation zone, several times).
However, GC cost isn't usually proportional to the number of allocations. Programs typically reach a steady state where the number of ephemeral objects that survive stabilizes at some level (actually, most programs shift over time between several steady-state phases).
Interesting. I would guess that this is a research result. Do you have a link to a paper on such research?
When a program is in such a steady state, once you exceed a base threshold, changing the frequency of GC doesn't really change how many ephemeral objects will survive a collection. The execution time of a copying collector is proportional to the number of surviving objects (garbage objects are just left behind, untouched). So the size of the allocation zone determines how frequently a GC is done, but the actual cost of a GC is some fixed overhead to enter/leave the GC plus the cost of scavenging the surviving objects. Bigger allocation zones mean less total GC overhead, but individual GCs cost about the same, no matter how frequently they are performed or how many objects are allocated between them.
If a garbage collection is triggered at constant intervals, then it probably runs too often for nothing (or for too little).
I don't know what the exact status of implementations is, but what happens in current JS engines when the expression '[].forEach.call' is met? Is the allocation of an array actually performed? Hopefully not, but I would not be surprised if it was. I suspect they don't optimize this although arguably they should. However, if you buy my argument then it really doesn't make much difference. Implementations should put the effort into building better GCs. For this particular case where the object is not ephemeral but completely useless, a GC will still cost you something (even if very small), while static analysis can tell you not to allocate at all. I'm not talking about a smaller cost of allocation+discard, but about nullifying it with a constant (and small) amount of static analysis time.
var a = [1];
function f(e, i){ a[i] = Math.random(); }

while(true){
  [].forEach.call(a, f);
}
Without static analysis, the first array is allocated and this will eventually trigger the GC. With static analysis, the GC has no reason to run: the first array does not need to be allocated, since its reference is never used anywhere after the retrieval of forEach (which is looked up directly on Array.prototype if the implementation conforms to ES5.1). So, lift the [].forEach out of the loop.
I realize that I was wrong. 'forEach' could be a getter on Array.prototype which manipulates the |this| value. In this case, the array needs to be allocated.
Ideally, implementations will do this for you. But I don't see how this advances any useful discussion about the utility of objects.
I think the discussion forked into 2 subjects. The first is the utility of objects, on which I mostly agree with you. The second is whether allocating less (useful objects or not) matters. I think it does, but the more you respond, the less I do.
In fact, this loop, with a good GC, should have very fast GCs when they are triggered, because it isn't allocating anything that remains alive beyond a single iteration of the loop. When the allocation zone fills up, the GC starts up, traces roots, finds only a single object that needs to survive that cycle, copies it, and resets.
I'll take actual garbage as a metaphor: I am pro recycling (garbage collection), but rather than recycle, I prefer to avoid buying things with excessive packaging in the first place. This way I produce less garbage (less allocation). Maybe we should apply the basics of ecology to memory management? ;-) You also have to trade off the runtime cost of doing the data collection and analysis that enables you to eliminate the allocation. It isn't clear that it will always be cheaper than just letting a good GC do its job.
That's the reason I mentioned 'cheap ways to allocate less' afterward. Constant (short) time analysis is likely to be better than a cheap GC whose cost is linear in the program's lifetime.
I agree with you that abstractions are a good thing, and I won't compromise them where they are necessary. But that should not be an excuse to allocate for no reason, even if it's cheap. And while garbage collection should be improved, if we can find cheap ways to allocate less (at the engine or programmer level), we should apply them. The starting point of this discussion is that I contend that there are good reasons to want to abstract over reflection functions using object-based mirrors. The objects serve a useful purpose.
Besides having an API that is less error-prone (for methods like "hasPrototype" or "isPrototypeOf"), I still don't really see other reasons. But I have to admit that, as far as I'm concerned, it could be enough to switch to a mirror-like API, especially after the discussion about progress in memory management.
Thanks for your patience and all your explanations, Allen.