Do Anonymous Exports Solve the Backwards Compatibility Problem?
On Tue, Dec 18, 2012 at 12:56 PM, Kevin Smith <khs4473 at gmail.com> wrote:
At first glance, it seems like anonymous exports might provide a way for pre-ES6 (read: Node) modules and ES6 modules to coexist. After all:
exports = function A() {};
just looks so much like:
module.exports = function A() {};
But is that the case? Does this oddball syntax actually help?
My conclusion is that it does not, unless the loading environment is willing to statically analyze every single module it wishes to load. Moreover, the desired interop is not even possible without performing static analysis.
I feel this is mixing up backcompat dependency matching (which has much larger issues than exports assignment) with a preference to just not have exports assignment. I believe the backcompat issues and parsing things are workable. I have done some code experiments, but we need more info on the module loader API, specifically the runtime API, like System.set/get, before getting a solid answer on it.
exports assignment is not about backcompat specifically, although it helps. Exports assignment is more about keeping the anonymous nature of modules preserved. In ES modules, modules do not name themselves if it is a single module in a file. The name is given by the code that refers to that module.
If a module only exports one thing, and chooses a name, it is effectively naming itself. Example from the browser world: jQuery and Zepto provide similar functionality. If jQuery exports its value as "jQuery", then Zepto would need to export a "jQuery" property if it wanted to be used in places where the jQuery module is used. But if someone just wanted to use Zepto as "Zepto", then Zepto would need to add more export properties. Saying "well, have them both use $" is just as bad. It is simpler to just allow each of them to export a function as the module value, which avoids these weird naming issues.
Assigning a single exports also nudges people to make small modules that do one thing.
It is a design aesthetic that has been established in the JS community, both in node and in AMD modules, in real code used by many people. So allowing export assignment is more about paving an existing cowpath than a specific technical issue with backcompat.
James
Assigning a single exports also nudges people to make small modules that do one thing.
A Node-ism for which the benefit is not yet proven : )
It is a design aesthetic that has been established in the JS community, both in node and in AMD modules, in real code used by many people. So allowing export assignment is more about paving an existing cowpath than a specific technical issue with backcompat.
But that cowpath was only created because of the problems inherent in a dynamic, variable-copy module system, as I argue here ( gist.github.com/4337062). In CommonJS, modules are about variables. In ES6, modules are about bindings. The difference is subtle, but makes all the difference.
The Zepto/jQuery example is not typical, but it's fair game. What's going on there? Well, jQuery is defining an interface, presumably consisting of a function named "jQuery". If Zepto wants to provide the same interface, then it should export bindings with the same name: a function named "jQuery". It's unreasonable to demand that a part of an interface (in this case, the function name), be defined by the user of that interface.
On 19 December 2012 20:18, James Burke <jrburke at gmail.com> wrote:
exports assignment is not about backcompat specifically, although it helps. Exports assignment is more about keeping the anonymous natures of modules preserved. In ES modules, modules do not name themselves if it is a single module in a file. The name is given by the code that refers to that code.
I don't buy this, because the name for the export would just be a local name. You can still bind it to whatever you want on the import side. That's what we have lexical scoping for.
For all levels below, the module has to pick names anyway. I seriously fail to see the point of trying so hard for this one special case.
Assigning a single exports also nudges people to make small modules that do one thing.
It rather nudges people into exporting an object as a module, instead of writing a real module. The only "benefit" of that is that they lose all static checking.
On Wed, Dec 19, 2012 at 11:44 AM, Kevin Smith <khs4473 at gmail.com> wrote:
But that cowpath was only created because of the problems inherent in a dynamic, variable-copy module system, as I argue here (gist.github.com/4337062). In CommonJS, modules are about variables. In ES6, modules are about bindings. The difference is subtle, but makes all the difference.
Those slightly different things are still about naming, and my reply was about naming. Whether it is a "variable" or a "binding", the end result is whether the caller of the code needs to start with a name specified by the module or with a name of the caller's choosing. The same design aesthetics are in play.
This is illustrated by an example from Dave Herman, for a language (sorry I do not recall which), where developers ended up using "_t", or some convention like that, to indicate a single export value that they did not want to name. As I recall, that language had something more like "bindings" than "variables". That would be ugly to see a "_t" convention in JS (IMO).
In summary, I do not believe there is a technical issue with export assignment and backcompat, which is what started this thread. A different argument (and probably different thread) against export assignment needs to be made, with more details on the actual harm it causes.
If the desire to not have export assignment is a style preference, it will be hard to make that argument given the style in use in existing JS, both in node and AMD. Real world use and adoption should have more weight when making the style choice.
James
On 19 December 2012 21:29, James Burke <jrburke at gmail.com> wrote:
This is illustrated by an example from Dave Herman, for a language (sorry I do not recall which), where developers ended up using "_t", or some convention like that, to indicate a single export value that they did not want to name. As I recall, that language had something more like "bindings" than "variables". That would be ugly to see a "_t" convention in JS (IMO).
That language would be ML (or its Ocaml dialect), which happens to have the most advanced module system of all languages by far. The convention is to use "t" as an internal type name, and I've never heard anybody complain about it. ;) It's an acquired taste, I suppose.
It's also worth noting that Dave's comparison is somewhat inaccurate. The convention is used to name the primary abstract type defined by a module, not the only export -- modules with only one export practically never show up in ML programming, which perhaps is a relevant data point in itself.
In summary, I do not believe there is a technical issue with export assignment and backcompat, which is what started this thread.
Why not? I've attempted to show that it's not possible to correctly use this feature for backward compatibility without parsing the code first. Pre-parsing everything at runtime isn't practical. Therefore, using this feature for backward compatibility at runtime isn't practical.
Of course, you could use other ad hoc methods to tell the loader not to apply the export = trick, but then you'd be layering tricks upon tricks.
I think the best migration strategy for Node will be to use require for Node-style modules, and to use import syntax for ES6 modules. Node needs to modify its URL resolution rules for ES6 modules anyway. : )
For the web, the answer is transcompilation from ES6 to ES5, using CommonJS-style function wrappers to emulate modules.
On Dec 19, 2012, at 12:59 PM, Andreas Rossberg <rossberg at google.com> wrote:
It's also worth noting that Dave's comparison is somewhat inaccurate. The convention is used to name the primary abstract type defined by a module, not the only export
That doesn't disagree with what I said. I don't really get the obsession with "just one value" either (it's some pretty dubious sophistry, IMO). I think the key is when you have a module that provides a primary abstraction. That's what I said in the meeting. In ML that can take the form of a type; in JS it can take the form of a constructor, class, and/or function. The concept you end up reaching for is unifying the idea of the module and the abstraction itself. That's what you're doing with .t in ML and that's what's going on in JS with jQuery, node-optimist, etc etc.
Honestly I don't have all that strong feelings about this issue. I think the anonymous export idea is the cleanest approach to support an idiom that fits with JS without ruining it for static exports. I also think JS would be fine without it. I'm almost inclined to just let others fight it out... ;)
On Dec 18, 2012, at 12:56 PM, Kevin Smith <khs4473 at gmail.com> wrote:
At first glance, it seems like anonymous exports might provide a way for pre-ES6 (read: Node) modules and ES6 modules to coexist.
That's not what anonymous exports are for. They're there to support the use case of modules that want to support a popular idiom, a style.
There are two directions that an interop strategy will have to deal with. First, we might want an ES6 module to be loaded by a pre-ES6 module:
// "es5-module.js"
var ES6Module = require("es6-module.js");
We might want to use this when a dependency is upgraded to ES6 modules and we want to leave the dependent alone. Now, since ES6 modules are asynchronous, and require is synchronous, we must load "es6-module.js" before "es5-module.js" is executed. The only way to do that is to statically analyze "es5-module.js", searching for calls to require.
It's much easier than that. It's Node. Module loading is synchronous. Node can simply provide a synchronous module loading form. That's not going to be standardized in ES6, but there's nothing inconsistent about providing it. (Remember: JavaScript does not idiomatically use synchronous I/O, but it does not disallow synchronous I/O.)
If you want to use Node modules in the browser, well, you'll have to get more clever, but I don't think we need to solve this problem in ES6.
What about the other direction? Let's say that we want to load an ES5 module from an ES6 module:
Loader hooks are the answer here.
On Dec 19, 2012, at 11:44 AM, Kevin Smith <khs4473 at gmail.com> wrote:
But that cowpath was only created because of the problems inherent in a dynamic, variable-copy module system, as I argue here (gist.github.com/4337062). In CommonJS, modules are about variables. In ES6, modules are about bindings. The difference is subtle, but makes all the difference.
It took me a while to understand what you were saying, so let me try to explain it for others who may have been confused like me:
<kevin's point>
In CommonJS, since a module is just an object, extracting it with var dereferences the current value but does not alias the object's property. So the local variable in the client module gets a stale copy of the exported binding, rather than being an alias for that export. By contrast, ES6's import provides an alias for a module's binding.
So in Node, if you want to keep a live view of the exports of a module, you should pass the module around as an object and always dereference it. This is a big justification for the "just one value" idiom -- it allows a module to have mutable exports without breaking client code. </kevin's point>
That's an interesting point, and one that I admit I hadn't thought about. Thing is, I'm really not sure it's the primary justification for the "just one value" idiom. It always seemed to me it was about being able to unify the primary abstraction provided by a module with the module itself. For example, when I pointed out to substack that you could easily do:
// foo.js
export function foo() { ... }
// client.js
import foo from "foo"
or
import { foo: myLocalNameForFoo } from "foo"
He complained that it still forces the client to refer to it by a name. IOW, the same complaint as about the .t thing in ML.
The Zepto/jQuery example is not typical, but it's fair game. What's going on there? Well, jQuery is defining an interface, presumably consisting of a function named "jQuery". If Zepto wants to provide the same interface, then it should export bindings with the same name: a function named "jQuery". It's unreasonable to demand that a part of an interface (in this case, the function name), be defined by the user of that interface.
Here I disagree; after all, the whole point of having local renaming (the x: y destructuring syntax) is to allow the client to provide the name that it wants locally. If a module only has a single binding, then it's fine for it to be anonymous, since there's no other possible binding it could be ambiguous with, and then it's just the same phenomenon. It's just that in this case, it has no default name, so the client has to name it.
As I've said, I don't have extremely strong feelings about this issue. In my experience, SML, Ocaml, Haskell, and Racket all do fine without it; the convention is a little wordy but it's just not a big deal. In Rust, we mostly punted -- for a while we had option::t and people didn't like that convention, so now we do option::Option -- but we do have anonymous traits which allow you associate a bunch of operations with a type, which kind of ends up like a module unified with a type.
But at the end of the day, I don't see any way to resolve this debate; I think really what this is about is a question of taste. Neither approach is broken. I think we'll just have to pick an approach and go with it.
On Dec 19, 2012, at 12:05 PM, Andreas Rossberg <rossberg at google.com> wrote:
Assigning a single exports also nudges people to make small modules that do one thing. It rather nudges people into exporting an object as a module, instead of writing a real module. The only "benefit" of that is that they lose all static checking.
I don't think that's fair. It's just an anonymous export. The contents of an export are always dynamic.
All this "nudge" stuff makes me itch, though. I have about ε sympathy for enforcing/encouraging aesthetics and styles. That ain't JavaScript's way, and I sure don't trust TC39 (i.e., me) to take on a paternalistic role.
On Dec 19, 2012, at 3:22 PM, David Herman <dherman at mozilla.com> wrote:
On Dec 19, 2012, at 12:05 PM, Andreas Rossberg <rossberg at google.com> wrote:
Assigning a single exports also nudges people to make small modules that do one thing. It rather nudges people into exporting an object as a module, instead of writing a real module. The only "benefit" of that is that they lose all static checking.
I don't think that's fair. It's just an anonymous export. The contents of an export are always dynamic.
All this "nudge" stuff makes me itch, though. I have about ε sympathy for enforcing/encouraging aesthetics and styles. That ain't JavaScript's way, and I sure don't trust TC39 (i.e., me) to take on a paternalistic role.
Chatted with James and realized I wasn't clear here.
What I mean is, I don't believe that we should be making decisions based on how we will cause people to have good (or our preferred) style. I don't think we know how to make those kinds of predictions, and at least in this particular issue I'm skeptical that it actually works. Regardless of cowpaths, we don't exactly have empirical data on whether any particular aspect of the design influences people's design of their libraries. I'm not comfortable with a decision resting on that.
Instead, I've found the most persuasive argument so far to be that a module that encapsulates a single abstraction -- which is a common phenomenon in basically every language ever -- typically wants to name the abstraction and the module with the same name, and you'd like to be able to elide the two somehow.
OTOH, export = still doesn't provide all the conveniences that you get from Node. In particular, contrast:
var x = require('quux').foo().bar().baz().yippee();
with:
import 'quux' as quux;
var x = quux.foo().bar().baz().yippee();
This is why Isaac wanted an expression form, like:
var x = (import 'quux').foo().bar().baz().yippee();
I'm warm to the expression form. It's completely compatible with declarative exports; (import 'quux') is simply a shorthand for a declarative import-as followed by a reference to the module name. It does have a cost for compilation time, though, since the compiler can't just scan the AST shallowly for imports at top-level.
In a thread you may not have caught up on, Andreas did argue for a special form such as
module foo at "foo";
for anonymous import, so that the system can check that "foo" indeed does
export = ...
and throw otherwise. Sorry if you did see this and reply (in which case I missed the reply!). If not, whaddya think?
From: es-discuss-bounces at mozilla.org [mailto:es-discuss- bounces at mozilla.org] On Behalf Of Brendan Eich Sent: Wednesday, December 19, 2012 23:11
In a thread you may not have caught up on, Andreas did argue for a special form such as
module foo at "foo";
for anonymous import, so that the system can check that "foo" indeed does
export = ...
and throw otherwise. Sorry if you did see this and reply (in which case I missed the reply!). If not, whaddya think?
IMO this is undesirable. In such a situation, modules can no longer be abstraction boundaries. Instead you must peek inside each module and see which form it exported itself using.
If we instead had
import foo from "foo";
where foo became either the module instance object (in the multi-export case) or the singly-exported value (single-export case), abstraction boundaries would be preserved much more neatly.
This goes both ways, of course. I.e., ideally, this should work too:
module "glob" {
function glob() {
}
glob.sync = function () { };
}
import { sync } from "glob";
(see npmjs.org/package/glob)
This was much of the motivation behind Yehuda's and my proposal, FWIW.
Domenic Denicola wrote:
-----Original Message----- From: es-discuss-bounces at mozilla.org [mailto:es-discuss- bounces at mozilla.org] On Behalf Of Brendan Eich Sent: Wednesday, December 19, 2012 23:11
In a thread you may not have caught up on, Andreas did argue for a special form such as
module foo at "foo";
for anonymous import, so that the system can check that "foo" indeed does
export = ...
and throw otherwise. Sorry if you did see this and reply (in which case I missed the reply!). If not, whaddya think?
[What mis-cited? gmail?]
IMO this is undesirable. In such a situation, modules can no longer be abstraction boundaries. Instead you must peek inside each module and see which form it exported itself using.
You have to know what a module exports, period. That is the abstraction boundary, the edge you must name or otherwise denote.
All Andreas is arguing for is a runtime error when you try to denote an anonymous export but the module does not match. This matters, since as Kevin and Dave just went through, and Andreas already explained, exports alias and mutation makes this observable.
Brendan Eich wrote:
exports alias and mutation makes this observable.
Er, "imports alias exports and ...."
On 19 December 2012 23:05, David Herman <dherman at mozilla.com> wrote:
On Dec 19, 2012, at 12:59 PM, Andreas Rossberg <rossberg at google.com> wrote:
It's also worth noting that Dave's comparison is somewhat inaccurate. The convention is used to name the primary abstract type defined by a module, not the only export
That doesn't disagree with what I said. I don't really get the obsession with "just one value" either (it's some pretty dubious sophistry, IMO). I think the key is when you have a module that provides a primary abstraction. That's what I said in the meeting.
Yes, but unless it is the only export, you cannot make it anonymous anyway. That is, even if such an anonymous export feature existed in ML, it would not be applicable to the case where the "t" convention is used. (Which is part of the reason why I consider anonymous export very much a corner case feature.)
I'd also like to note that the main motivation for the convention in Ocaml (instead of just giving the type a proper name -- which, btw, is what Standard ML prefers) is to ease the use of modules as arguments to other, parameterised modules (a.k.a. functors). Such a feature does not even exist in ES6, so in my mind, the analogy isn't really all that relevant.
In ML that can take the form of a type; in JS it can take the form of a
constructor, class, and/or function. The concept you end up reaching for is unifying the idea of the module and the abstraction itself. That's what you're doing with .t in ML and that's what's going on in JS with jQuery, node-optimist, etc etc.
I think I disagree that that's an accurate description of what's going on in ML. ;)
More importantly, though, convention is one thing, baking it into the language another. I've become deeply skeptical of shoe-horning orthogonal concerns into one "unified" concept at the language level. IME, that approach invariably leads to baroque, kitchen sink style language constructs that yet scale poorly to the general use case. (The typical notion of a class in mainstream OO languages is a perfect example.)
One of the nicer aspects of pre-ES6 JavaScript is that it doesn't have too much of that sort of featurism.
On 20 December 2012 05:24, Brendan Eich <brendan at mozilla.com> wrote:
Domenic Denicola wrote:
IMO this is undesirable. In such a situation, modules can no longer be
abstraction boundaries. Instead you must peek inside each module and see which form it exported itself using.
You have to know what a module exports, period. That is the abstraction boundary, the edge you must name or otherwise denote.
All Andreas is arguing for is a runtime error when you try to denote an anonymous export but the module does not match.
A static error, actually.
On Thu, Dec 20, 2012 at 5:54 AM, Andreas Rossberg <rossberg at google.com> wrote:
On 20 December 2012 05:24, Brendan Eich <brendan at mozilla.com> wrote:
Domenic Denicola wrote:
IMO this is undesirable. In such a situation, modules can no longer be abstraction boundaries. Instead you must peek inside each module and see which form it exported itself using.
You have to know what a module exports, period. That is the abstraction boundary, the edge you must name or otherwise denote.
All Andreas is arguing for is a runtime error when you try to denote an anonymous export but the module does not match.
A static error, actually.
While I sympathize with this desire, there are definite drawbacks, which is why we haven't done this so far.
We want to support both a syntax for 'import a module, and bind a particular identifier to the single anonymous export' and a syntax for 'import a module, and bind an identifier to the module instance object'. We could make these different syntaxes, but then (a) we need two similar syntaxes, which will confuse people when they use the wrong one and it doesn't work, and (b) you can't switch the implementation of a module from 'single export' to 'multiple export' without breaking clients.
The latter scenario isn't important for the 'module exports a single function which is identified with the module' case, but it is important for gradual migration to ES6. It's much easier to convert an ES5 library that attaches a single value to the global object to a single-export module by wrapping it with some boilerplate, or configuring a loader hook to do such wrapping, than it is to do a fundamental conversion to use ES6 features. Therefore, I imagine that existing libraries will go through a period where they're usable via the module system as single-exports modules exporting their current object. Later, we might like to convert these modules more fully, and that should be possible without breaking clients.
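A sketch of the breakage Sam describes in point (b), written in the proposed (and at this point hypothetical) syntax, assuming distinct single-import and multi-import forms existed:

```
// lib.js, version 1 -- single anonymous export
export = function lib() { /* ... */ };

// client.js, using a hypothetical dedicated single-import form
import lib from "lib";

// lib.js, version 2 -- converted to multiple named exports
export function lib() { /* ... */ }
export function helper() { /* ... */ }

// Under distinct syntaxes, the client's single-import line would now be an
// error, so every client must be edited just because the library grew a
// second export.
```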
The latter scenario isn't important for the 'module exports a single function which is identified with the module' case, but it is important for gradual migration to ES6. It's much easier to convert an ES5 library that attaches a single value to the global object to a single-export module by wrapping it with some boilerplate, or configuring a loader hook to do such wrapping, than it is to do a fundamental conversion to use ES6 features. Therefore, I imagine that existing libraries will go through a period where they're usable via the module system as single-exports modules exporting their current object. Later, we might like to convert these modules more fully, and that should be possible without breaking clients.
At first I thought so too.
This is exactly the use case that my OP addresses. The logic goes like this: in order to apply that boilerplate, you have to know whether the module is ES5 or ES6. In order to know that, you have to parse it. Pre-parsing every single module is not practical for a production system. Therefore applying such boilerplate is not practical for a production system.
No - the solution for Node WRT ES6 modules, in my mind, is to "pull off the bandaid". The solution should not be to make compromises on the module design side.
On Thu, Dec 20, 2012 at 11:22 AM, Kevin Smith <khs4473 at gmail.com> wrote:
The latter scenario isn't important for the 'module exports a single function which is identified with the module' case, but it is important for gradual migration to ES6. It's much easier to convert an ES5 library that attaches a single value to the global object to a single-export module by wrapping it with some boilerplate, or configuring a loader hook to do such wrapping, than it is to do a fundamental conversion to use ES6 features. Therefore, I imagine that existing libraries will go through a period where they're usable via the module system as single-exports modules exporting their current object. Later, we might like to convert these modules more fully, and that should be possible without breaking clients.
At first I thought so too.
This is exactly the use case that my OP addresses. The logic goes like this: in order to apply that boilerplate, you have to know whether the module is ES5 or ES6. In order to know that, you have to parse it. Pre-parsing every single module is not practical for a production system. Therefore applying such boilerplate is not practical for a production system.
I don't think this is right. Certainly if you want to take arbitrary code, which might be an ES6 module or an ES5 library, and wrap it in a module only if needed, then parsing is required. However:
(a) Parsing is fine in a production build system.
(b) Not every use case has to handle both input cases. For example, a tool to convert an ES5 library to an ES6 module, which might be run as part of a module loader hook, should just assume that its input is written in ES5. The same goes for ahead-of-time conversion tools.
No - the solution for Node WRT ES6 modules, in my mind, is to "pull off the bandaid". The solution should not be to make compromises on the module design side.
Whether to have anonymous exports is not about making a compromise in the design for compatibility (modulo the syntax issue discussed above). This is an idiom that's widely used in JS today, and we're not trying to tell people what style they should use in the future.
On 20 December 2012 14:17, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
We want to support both a syntax for 'import a module, and bind a particular identifier to the single anonymous export' and a syntax for 'import a module, and bind an identifier to the module instance object'. We could make these different syntaxes, but then (a) we need two similar syntaxes, which will confuse people when they use the wrong one and it doesn't work, and (b) you can't switch the implementation of a module from 'single export' to 'multiple export' without breaking clients.
Argument (a) does not convince me for two reasons. First, it very much sounds like an argument for premature dumbdownification. Second, and more importantly, I don't even believe the premise, namely that the potential for confusion is greater than with overloading one syntax with two subtly different meanings.
If you want to avoid confusion, don't introduce anonymous exports in the first place. ;) Seriously, no matter what syntax we pick for anonymous imports, I'm sure that any confusion that ensues will be dwarfed by the question why an export like
export = {a: ..., b: ..., c: ...}
cannot be imported with
import {a, b, c} from "..."
whereas it works for
export {a: ..., b: ..., c: ...}
Would you risk a bet against this ending up among the Top 3 of ES module WTFs? :)
Your point (b) is more interesting, at least in terms of a transition path like you describe. But do we have any kind of evidence that such an intermediate point on a transition path is particularly useful? And that it will actually be relevant and/or workable for a significant number of library implementers? Unless there is strong evidence, I'd be reluctant to put some confusing hack into the language, eternally, that is only potentially relevant for a limited time for a limited number of people.
I concur with Kevin's analysis that the emergence of singleton exports in home-brewed JS module systems was a means rather than an end. Is there even a single example out there of a language-level module system that has something similar?
Andreas Rossberg wrote:
More importantly, though, convention is one thing, baking it into the language another. I've become deeply skeptical of shoe-horning orthogonal concerns into one "unified" concept at the language level. IME, that approach invariably leads to baroque, kitchen sink style language constructs that yet scale poorly to the general use case. (The typical notion of a class in mainstream OO languages is a perfect example.)
That's a good concern, but not absolute. How do you deal with the counterargument that, without macros, the overhead of users having to glue together the orthogonal concerns into a compound cliché is too high and too error-prone?
One of the nicer aspects of pre-ES6 JavaScript is that it doesn't have too much of that sort of featurism.
So people keep telling me. Yet I see ongoing costs from all the module-pattern, power-constructor-pattern, closure-pattern lack of learning, slow learning, mis-learning, fetishization, and bug-habitat surface area.
On Thu, Dec 20, 2012 at 8:22 AM, Kevin Smith <khs4473 at gmail.com> wrote:
This is exactly the use case that my OP addresses. The logic goes like this: in order to apply that boilerplate, you have to know whether the module is ES5 or ES6. In order to know that, you have to parse it. Pre-parsing every single module is not practical for a production system. Therefore applying such boilerplate is not practical for a production system.
That was not my impression of how backcompat would be done. I was under the impression it would be more like this:
- The module loader API exposes a "runtime" API that is not new syntax, just an API. From some earlier Module Loader API drafts, I thought it was something like System.get() to get a dependency, System.set() to set the value that will be used as the export.
- Base libraries that need to live in current ES and ES.next worlds (jquery, underscore, backbone, etc…) would not use the ES.next module syntax, but feature detect the System API and call it to participate in an ES.next module scenario, similar to how a module today detects if it wants to register for node, AMD or browser globals:
umdjs/umd/blob/master/returnExportsGlobal.js
- Modules using the ES.next module syntax will most likely be confined to "app logic" at first, because not all browsers will have ES.next capabilities right away, and only apps that can restrict themselves to ES.next browsers will use the module syntax. Everything else will use the runtime API.
Otherwise, forcing existing libraries that need to run in non-ES.next browsers to provide an "ES.next" copy of their library that uses the new JS module syntax is effectively creating a "2JS" system, and if that is going to happen, we might as well make more backwards-incompatible changes for ES.next. Previous discussion on this list seems to indicate a desire to stick with 1JS.
For using ES5 libraries that do not call the ES Module Loader runtime API, a "shim" declarative config could be supported by the ES Module Loader API, similar to the one in use by AMD loaders:
requirejs.org/docs/api.html#config-shim
This allows the end developer to consume the old code in a modular fashion, and the parsing is done by the ES Module Loader, not userland JS.
So, there is not a case where someone would ship a module loader that does full JS parsing to detect new module syntax, except for experimental purposes, or a loader used only in development, followed by a build step that translates module syntax to the runtime API forms so the result can run in ES.next browsers or in ES5 browsers with an API shim.
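To make the feature-detection approach concrete, here is a minimal editorial sketch. All names are hypothetical, and the System object is a stand-in stub defined inline, since the real Loader API was still in draft at the time:

```javascript
// Stand-in registry so the sketch is self-contained; a real ES.next
// loader would provide System itself.
var System = {
  _modules: {},
  set: function (name, value) { this._modules[name] = value; },
  get: function (name) { return this._modules[name]; }
};

// A library wrapper in the UMD spirit: prefer the ES.next runtime API
// when it is present, otherwise fall back to a browser global.
(function (global, factory) {
  if (typeof System !== 'undefined' && typeof System.set === 'function') {
    System.set('mylib', factory());   // participate as an ES.next module
  } else {
    global.mylib = factory();         // old-style browser global
  }
})(this, function () {
  return { answer: 42 };
});
```

A real UMD wrapper would also branch for AMD's define() and Node's module.exports; those branches are omitted here for brevity.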
No - the solution for Node WRT ES6 modules, in my mind, is to "pull off the bandaid". The solution should not be to make compromises on the module design side.
With the runtime System API, Node can adapt its module system to use the ES.next Module Loader API hooks for resolve/fetch, and hopefully there is a way to register a require function for each module that calls System.get() underneath, with assignment to module.exports translating to a System.set() call.
However, the ES.next Module Loader API is doing the actual parsing of the file, scanning for ES.next module syntax, so Node itself does not need to deliver an in-JS parser.
Maybe instead of System.set() (though I would like to see this in addition to it) there is a System.exports, like the CommonJS exports, that would allow avoiding the "exports assignment" pattern for modules that want to do that.
Summary:
If all of the above holds true (getting clarification on the Module Loader API is needed), then I do not believe the original post about parsing of old and new code is a strong case for avoiding export assignment.
James
On Thu, Dec 20, 2012 at 2:44 PM, James Burke <jrburke at gmail.com> wrote:
If all of the above holds true (getting clarification on the Module Loader API is needed),
Just to clarify about the Module Loader API:
- System.set/System.get are very much still a part of the design. In fact, the loader design has changed very little recently, despite the revisions we recently presented to the static behavior of modules.
- I agree with James that loader hooks/feature detection/System.set are the right way to handle legacy compatibility.
- I don't see what a mutable exports object would add on top of this system, but maybe I'm not understanding what you're saying.
Finally, even though the syntax is export = expression, this is semantically not an assignment, and there's no "export object" being changed when things are written this way.
On Thu, Dec 20, 2012 at 11:51 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
- I don't see what a mutable exports object would add on top of this system, but maybe I'm not understanding what you're saying.
It is one way to allow circular dependencies in CommonJS/Node/AMD systems. The other way is to call require() at runtime to get the cached module value at the time of actual use. Some examples here:
requirejs.org/docs/api.html#circular
James
On 20 December 2012 19:39, Brendan Eich <brendan at mozilla.com> wrote:
Andreas Rossberg wrote:
More importantly, though, convention is one thing, baking it into the language another. I've become deeply skeptical of shoe-horning orthogonal concerns into one "unified" concept at the language level. IME, that approach invariably leads to baroque, kitchen sink style language constructs that yet scale poorly to the general use case. (The typical notion of a class in mainstream OO languages is a perfect example.)
That's a good concern, but not absolute. How do you deal with the counterargument that, without macros, the overhead of users having to glue together the orthogonal concerns into a compound cliché is too high and too error-prone?
One of the nicer aspects of pre-ES6 JavaScript is that it doesn't have too much of that sort of featurism.
So people keep telling me. Yet I see ongoing costs from all the module-pattern, power-constructor-pattern, closure-pattern lack of learning, slow learning, mis-learning, fetishization, and bug-habitat surface area.
Sorry, what I wrote may have been a bit unclear. I didn't try to argue against features in general. I agree that it is important to grow a language where the need arises. What I argued against was the particular approach of accumulating all sorts of ad hoc features and extensions in one monolithic language concept.
Andreas Rossberg wrote:
Sorry, what I wrote may have been a bit unclear. I didn't try to argue against features in general. I agree that it is important to grow a language where the need arises. What I argued against was the particular approach of accumulating all sorts of ad hoc features and extensions in one monolithic language concept.
Yes, we have too much of that in how multifarious and compound JS's function is already. People use it for constructors, procedures, functions, closures, modules, statics, and more. You could say this is all fine (I don't object in general!) but it is an exercise in pattern-building, of necessity. And frequently used, verbose and error-prone patterns are definitely feature requests.
So the particular approach -- in particular -- that you are questioning is adding export = to ES6 modules. I agree it is ad-hoc. It also seems likely to confuse, compared to the self-hosted NPM precedent. It's one of those almost-but-not-quite-the-same things where the differences seem likely (to me at any rate) to bite back. We could defer it with a strawman that implementors could agree on as an experimental extension, to prove or disprove the idea. That seems better for ES6 and Harmony.
On Dec 20, 2012, at 1:29 PM, Brendan Eich <brendan at mozilla.com> wrote:
So the particular approach -- in particular -- that you are questioning is adding export = to ES6 modules. I agree it is ad-hoc. It also seems likely to confuse, compared to the self-hosted NPM precedent. It's one of those almost-but-not-quite-the-same things where the differences seem likely (to me at any rate) to bite back.
I think the complaint about the syntactic similarity and semantic difference between export = { ... } and export { ... } is a very strong point, even though I don't believe it's Andreas's primary objection. ;-)
We could defer it with a strawman that implementors could agree on as an experimental extension, to prove or disprove the idea. That seems better for ES6 and Harmony.
I'm sympathetic. After all, leaving it out doesn't prevent the "just one thing" style, it just requires people to name their one thing. It means that the code examples at substack/node-optimist#plus-optimist-comes-with-usage-and-demand go from this:
var argv = require('optimist')
.usage('Usage: $0 -x [num] -y [num]')
.demand(['x','y'])
.argv;
to this:
import optimist from 'optimist';
var argv = optimist
.usage('Usage: $0 -x [num] -y [num]')
.demand(['x','y'])
.argv;
I'm frankly just not convinced that it's important enough to solve, especially given the real issues Andreas has raised, and the controversy and bikeshedding it engenders. Maybe we just chalk this up as one point on which Node's dynamic, synchronous module loading wins some convenience over ES6 modules. I can live with that.
I'm going to shift my focus away from this conversation towards more work on fleshing out the details of the compilation, loading, and linking semantics. I'll ping the list when I've got results to show on the wiki.
On Dec 20, 2012, at 19:02, "David Herman" <dherman at mozilla.com> wrote:
On Dec 20, 2012, at 1:29 PM, Brendan Eich <brendan at mozilla.com> wrote:
So the particular approach -- in particular -- that you are questioning is adding export = to ES6 modules. I agree it is ad-hoc. It also seems likely to confuse, compared to the self-hosted NPM precedent. It's one of those almost-but-not-quite-the-same things where the differences seem likely (to me at any rate) to bite back.
I think the complaint about the syntactic similarity and semantic difference between export = { ... } and export { ... } is a very strong point, even though I don't believe it's Andreas's primary objection. ;-)
I'd be a fan of coloring the bikeshed export only { … }
The module loader API exposes a "runtime" API that is not new syntax, just an API. From some earlier Module Loader API drafts, I thought it was something like System.get() to get a dependency, System.set() to set the value that will be used as the export.
Base libraries that need to live in current ES and ES.next worlds (jquery, underscore, backbone, etc…) would not use the ES.next module syntax, but feature detect the System API and call it to participate in an ES.next module scenario, similar to how a module today detects if it wants to register for node, AMD or browser globals:
There is a slightly annoying mismatch here, though: ES6 modules are compile-time constructs, so jquery et al. cannot completely integrate by using ES6 runtime APIs. If code depends on jquery, then jquery will need to be loaded explicitly, by hand, before dependency resolution for the caller starts (i.e., via a separate script element), even if jquery starts to use System.set to register itself.
This wasn't an issue with ES5 module libraries, where everything was runtime and nothing was checked - you could have dependencies that registered themselves (or were registered by shims) on load.
One might be able to have a special-purpose loader, though, which knows about jquery and handles it in its resolve/load hooks, similar to config shim?
- Modules using the ES.next module syntax will most likely be contained to "app logic" at first because not all browsers will have ES.next capabilities right away, and only apps that can restrict themselves to ES.next browsers will use the module syntax. Everything else will use the runtime API.
I'd prefer to use transpilers, mapping new syntax to runtime constructs in old engines. That way, all newly-written code can use the same, new syntax, but the compile-time checking advantages only come into play when the transpilation step is removed, and ES6 engines are used.
We are now in the odd situation that there is a user base for ES6 modules in TypeScript, but since the ES6 module spec is still in progress, TS has a mix of partially-implemented old spec and not-yet-implemented new spec.
The idea is to use modern module syntax, and transpile to AMD or CommonJS or ES6, as needed. Currently, TS coders try out external modules, find them cumbersome, and fall back to reference paths and internal modules (which translates to includes+iifes), but that is merely a result of the current spec and implementation state.
For using ES5 libraries that do not call the ES Module Loader runtime API, a "shim" declarative config could be supported by the ES Module Loader API, similar to the one in use by AMD loaders:
requirejs.org/docs/api.html#config-shim
this allows the end developer to consume the old code in a modular fashion, and the parsing is done by the ES Module Loader, not userland JS.
I'd very much like to see a config-shim-look-alike implemented in terms of the updated ES6 modules spec, just to be sure it is possible. This is important enough that it should be part of the ES6 modules test suite.
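The data side of such a shim is simple enough to sketch; the property names below are borrowed from the RequireJS shim config, and the loader integration is exactly the part that needs the finished spec:

```javascript
// Declarative shim entry: what the legacy script depends on, and which
// global it leaves behind once executed.
var shimConfig = {
  backbone: { deps: ['underscore', 'jquery'], exports: 'Backbone' }
};

// After the loader has run the legacy script (dependencies first), it
// picks up the declared global as the module's export value.
function shimToModuleValue(shimEntry, globalObj) {
  return globalObj[shimEntry.exports];
}

// Simulated here: pretend backbone.js just executed and set its global.
var fakeGlobal = { Backbone: { VERSION: '0.9.9' } };
var backbone = shimToModuleValue(shimConfig.backbone, fakeGlobal);
```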
Claus
At first glance, it seems like anonymous exports might provide a way for pre-ES6 (read: Node) modules and ES6 modules to coexist. After all:

exports = function A() {};

just looks so much like:

module.exports = function A() {};

But is that the case? Does this oddball syntax actually help?
My conclusion is that it does not, unless the loading environment is willing to statically analyze every single module it wishes to load. Moreover, the desired interop is not even possible without performing static analysis.
There are two directions that an interop strategy will have to deal with. First, we might want an ES6 module to be loaded by a pre-ES6 module:
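The code snippet here was lost from the archive; based on the file names in the following paragraph, it was presumably along these lines:

```javascript
// es5-module.js (pre-ES6): synchronously pulls in the upgraded dependency
var A = require('es6-module.js');
```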
We might want to use this when a dependency is upgraded to ES6 modules and we want to leave the dependent alone. Now, since ES6 modules are asynchronous, and require is synchronous, we must load "es6-module.js" before "es5-module.js" is executed. The only way to do that is to statically analyze "es5-module.js", searching for calls to require.
However, since require allows an arbitrary expression argument, there are many cases in Node where this static analysis will fail.
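For instance (an illustrative case, not from the original post), nothing in a call like this names a module statically:

```javascript
// The argument is computed at runtime, so a scan of the source for
// require(...) calls cannot tell which file to load in advance.
var path = debugMode ? 'es6-module.dev.js' : 'es6-module.js';
var A = require(path);
```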
What about the other direction? Let's say that we want to load an ES5 module from an ES6 module:
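The snippet here was also lost; presumably an ES6 import of the legacy file, something like:

```javascript
// es6-module.js
import { A } from "es5-module.js";
```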
Let's say that the ES5 module looks like this:
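Presumably a conventional CommonJS module along these lines (the missing snippet is reconstructed, not original):

```javascript
// es5-module.js
exports.A = function A() {};
```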
We could dynamically add the following text to the end of "es5-module.js":
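The appended text would have been something like:

```javascript
export var A = exports.A;
```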
And thereby export the necessary binding. But if we use such a trick on an ES6 module, we could run into problems:
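A reconstruction of the problem case: the file turns out to already be an ES6 module, so the blindly appended line collides with it:

```javascript
// "es5-module.js" that is actually ES6 already:
export function A() {}

// ...plus the appended line:
export var A = exports.A;  // duplicate export of A, and no `exports` binding
```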
This would presumably result in an error! The only way to avoid such problems (without resorting to something like "package language version flags") is to statically analyze "es5-module.js" and only apply the trick if ES6 module declarations are not found.
So interop implies static analysis. And since parsing and analyzing JavaScript, in JavaScript, for every single loaded module would be quite a performance hit, I think such a strategy is infeasible.
So where does that leave anonymous exports? My personal opinion is "nowhere".