ES Modules: suggestions for improvement
Also relevant to this thread, post on the same topic by Isaacs (node.js lead) : blog.izs.me/post/25906678790/on-es-6-modules
David
On 26/06/2012 06:24, James Burke wrote:
As I understand it, two issues drive the need for standardization of modules:
- we want one environment for all JS,
- to move beyond the limitations of RequireJS and CommonJS requires parsing, and that is considered too expensive for library implementations.
The first point is obvious; the second one is implicit in the blog posts by Isaac and James.
In other words, I gather that Isaac, James and others believe that there exists a parser based dependency analysis solution that does not require the significant new ES harmony syntax. Since we have Traceur, esprima-harmony and similar transpilers, we can try ES-Harmony vs TBD-better-option. But we need said option to show up.
jjb
On Tue, Jun 26, 2012 at 10:12 AM, John J Barton <johnjbarton at johnjbarton.com> wrote:
As I understand it, two issues drive the need for standardization of modules: 1) we want one environment for all JS, 2) to move beyond the limitations of RequireJS and CommonJS requires parsing, and that is considered too expensive for library implementations. The first point is obvious, the second one is implicit in the blog posts by Isaac and James.
Another point that I believe Isaac is making is that too much syntax is likely to confuse developers, and that allowing certain features, such as nested modules or import *, can be harmful to programmer efficiency in the long term, if used.
For the purpose of discussion, I made a gist [0] of Isaac's proposal. Most of the module examples [1] are there.
[0] gist.github.com/2997742 [1] harmony:modules_examples
Isaac, feel free to correct it.
On 26/06/2012 16:44, David Bruant wrote:
Also relevant to this thread, post on the same topic by Isaacs (node.js lead): blog.izs.me/post/25906678790/on-es-6-modules
"Furthermore, |let| already gives us destructuring assignment. If a module exports a bunch of items, and we want several of them, then do |var {x,y,z} = import 'foo'| or some such."
=> Excellent idea. That combined with the single export idea reduces the amount of new syntax to introduce.
The Module proposal has a local renaming feature which I think should be kept.
Initial proposal:
import { draw: drawShape } from shape;
import { draw: drawGun } from cowboy;
Could become:
let {draw: drawShape} = import './shape.js'
let {draw: drawGun} = import './cowboy.js'
I would actually reverse the order:
let {drawShape: draw} = import './shape.js'
let {drawGun: draw} = import './cowboy.js'
But that's a matter of taste.
By the way, the local renaming for destructuring is relevant regardless of modules.
function f(point1, point2){
let {x1:x, y1:y} = point1,
{x2:x, y2:y} = point2;
// ...
}
I'd like to respond to this post's proposal as it brings up some interesting points, but also raises some questions:
- Loader.define(<path>, <program text>) defines a module at the specified <path>, with the <program text> contents. That <program text> is statically analyzed for any import statements.
=> I don't understand this part. Why would you need to define a module at a specified path? Either there is a JS file at this path already or there is none, no?
- Whenever an import <path> is encountered in <program text> then the Loader.resolve(requestPath, callerPath, callback) is called. This method should return a fully qualified path. If this method returns boolean true, then it will not be considered resolved until the callback is called. (The argument to the callback is the string path.) If it does not return true, and does not return a string path, then this is an error, and throws.
=> If syntax calls the Loader.resolve method, I don't understand who sets the callback. Regardless, Loader.resolve(requestPath, callerPath) seems like it could be a synchronous operation without too big of a performance penalty. I disagree with the idea of calling the dynamic value of Loader.resolve on syntax. We have seen that this resulted in potential attacks (like when the dynamic Array constructor was used in Chrome's JSON.parse).
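(For concreteness, here is a rough sketch of what a resolve hook could look like under the API described above. The contract, return a string path synchronously or return true and invoke the callback later, is taken from the quoted text; the relative-path handling and the lookupInRegistry helper are invented for illustration.)
// Hypothetical sketch only, not from Isaac's post:
Loader.resolve = function (requestPath, callerPath, callback) {
  if (requestPath.charAt(0) === '.') {
    // Relative request: resolve synchronously against the caller's directory
    // and return the fully qualified path as a string.
    var base = callerPath.replace(/[^\/]*$/, '');
    return base + requestPath.replace(/^\.\//, '');
  }
  // Bare name: look it up asynchronously (lookupInRegistry is imaginary),
  // return true to signal "pending", and hand the resolved path to callback.
  lookupInRegistry(requestPath, function (fullPath) { callback(fullPath); });
  return true;
};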
"For security, the Loader object could be frozen with |Object.freeze| to prevent additional changes." => This is not enough. People shouldn't have to opt-in for security,
mostly because they don't do it. I woud call for security by default here and having "import <path>" call the built-in Loader.resolve instead
of the dynamic one. If people want to override the Loader API, they would have to forget about syntax. Or a new syntax could be introduced, making clear that it's dangerous. Maybe something like "importDyn".
- Once a module is resolved to a full path string, then Loader.load(fullPath, callback) is called. callback should not be called until Loader.define(fullPath, contents) is called. This should be called at most once for any given fullPath. (Is the callback even necessary? Why not just wait for Loader.define and throw any errors encountered?)
=> I guess I understand Loader.define better now, but I intuit that the API could be reworked to remove it.
- The Loader.main(fullPath) method executes the module referenced by fullPath (which must have already been defined), as well as evaluating each of the modules that it imports.
=> It seems that the method should be called Loader.load here, but that's a nitpick.
- Within a module, the export <expression> statement marks the result of <expression> as the exported value from the module. There can be at most one export statement in a module, and the exported expression is the module's export. To export more than one thing, export an object with more than one thing on it.
=> This and destructuring. I love the combination.
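(A minimal sketch of that combination, using Isaac's single-export form and destructuring on the import side; the file names and the exact import spelling are assumptions, not anything specified in the post.)
// shape.js: the single export is an ordinary object grouping the API
export {
  draw: function (ctx) { /* ... */ },
  area: function (r) { return Math.PI * r * r; }
};

// client.js: pick out only what you need with plain destructuring
let { draw, area } = import './shape.js';
// local renaming falls out for free: let { draw: drawShape } = import './shape.js';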
Modules export a single value. Exporting a second time throws.
=> This can even be made a parse-time error.
Maybe this is not a valid cause for syntax addition. I'm not sure.
There are hairy problems around cyclic dependencies, so it's worth at least having the option to address them with static magic that has not yet been fully imagined.
- The global object within a module context is equivalent to Object.create(<global>) from the main global context. (The important thing is that leaks aren't leaky outside the module, but, for example, x instanceof Error still works, because it uses the same Error function.)
9) If a module does not contain an export statement, then its global object is its export. This is to provide support for legacy modules that create a global object (such as jQuery) rather than using an export statement. (Too magical? Probably. Also, having exports inherit from global is weird. Is there a simpler way to make existing libs play nicely with this approach?)
=> Making the global scope of a file the implicit export object sounds like an excellent idea. I concur that Object.create(<global>) may not be the best idea. Recently, Allen introduced different ideas to deal with the global environment [1]. Maybe there are things to leverage here. Still, not having syntactical "export" statements requires execution to know what is being exported and may delay error detection. Maybe it's a good thing to have both: either benefit from some early-error mechanism or painlessly leverage existing code.
David
The linked blog post is a very rough cut of where my thoughts are on the subject. Expect changes and cleanup. It does not represent a fully-baked (or even half-baked) idea, but more like a general direction.
I expect to clean it up and propose something at least half-baked to this list soon, incorporating some of the feedback that I've gotten from that blog post.
On Tue, Jun 26, 2012 at 11:34 AM, Thaddee Tyl <thaddee.tyl at gmail.com> wrote:
Another point that I believe Isaac is making is that too much syntax is likely to confuse developers
Developers are very good at getting un-confused by new syntax, and newbies are very good at becoming less new. That's not much of a hazard.
The bigger hazard is that we can't remove the syntax we add, and historically, humans don't have a perfect track record at anticipating consequences, so we should try to reduce additions to the smallest set possible to deliver the most important functionality. In other words, if some % of what we do is a mistake, we halve our mistakes by doing half as much. If we can focus what we do on the things that are very essential to what we need, we can probably beat those odds :)
and allowing certain features, such as nested modules or import *, can be harmful to programmer efficiency in the long term, if used.
Bingo. Favoring the exports object instead of module.exports was a mistake. Implementing import * in Python and Java was a mistake. Copying existing successful systems is good, but we should avoid copying their mistakes if possible.
Isaac, feel free to correct it.
Will do. Probably not until after NodeConf early next week.
I share some of your concerns as well. I like the idea of "import" just returning an object, which can be destructured using let. I also like the idea of eliminating the "import *" syntax. However, I think that dynamic exports ("export <expression>") might not be as useful as it seems.
In my modules, I use the "export <expression>" form for the following reasons:
- When I want to export a single function (perhaps a constructor), and I don't want importers to unnecessarily repeat the function name:
var MyClass = require("MyClass").MyClass; // Boo!
var MyClass = require("MyClass"); // Better!
- When I want to rename an export:
function shortName() { ... }
module.exports = { longName: shortName };
- When I want to group together the exported API, instead of having it spread across the file:
function A() { ... }
function B() { ... }
function C() { ... }
module.exports = { A: A, B: B, C: C };
For case 1, destructuring allows us to eliminate the repetition:
let { MyClass } = import "MyClass.js";
A static multiple export syntax ("export { ... }") would work just fine for cases 2 and 3:
export {
longName: shortName,
A,
B,
C
};
Are there any other cases where dynamic exports are useful?
On 26 June 2012 16:45, Kevin Smith <khs4473 at gmail.com> wrote:
Hi Isaac,
I share some of your concerns as well. I like the idea of "import" just returning an object, which can be destructured using let. I also like the idea of eliminating the "import *" syntax. However, I think that dynamic exports ("export <expression>") might not be as useful as it seems.
In my modules, I use the "export <expression>" form for the following reasons:
When I want to export a single function (perhaps a constructor), and I don't want importers to unnecessarily repeat the function name:
var MyClass = require("MyClass").MyClass; // Boo! var MyClass = require("MyClass"); // Better!
var { MyClass } = require("MyClass"); // Best!
Best of both worlds! My code is full of this.
On 26/06/2012 20:54, Isaac Schlueter wrote:
The linked blog post is a very rough cut of where my thoughts are on the subject. Expect changes and cleanup. It does not represent a fully-baked (or even half-baked) idea, but more like a general direction.
I expect to clean it up and propose something at least half-baked to this list soon, incorporating some of the feedback that I've gotten from that blog post. ...
If we can focus what we do on the things that are very essential to what we need, we can probably beat those odds :)
Regarding modules, I don't know right now what would be best in terms of syntax.
Node.js's way is good, except the "transitive dependency issue" mentioned in your post, which in some cases can indeed cause problems.
I had a hard time getting used to this CommonJS/Node.js way of separating modules so that they cannot interact with each other, but now I don't find it bad (it's even good).
What I find bad (1) is the need for VMs. Take Node.js's: it calls C++ code, which itself calls JS code, and in the end things come back to JS (with some imperfections, like Node.js's VM not binding things correctly in some cases).
And what I find bad (2) is that the fact that a module could be normal web JS code (i.e. not a module; the web is composed of JS code, not modules) seems to be minimized. And (3), why should we continue to load cross-domain scripts via the <script> tag, using onload to get the result (normal browsers) or onreadystatechange (abnormal browsers), and then process it via global variables? Using XHR for example (var code = xhr_result(xxx); eval(code)) breaks the same-origin policy, but it's already broken by the capability of inserting scripts (so I am not sure about your proposal "In Web Browsers" with <script>).
And what I find bad (4) is the impossibility of wrapping things as I describe here: gist.github.com/2995641 (maybe impossible, but at least it shows the idea, and the need), instead of being forced to transform JS code into modules and play with globals, bindings, and clone stuff.
On 26 June 2012 18:36, Aymeric Vitte <vitteaymeric at gmail.com> wrote:
Node.js's way is good, except the "transitive dependency issue" mentioned in your post which in some cases indeed can cause problems.
Does Node not handle transitive dependencies per CommonJS Modules/1.0?
What I find bad (1) is the need of VMs, let's take node.js's one, it's calling c++ stuff, calling itself js's stuff, and at the end things are coming back to js (with some imperfections like node.js's VM not binding things correctly in some cases)
Can you explain this in more detail? I don't really understand what you're getting at.
On Tue, Jun 26, 2012 at 4:48 PM, Wes Garland <wes at page.ca> wrote:
On 26 June 2012 18:36, Aymeric Vitte <vitteaymeric at gmail.com> wrote:
Node.js's way is good, except the "transitive dependency issue" mentioned in your post which in some cases indeed can cause problems. Does Node not handle transitive dependencies per CommonJS Modules/1.0?
Yes, node handles transitive dependencies via unfinished objects, much like the old CommonJS Modules/1.0 style.
However, it's generally better to return a single thing from a module if possible, rather than a bunch of stuff on an object. We use module.exports to accomplish that, and it's mostly good, but it doesn't handle cycles well.
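(For readers who haven't hit it, a small sketch of the "unfinished exports" behaviour in today's require(), with invented module names: when a.js is the entry point, b.js only sees the part of a.js's exports that was filled in before the cycle was entered.)
// a.js (entry point)
exports.fromA = 'a';
var b = require('./b');   // evaluating b.js starts before a.js has finished
exports.later = b.fromB;

// b.js
var a = require('./a');   // cycle: receives a's unfinished exports object
console.log(a.fromA);     // 'a', already set when the cycle was entered
console.log(a.later);     // undefined, not set yet
exports.fromB = 'b';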
David Bruant wrote:
On 26/06/2012 16:44, David Bruant wrote:
Also relevant to this thread, post on the same topic by Isaacs (node.js lead) : blog.izs.me/post/25906678790/on-es-6-modules "Furthermore, |let| already gives us destructuring assignment. If a module exports a bunch of items, and we want several of them, then do |var {x,y,z} = import 'foo'| or some such." => Excellent idea. That combined with the single export idea reduces the amount of new syntax to introduce.
Declarations can nest under control flow constructs, but import or module dependencies must be prefetched. They're static.
if (some_rare_condition()) { let x = import "m"; ... }
either always prefetches "m", which does not say what is meant; or it nests an event loop, violating run-to-completion, which is not going to happen.
The Module proposal has a local renaming feature which I think should be kept.
Initial proposal:
import { draw: drawShape } from shape;
import { draw: drawGun } from cowboy;
Could become:
let {draw: drawShape} = import './shape.js'
let {draw: drawGun} = import './cowboy.js'
I would actually reverse the order:
let {drawShape: draw} = import './shape.js'
let {drawGun: draw} = import './cowboy.js'
But that's a matter of taste.
No, you can't so reorder; it's not a matter of taste. Destructuring is the dual (en.wikipedia.org/wiki/Dual_(category_theory)) of structuring.
On 27/06/2012 10:31, Brendan Eich wrote:
David Bruant wrote:
On 26/06/2012 16:44, David Bruant wrote:
Also relevant to this thread, post on the same topic by Isaacs (node.js lead) : blog.izs.me/post/25906678790/on-es-6-modules "Furthermore, |let| already gives us destructuring assignment. If a module exports a bunch of items, and we want several of them, then do |var {x,y,z} = import 'foo'| or some such." => Excellent idea. That combined with the single export idea reduces the amount of new syntax to introduce.
Declarations can nest under control flow constructs, but import or module dependencies must be prefetched. They're static.
if (some_rare_condition()) { let x = import "m"; ... }
either always prefetches "m", which does not say what is meant;
True. It could be considered to allow 'let x = import "m";' only at the top level. But if that's the case, having a specific lexical form makes it clearer that it's a module import and not a regular assignment.
or it nests an event loop, violating run-to-completion, which is not going to happen.
Event loops and run-to-completion aren't even part of ECMAScript, so I wouldn't allow myself to think about such a thing. Also, having played with nested event loops in Firefox chrome code, I'm not really sure they're a good idea, at least the way they are currently designed.
David Bruant wrote:
On 27/06/2012 10:31, Brendan Eich wrote:
David Bruant wrote:
On 26/06/2012 16:44, David Bruant wrote:
Also relevant to this thread, post on the same topic by Isaacs (node.js lead) : blog.izs.me/post/25906678790/on-es-6-modules "Furthermore, |let| already gives us destructuring assignment. If a module exports a bunch of items, and we want several of them, then do |var {x,y,z} = import 'foo'| or some such." => Excellent idea. That combined with the single export idea reduces the amount of new syntax to introduce.
Declarations can nest under control flow constructs, but import or module dependencies must be prefetched. They're static.
if (some_rare_condition()) { let x = import "m"; ... }
either always prefetches "m", which does not say what is meant; True. It could be considered to allow 'let x = import "m";' only at the top level. But if it's the case, having a specific lexical form makes clearer that it's a module import and not a regular assignment.
The other point people seem to miss about import as a special binding form is not just that it can be restricted grammatically to be control-insensitive by construction: it's that static export vs. import checking can be done to catch typos.
This is a significant point, but it's either missed or assumed insignificant. I think we should have a stand-up argument about it. Static module systems are static, in dependency prefetching, in binding, and in export vs. import checking.
On 27/06/2012 11:09, Brendan Eich wrote:
David Bruant wrote:
On 27/06/2012 10:31, Brendan Eich wrote:
David Bruant wrote:
On 26/06/2012 16:44, David Bruant wrote:
Also relevant to this thread, post on the same topic by Isaacs (node.js lead) : blog.izs.me/post/25906678790/on-es-6-modules "Furthermore, |let| already gives us destructuring assignment. If a module exports a bunch of items, and we want several of them, then do |var {x,y,z} = import 'foo'| or some such." => Excellent idea. That combined with the single export idea reduces the amount of new syntax to introduce.
Declarations can nest under control flow constructs, but import or module dependencies must be prefetched. They're static.
if (some_rare_condition()) { let x = import "m"; ... }
either always prefetches "m", which does not say what is meant; True. It could be considered to allow 'let x = import "m";' only at the top level. But if it's the case, having a specific lexical form makes clearer that it's a module import and not a regular assignment.
The other point people seem to miss about import as a special binding form is not just that it can be restricted grammatically to be control-insensitive by construction: it's that static export vs. import checking can be done to catch typos.
This is a significant point, but it's either missed or assumed insignificant. I think we should have a stand-up argument about it. Static module systems are static, in dependency prefetching, in binding, and in export vs. import checking.
Import checking is definitely a feature that's worth it, but Isaac's idea of being able to import jQuery (or any library, of course) as is, by having the module's global scope become the "export object" without polluting the actual global object, is definitely a win. Being able to import existing libraries as modules without changing a bit of the library, without even having to read it or worry about global leaks, is a strong win in my opinion. It's worth not having the typo check for this particular case. Import checking can still be added afterward.
David Bruant wrote:
Import checking is definitely a feature that's worth it, but Isaac's idea of being able to import jQuery (or any library of course) as is, by having the module global scope become the "export object" without polluting the actual global object, is definitely a win.
That's maybe a win, but we don't use JQuery that way today. Speculating about future usability is perilous. We'd need to implement and test, but see below for some questions to answer first.
If it's important, then people can build such a system using loaders. But it's at this point completely undemonstrated that exposing JQuery's few top-level bindings in an imported object beats (for usability, simplifying old vs. new clients, or any other measure) modifying JQuery to export those bindings and then importing what the client uses.
Being able to import existing libraries as modules without changing a bit of the library, without even having to read it or worry about global leaks is a strong win in my opinion. It's worth not having the typo check for this particular case.
Either way, there's a different client code obligation from today's pattern.
It's true you can use today's JQuery as is, but why would you use a new client API or syntax and require only new browsers or else trans-compilation? What's the benefit?
Import checking can still be added afterward.
How?
On 27/06/2012 01:48, Wes Garland wrote:
What I find bad (1) is the need of VMs, let's take node.js's one, it's calling c++ stuff, calling itself js's stuff, and at the end things are coming back to js (with some imperfections like node.js's VM not binding things correctly in some cases)
Can you explain this in more detail? I don't really understand what you're getting at .
It's a paradox for me: we should be able to handle "VMs" as something built in, without having to do plenty of clone manipulations, use fake or temporary globals, use tricks to reproduce the bindings, extract and process stuff from the code to execute, freeze things, etc. I give some examples in the gist link I have provided (node, cajaVM, shadow). In the case of node it's even stranger, since VM is a C++ module that internally uses JS. It's not a criticism, that's the way it is, but again, we should be able to handle this more simply in JS.
Regarding modules, I cannot believe I am the only one who wants to load scripts (and not modules) as I described, so I can load and execute them when and where I want. Again, there's an example in the gist; modules and scripts are complementary.
The other point people seem to miss about import as a special binding form is not just that it can be restricted grammatically to be control-insensitive by construction: it's that static export vs. import checking can be done to catch typos.
As long as the exported names are static, it's possible to catch typos using Isaac's form as well though, right?
I think a stand-up fight about this sounds wonderful.
I am not at all convinced that typo-checking is anywhere near worth the price tag, or is even a problem. Most of the alleged needs for type-checking are a bit dubious to me; that's not really what JS is all about.
It would be great for one of the static-export proponents to catalog some current problems in the wild today that this would address, with code examples that use modern module systems.
Re: Conditional Importing
Only allowing import at the top level sounds like an ok idea, but I'm not so sure it's necessary. Consider the current require() style:
if (some_rare_condition()) { foo = require('foo') }
In requirejs and browserify the 'foo' module will be defined, but never loaded (ie, prefetched, evaluated for deps, bundled, etc). In Node, it won't be read from the disk or evaluated (but if it's not there ON the disk, you'll have problems eventually, which is conceptually equivalent to having already been fetched, but without the pre-run safety check.)
if (some_rare_condition()) { foo = import 'foo' }
could be treated similarly. There is an import statement, so resolve it and define it. However, it's never actually evaluated/run until the first time the condition is hit, so the program text will be parsed for imports, but never actually executed.
I am not aware of this being a surprise to many people in the current systems.
It's true you can use today's JQuery as is, but why would you use a new client API or syntax and require only new browsers or else trans-compilation? What's the benefit?
I'm confused. Isn't module "jquery" { ... $teh.Codez() ... } already going to require only new browsers, as well as code editing or trans-compilation? Why is that less onerous than a new API or html tag, especially when the tag can desugar to the API?
But, that being said, as I mentioned in the cited blog post, auto-exporting the global is a bit weird, at least. Making changes to old libraries is costlier than we tend to think, but usually not prohibitively so (and when it is, we just write new libraries).
I definitely agree that speculating about the future is hazardous, which is exactly why I think that the module specification (and all ES specs, actually) should focus on the problems we have today, and aim to deliver value to today's programs. We should look at current common problems, and ask, "What is the minimum change to the language's semantics and syntax that will make this problem go away, without causing new problems, or preventing other solutions?"
Kevin Smith wrote:
The other point people seem to miss about import as a special binding form is not just that it can be restricted grammatically to be control-insensitive by construction: it's that static export vs. import checking can be done to catch typos.
As long as the exported names are static, it's possible to catch typos using Isaac's form as well though, right?
Not as I understood Isaac's proposal:
// x.js
export { real: 'x' }

// y.js
var x = import './x.js'
obscured_call(x)
assert.same(x.real, 'x')
x.typo // undefined, not an early error
obscured_call could have deleted the 'real' property, and added (or not) 'typo'. There is no way in general to statically check property references in JS. Static analysis is by definition approximate and while we have some hot analyses in SpiderMonkey and (to be brought back up soon) DoctorJS, they are way too much to mandate in the standard.
Isaac Schlueter wrote:
I think a stand-up fight about this sounds wonderful.
Ok, great. But:
I am not at all convinced that typo-checking is anywhere near worth the price tag, or is even a problem. Most of the alleged needs for type-checking are a bit dubious to me; that's not really what JS is all about.
This is not stand-up fighting.
First, what we propose is not type-checking. Names are not types. It's not even structural record typing, one level deep. We're talking about the same checking done to make sure
var foo = 42; ... foop ...
throws at runtime in ES1-5 if evaluation reaches the foop use, and (if you wrap a module around that hunk of code, and there's no global foop property) at compile-time (EarlyError) in ES6.
Second, you are "not at all convinced". Ok, that's either attitudinizing and padding an already long reply, or a line in the sand that doesn't say how you would be convinced, so unanswerable.
Third, "what JS is all about" arguments fall into the endless meta-discussion "Ugly" talking points I decried at the
brendaneich.com/brendaneich_content/uploads/TXJS-Talk.012.png
slide in this talk:
brendaneich.com/2011/08/my-txjs-talk-twitter-remix
We will never agree on "what JS is all about".
Let's please instead argue about exact semantics of the proposals, so we have a hope of even talking about the same thing.
Then we should try to agree on gaps in the language to fill.
You seem to say lack of typo checking is not a gap in the language. Is this a fair statement?
Stopping here, to avoid ever-increasing message length. Also because we need to agree on stand-up fighting rules.
On Jun 27, 2012, at 8:46 AM, Isaac Schlueter wrote:
I am not at all convinced that typo-checking is anywhere near worth the price tag, or is even a problem. Most of the alleged needs for type-checking are a bit dubious to me; that's not really what JS is all about.
Well, "not JavaScripty" is a circular argument.
JavaScript serves many communities. One community we talk to is game developers that get a lot of mileage out of static types in other languages, and would love to have a typed dialect of JS. Then there's people who use the Closure compiler, which adds a type discipline to JS.
But we're not even talking about type checking here, just variable name checking (although if we don't have static import/export we never will be able to even experiment with interoperable typed dialects of JS -- that door will be permanently shut). IME, a dynamic language with statically checked variables is a nice sweet spot. It catches the really simple errors, because while you may want to do all sorts of dynamic computations in a program, you always want the scoping structure to be fixed and static. But it leaves you free to do all the dynamic stuff you can do today.
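(A tiny sketch of the kind of error this catches, using the module syntax that appears elsewhere in this thread; the names are made up.)
module "m" {
  export function compute(a, b) { return a + b; }
  var result = comptue(2, 3); // typo: with statically checked variable names
                              // this is an early (compile-time) error, assuming
                              // no global 'comptue' property exists; today it
                              // only throws if this line happens to run
}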
We should look at current common problems, and ask, "What is the minimum change to the language's semantics and syntax that will make this problem go away, without causing new problems, or preventing other solutions?"
Hill climbing is famous for reaching local maxima and getting stuck. We have to keep an eye on the longer term at the same time as we solve near-term problems. This is why I have worked for years (not in haste, as your blog post suggests) on a module system that addresses both today's problems and lays foundations for tomorrow.
On Wed, Jun 27, 2012 at 9:39 AM, Brendan Eich <brendan at mozilla.org> wrote:
First, what we propose is not type-checking.
Oh, ok. I misunderstood. Let's not say another word about type checking :)
var foo = 42; ... foop ...
throws at runtime in ES1-5 if evaluation reaches the foop use, and (if you wrap a module around that hunk of code, and there's no global foop property) at compile-time (EarlyError) in ES6.
I don't think that's a real problem. Can you point to in-the-wild bugs caused by this? Maybe it's a failure of imagination on my part.
The "cost" I was referring to was:
- added syntax
- less obvious-for-humans-to-read programs
Consider:
module "foo" { export let x = 100; export let y = { z: 'zed' } }
// far far away in another file entirely... import * from "foo"; import * from "baz"; console.log(x) // what? where did THAT come from? x++; // do other importers of foo see x change? if so, spooky! if not, why not? is it foo's x or not? y.z = 'zoo' // surely that must be shared, right?
The compiler knows about x, but I don't. This is probably my biggest complaint about using C and C++. Managing exported symbols is not hard to automate, but it is hard to not-automate, and that makes it more painful than just fixing the bugs. Compare with:
var foo = import "foo"; console.log(foo.x) // maybe undefined, but so what?
We deal with undefined properties of objects all the time, and it's not really such a big deal. Typos are not a problem worth giving up "export one thing" semantics for, and dealing with symbol conflicts is worse. Just let me call my vars whatever I want; when I ask for your thing, give me your thing. Dumping a bunch of symbols into my local scope is intrusive.
Second, you are "not at all convinced". Ok, that's either attitudinizing and padding an already long reply, or a line in the sand that doesn't say how you would be convinced, so unanswerable.
Sorry, you're right, that was unclear.
I would be convinced by examples showing bugs in modern programs that would have been prevented by the proposed static export syntax. Ie, bugs that current state of the art module systems do not or cannot address, and which are causing actual problems.
We will never agree on "what JS is all about".
Well, apparently we do wrt static type checking, actually :)
But you're right. I'll leave esthetics out of it. Let's focus on practicality.
You seem to say lack of typo checking is not a gap in the language. Is this a fair statement?
Yes that is a fair statement. I don't see how we can add typo checking without also adding "you get to tell me what to call my local vars". In every other place in the language, I get to decide what my vars are called, and typo-checking happens locally, not at a distance. That's not a gap, that's a feature.
The main gap in the language that I'd like to see filled by a module/loader spec is global leakage. As I see it, the rest is simply "how do we do that, without also removing module communication". Removing global leakage removes global communication, so in module-mode, we need a way for modules to communicate to one another, and for the host system to know which programs to load.
On Jun 27, 2012, at 10:32 AM, Isaac Schlueter wrote:
On Wed, Jun 27, 2012 at 9:39 AM, Brendan Eich <brendan at mozilla.org> wrote:
var foo = 42; ... foop ...
throws at runtime in ES1-5 if evaluation reaches the foop use, and (if you wrap a module around that hunk of code, and there's no global foop property) at compile-time (EarlyError) in ES6.
I don't think that's a real problem. Can you point to in-the-wild bugs caused by this? Maybe it's a failure of imagination on my part.
Well, this was a relatively high-profile example:
http://blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch/
Consider:
// far far away in another file entirely...
import * from "foo";
import * from "baz";
console.log(x) // what? where did THAT come from?
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
x++; // do other importers of foo see x change? if so, spooky! if not, why not? is it foo's x or not?
Clients are disallowed from mutating another module's exports. (That's one of the things we're able to accomplish by making modules declarative rather than totally dynamic.)
Dumping a bunch of symbols into my local scope is intrusive.
The module did not dump anything. The client chose to bulk-import them. The client is perfectly free not to.
Yes that is a fair statement. I don't see how we can add typo checking without also adding "you get to tell me what to call my local vars". In every other place in the language, I get to decide what my vars are called, and typo-checking happens locally, not at a distance. That's not a gap, that's a feature.
Here I have no idea what you're talking about. Nothing about ES6 modules prevents you from locally controlling names. Local control over scope has always been one of the foremost principles of the entire design.
On Wednesday, June 27, 2012 at 10:32 AM, Isaac Schlueter wrote:
I don't think that's a real problem. Can you point to in-the-wild bugs caused by this? Maybe it's a failure of imagination on my part.
Not sure if it's relevant but based on feedback I receive spotting typoed variables is one of the most popular JSHint features.
Anton
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
The convenience of * comes with a price, of course: (a) the inability to statically catch undeclared names without also analyzing external files, (b) the hazard of name collisions, and (c) the inability for a reader to tell where names are coming from without automated analysis.
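(A made-up illustration of (b) and (c); the module names and exports are hypothetical.)
// suppose geometry.js exports `area`, and stats.js also exports `area`
import * from "geometry";
import * from "stats";
area(5); // which area? a reader can't tell without opening both files, and
         // the two modules collide silently or loudly, depending on how the
         // proposal chooses to resolve the clash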
Is it worth it? How can we tell?
On Wed, Jun 27, 2012 at 8:43 PM, David Herman <dherman at mozilla.com> wrote:
On Jun 27, 2012, at 10:32 AM, Isaac Schlueter wrote:
On Wed, Jun 27, 2012 at 9:39 AM, Brendan Eich <brendan at mozilla.org> wrote:
var foo = 42;
... foop ...
throws at runtime in ES1-5 if evaluation reaches the foop use, and (if you
wrap a module around that hunk of code, and there's no global foop property)
at compile-time (EarlyError) in ES6.
I don't think that's a real problem. Can you point to in-the-wild bugs caused by this? Maybe it's a failure of imagination on my part.
Well, this was a relatively high-profile example:
http://blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch/
I don't see how that's at all related to modules or how modules would have prevented this.
Consider:
// far far away in another file entirely...
import * from "foo";
import * from "baz";
console.log(x) // what? where did THAT come from?
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
For this, if it's actually that convenient, I'd suggest another destructuring pattern, because it might be useful in general as well:
let * = Math;
console.log(PI); // 3.141592653589793
console.log(cos(0)); // 1
x++; // do other importers of foo see x change? if so, spooky! if not, why not? is it foo's x or not?
Clients are disallowed from mutating another module's exports. (That's one of the things we're able to accomplish by making modules declarative rather than totally dynamic.)
What about exported objects then? Are they immutable to clients as well? That would make this unusable for libraries that have a plugin system, such as jQuery.
On Jun 27, 2012, at 11:00 AM, Jussi Kalliokoski wrote:
On Wed, Jun 27, 2012 at 8:43 PM, David Herman <dherman at mozilla.com> wrote:
http://blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch/
I don't see how that's at all related to modules or how modules would have prevented this.
Because we're talking about static checking of unbound variables within modules. A reference to or assignment to an unbound variable would result in an early error.
For this, if it's actually that convenient, I'd actually suggest another destructuring pattern, because it might be useful in general as well:
let * = Math;
This is dynamic scoping. The difference between import * and let * is that the former is statically scoped, and the latter is dynamically scoped.
What about exported objects then? Are they immutable to clients as well? That would make this unusable for libraries that have a plugin system, such as jQuery.
Of course not. You can export a mutable object if you want to. You can export whatever you want.
On Jun 27, 2012, at 10:58 AM, Kevin Smith wrote:
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
The convenience of * comes with a price, of course: (a) the inability to statically catch undeclared names without also analyzing external files, (b) the hazard of name collisions, and (c) the inability for a reader to tell where names are coming from without automated analysis.
We intend to rule out (b) by disallowing import * from shadowing. But yes, the convenience does mean that the bindings are not explicitly named. That's the trade-off. I prefer to leave this trade-off to developers. Others prefer to make a unilateral ban on *. Reasonable people can disagree. But in my calculus, that argues for inclusion in the language and letting developers or teams make the decision for themselves whether to use it.
Is it worth it? How can we tell?
By implementing it in SpiderMonkey! :) Seriously, though, we intend to build modules so people can get a feel for it.
I understand that import * is controversial. ES6 modules don't depend inherently on them. I believe that they're an important convenience for scripting. But they're not fundamental.
Well, this was a relatively high-profile example:
blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch
That was a bug caused by a lack of global isolation, which current module systems cannot fix. (Well, node can fix it with separate contexts, but only by harshly penalizing performance and breaking instanceof, which we're not willing to do.)
I think we all agree that global isolation is the core purpose of a module system. (Is that incorrect?)
The question was whether there are in-the-wild bugs caused by typo-ing export names in current module systems.
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
It's unnecessary, afaict, and causes demonstrable harm in languages that have it. If it's just a convenience, then it should be cut out.
Clients are disallowed from mutating another module's exports. (That's one of the things we're able to accomplish by making modules declarative rather than totally dynamic.)
Mutating at all? Ie, they're frozen? If I export an object, you can't decorate it? (If so, what does that restriction buy us? It seems kind of harsh.)
Here I have no idea what you're talking about. Nothing about ES6 modules prevents you from locally controlling names. Local control over scope has always been one of the foremost principles of the entire design.
It's not exactly clear to me how I'd import foo's "x" as something other than "x" from reading harmony:modules. (Admittedly, I'm a lot better at parsing JavaScript than parsing JavaScript parsing rules.)
Something like this?
import {myX: x, myY: y, z} from "foo"
// comparable to: let {myX: x, myY: y, z} = require("foo")
Does this allow any way for the "foo" module to export just a single thing, as the top level result? How would this be expressed?
var Foo = require("foo") var f = new Foo()
If the answer is "that's not supported", then I think that's a significant gap. It encourages a "one module = one thing" style and is very easy to reason about. It would be better to give up multi-exporting in favor of exporting one thing, only. If I could get away with making that change in Node, I would have by now.
How does this proposal address transitive dependency cycles? Unfinished export objects?
// a.js
import b from "b"
export a = b

// b.js
import c from "c"
export b = c

// c.js
import a from "a"
export c_a = a
export c = 10 // does c_a === c?
This was one area where I mentioned in my blog post that new syntax for exporting seems like it might be warranted. With require() systems today, c_a is undefined, because the "c" export wasn't set yet. It's of course much worse when these are functions that call one another.
All of the problems that I'm bringing up, which you're saying are solved by the Harmony:Modules proposal, is it possible to solve them with less new syntax and boilerplate?
On Wed, Jun 27, 2012 at 9:02 PM, David Herman <dherman at mozilla.com> wrote:
On Jun 27, 2012, at 11:00 AM, Jussi Kalliokoski wrote:
On Wed, Jun 27, 2012 at 8:43 PM, David Herman <dherman at mozilla.com> wrote:
blog.safeshepherd.com/23/how-one-missing-var-ruined-our-launch
I don't see how that's at all related to modules or how modules would have prevented this.
Because we're talking about static checking of unbound variables within modules. A reference to or assignment to an unbound variable would result in an early error.
For this, if it's actually that convenient, I'd actually suggest another
destructuring pattern, because it might be useful in general as well:
let * = Math;
This is dynamic scoping. The difference between import * and let * is that the former is statically scoped, and the latter is dynamically scoped.
I'm sorry, I'm not entirely sure what static scoping means in the context of JavaScript. Could you clarify? Does it mean that it's only applicable in the context of the current file, module, domain or something like that? Does it mean that it can't be shadowed by the dynamic scope, for example:
import a from b; var a = 'foo';
What is a? 'foo', error or something else?
What about exported objects then? Are they immutable to clients as well? That would make this unusable for libraries that have a plugin system, such as jQuery.
Of course not. You can export a mutable object if you want to. You can export whatever you want.
Good. I might as well clear up another question I had. Say I have:
-- foo.js
module foo {
  export bar = {}
}
-- foo-plugin.js
module foo at './foo.js';
export bar from foo;
bar.baz = "zen";
-- uses-foo.js
module foo at './foo.js';
// insert some way to load foo-plugin here
console.log(foo.bar.baz); // What should this be?
On Wed, Jun 27, 2012 at 11:15 AM, Isaac Schlueter <i at izs.me> wrote:
import {myX: x, myY: y, z} from "foo" // comparable to: let {myX: x, myY: y, z} = require("foo")
Um.. I got the destructuring backwards, didn't I?
Of course not. You can export a mutable object if you want to. You can export whatever you want.
I seem to have missed that you already answered my question about freezing. Consider it withdrawn.
"For security, the Loader object could be frozen with Object.freeze to prevent additional changes." => This is not enough. People shouldn't have to opt-in for security, mostly because they don't do it. I woud call for security by default here and having "import <path>" call the built-in Loader.resolve instead of the dynamic one. If people want to override the Loader API, they would have to forget about syntax. Or a new syntax could be introduced, making clear that it's dangerous. Maybe something like "importDyn".
Sorry to arrive late to the party, but I don't see the security issue here. Is this about third party scripts being able to change what modules get loaded, to inject a malicious script into a module path? Why would they do that if they already have script access and can import the malicious stuff themselves? Or is this something about leaking secrets?
On Jun 27, 2012, at 11:15 AM, Isaac Schlueter wrote:
That was a bug caused by a lack of global isolation, which current module systems cannot fix.
It was caused by accidentally creating a global variable instead of a local variable. Not totally sure what you mean by global isolation? If you mean giving separate modules separate global objects, I don't agree that that would solve this kind of bug. He doesn't show us the whole code, but it looks like it was local code that was accessing the (accidentally) global variable, but probably different event handlers were interleaving and causing data races.
I think we all agree that global isolation is the core purpose of a module system. (Is that incorrect?)
Partly agree? I believe that obviating the need for globals is the core purpose of a module system. I don't believe that modules should necessarily be strictly separated. Modules should be given clean local scopes so that they don't overwrite each other, but that doesn't mean they shouldn't be able to still communicate via the global object.
The question was whether there are in-the-wild bugs caused by typo-ing export names in current module systems.
That bug was particularly bad because it was assigning to an accidentally global variable. But in my personal experience I certainly forget to import common libraries like 'path' and 'fs' in Node all the time and end up with unbound variable references. When this happens in a control flow that got missed by tests, then it can end up in production.
The client chose to use *. You don't have to use * if you don't want to. It's a convenience.
It's unnecessary, afaict, and causes demonstrable harm in languages that have it. If it's just a convenience, then it should be cut out.
You're not alone in this opinion. I disagree, but I think it's largely an orthogonal question.
Something like this?
import {myX: x, myY: y, z} from "foo" // comparable to: let {myX: x, myY: y, z} = require("foo")
Right, except flipped, as you said in your followup.
Does this allow any way for the "foo" module to export just a single thing, as the top level result? How would this be expressed?
var Foo = require("foo") var f = new Foo()
Just import it directly:
import Foo from "foo"; var f = new Foo();
If the answer is "that's not supported", then I think that's a significant gap. It encourages a "one module = one thing" style and is very easy to reason about. It would be better to give up multi-exporting in favor of exporting one thing, only. If I could get away with making that change in Node, I would have by now.
I just disagree. I think it's fine if you like that style, and you can use it. But we shouldn't force it on users.
Moreover, it would be hostile to adding static constructs in the future, such as macros, that can be exported from a module.
How does this proposal address transitive dependency cycles?
Better than yours. ;-P
Unfinished export objects?
The exports are all there from the beginning but uninitialized.
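(A rough sketch, not taken from the proposal text, of how the declarative approach lets a cycle work: both export bindings exist up front, and as long as neither module reads an uninitialized binding before it has been assigned, in this case by only calling the functions after both modules are initialized, the cycle is harmless. Module names and syntax details are assumed.)
// even.js
import odd from "odd";
export function even(n) { return n === 0 ? true : odd(n - 1); }

// odd.js
import even from "even";
export function odd(n) { return n === 0 ? false : even(n - 1); }
// both exports are visible from the start; they are only called after both
// modules have been initialized, so the mutual recursion just works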
All of the problems that I'm bringing up, which you're saying are solved by the Harmony:Modules proposal, is it possible to solve them with less new syntax and boilerplate?
Maybe! Happy to discuss. I don't believe there's that much boilerplate. In fact, there's less boilerplate than either CommonJS or AMD, and compared to your sketches on your post I suspect the differences in character count could be counted on one hand.
On Tue, Jun 26, 2012 at 2:54 PM, Isaac Schlueter <i at izs.me> wrote:
The linked blog post is a very rough cut of where my thoughts are on the subject. Expect changes and cleanup. It does not represent a fully-baked (or even half-baked) idea, but more like a general direction.
Your post makes four criticisms of the module system design. The first two are about the runtime module loaders API, and the second two are about the language extensions for writing modules in source code.
It seems to be based on the assumption that nesting module systems is a thing that people want.
It puts too many things in JavaScript (as either API or syntax) which belong in the host (browser/node.js).
These first two misunderstand what the module loaders API provides. In particular, you don't have to write a module system to use it. We agree that almost everyone just wants the defaults to work, and we've tried to design for that. Here's a really simple example:
System.load("http://mylib.com/lib.js", function(mod) { return
mod.do_stuff(); } )
Here's another example, constructing and installing a module at runtime:
System.set("add_blaster", { go : function(n,m) { return n + m; } })
Then we can use the module like this:
System.load("add_blaster", function(ab) { return ab.go(4,5); })
or, since we know that "add_blaster" is a local module:
let { go } = System.get("add_blaster");
go(9,10);
or, if we put the call to System.set in the previous script tag, we can just do:
import go from "add_blaster";
go(2,2);
At no point here did we have to write a module system.
Of course, the loader API is designed to give programmers more flexibility than this. If you want to create a sandbox, or set up modules to be cached in localStorage, or rerouted through a CDN, then you can make that happen. But you don't have to use these facilities to get your work done.
It borrows syntax from Python that many Python users do not even recommend using.
Certainly, import * makes it easier to get name clashes, and we've designed the system to be resistant to a lot of them. However, it's great for getting things done quickly, which I hope we all want to keep supporting. Maybe IDEs for JS will eventually support the same features that they do for Java, to unfold the * into a bunch of specific imports, but for hacking something together, convenience and not repeating lots of names is a big win.
It favors the “object bag of exported members” approach, rather than the “single user-defined export” approach.
Here I just disagree. I think supporting this:
module m {
  export function f() { ... }
  export function g() { ... }
}
is important. Otherwise, you have to repeat yourself, like so:
module m {
  function f() { ... }
  function g() { ... }
  export { f: f, g: g }
}
Of course, the latter is supported too, so if you want to do all your exports in one place, that's fine. However, I have not seen any discussion of why the "single export" approach is superior -- just saying that "it is widely acknowledged" isn't very persuasive, especially given the troubles with cycles. In contrast, the more declarative approach makes handling cycles straightforward.
As a starter, I'd like to say that jQuery may not be the best example, since it's heavily maintained and it's certainly an exception compared to the massive amount of JavaScript libraries out there. Also, jQuery seems to attach its properties to the 'window' alias rather than the top-level 'this'. I'm shooting myself in the foot here, but it's worth noting that it would be an easy change (like a couple of characters) to keep jQuery as usable as it is today and make it potentially directly compatible with the ES module syntax.
On 27/06/2012 11:48, Brendan Eich wrote:
David Bruant wrote:
Import checking is definitely a feature that's worth it, but Isaac's idea of being able to import jQuery (or any library of course) as is, by having the module global scope become the "export object" without polluting the actual global object, is definitely a win.
That's maybe a win, but we don't use JQuery that way today. Speculating about future usability is perilous. We'd need to implement and test, but see below for some questions to answer first.
If it's important, then people can build such a system using loaders. But it's at this point completely undemonstrated that exposing JQuery's few top-level bindings in an imported object beats (for usability, simplifying old vs. new clients, or any other measure) modifying JQuery to export those bindings and then importing what the client uses.
Old clients don't need to change anything. They're neither simplified nor complexified. It makes new clients simpler in the sense that they don't need to work on the library to make it ECMAScript-module compliant.
Being able to import existing libraries as modules without changing a bit of the library, without even having to read it or worry about global leaks is a strong win in my opinion. It's worth not having the typo check for this particular case.
Either way, there's a different client code obligation from today's pattern.
It's true you can use today's JQuery as is, but why would you use a new client API or syntax and require only new browsers or else trans-compilation? What's the benefit?
The benefit is only that libraries would be compatible with the new 'import' syntax without a change. Current clients wouldn't have to change anything. ES6 clients could use the new "import" syntax.
Import checking can still be added afterward. How?
By "afterward", I meant "by changing the library afterward". This would be done by tracking down things that are exported and using the proper syntax to replace them. But the point is that once a client has written
import jQuery from "./jQuery.js"
then, whether the module has explicitly exported or not, you're good to go. If 'jQuery.js' later turns its top-level bindings into 'export' statements, client code doesn't need to change. It acts exactly the same way. I realize I was wrong: import/export matching can still happen even without 'export' statements. It just happens later. Without export statements, import/export matching requires evaluating the code, while with 'export' statements it can be done at parse time.
I'd like to take a different perspective on that topic.
import x from './x.js'
At the very least, this statement will trigger the download of x.js and its parsing. If the parsing is successful, then there are two cases: either the code has 'export' statements, and that's what will be imported; or the code has no 'export' statement. Then what happens? One solution could be to say that 'x' is not in the exports (since there are none) and throw an error. Or it could be considered that this library exposes its exports in its top-level binding. The latter idea would turn all existing and future-until-ECMAScript-modules-are-mainstream JavaScript libraries into modules for free, because exposing global properties is what JavaScript programmers have been doing while waiting for a proper module system (until recently with CommonJS, AMD, etc.)
The downsides are weaker checking (no 'export' statement turns into 'your top-level binding is an implicit export area') and later import vs export name checking. It sounds like a decent trade-off to leverage all existing code.
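(A sketch of the idea with hypothetical semantics; nothing here is in a proposal. The legacy file is untouched and its top-level bindings double as its exports.)
// jQuery.js: unchanged legacy script, no 'export' statement anywhere
var jQuery = function (selector) { /* ... */ };
jQuery.ajax = function (options) { /* ... */ };

// client.js: under the implicit-export idea, the top-level binding 'jQuery'
// is treated as if it had been exported
import jQuery from "./jQuery.js";
jQuery("#main");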
obscured_call could have deleted the 'real' property, and added (or not) 'typo'. There is no way in general to statically check property references in JS. Static analysis is by definition approximate and while we have some hot analyses in SpiderMonkey and (to be brought back up soon) DoctorJS, they are way too much to mandate in the standard.
I was thinking that if Isaac's proposal were modified such that exported names are static, and import expressions always return an object whose properties cannot be set, then we wouldn't need a special import binding form. We could just use "let/var".
But at that point, it's more or less the same semantics and I prefer Dave's spelling (particularly "variant A").
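A minimal sketch of what that modification might look like (purely hypothetical syntax and names, not the current proposal): if 'import' were an expression returning an exports object whose properties cannot be set, ordinary declarations and destructuring would be enough:

let shape = import "./shape.js";   // exports object; its properties are not settable
let { draw, erase } = shape;       // plain destructuring, no special binding form
var gun = import "./cowboy.js";    // 'var' works just as well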
On Wed, Jun 27, 2012 at 11:56 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
Then we can use the module like this:
System.load("add_blaster", function(ab) { return ab.go(4,5); })
or, since we know that "add_blaster" is a local module:
let { go } = System.get("add_blaster"); go(9,10);
or, if we put the call to
System.set
in the previous script tag, we can just do:import go from "add_blaster"; go(2,2);
At no point here did we have to write a module system.
This is not usually how we have found loading to be done in AMD. 'add_blaster' is usually not loaded before that import call is first seen. Call this module foo:
import go from "add_blaster";
The developer asks for foo first. foo is loaded, and parsed. 'add_blaster' is seen and then loaded and parsed (although not sure how 'add_blaster' is converted to a path…):
add_blaster calls the runtime:
System.set("add_blaster", { go : function(n,m) { return n + m; } });
What happens according to the current modules proposal?
Does an error get generated for foo's import line stating that add_blaster does not export go, or are those checks optional, as David Bruant suggests on another message in this thread?
My previous interaction on this list led me to believe that I would have to construct a userland library to make sure I load and execute the script that does System.set("add_blaster") before foo is parsed.
If that is true, then that is what is fueling my particular feedback about the "eval deps, modify module, then eval module" feedback.
By implementing it in SpiderMonkey! :)
That's cheating! : )
A social note: designing the module system for ES6 is a difficult position to be in because there's already a more or less de facto module system in place (derived from CommonJS). It's like an empty field has been transformed into a garden by the local community, and then the owner of the plot wants to plow it up to create a better garden.
I personally really like the current module proposal - I think we could
I personally really like the current module proposal - I think we could benefit from working out how ES6 modules are going to interoperate with current module systems, though.
(my 17 mo. old hit the send button early)
Sure, but I would hate it if this opportunity to create a better module system was dragged down by compatibility with existing ones. We can migrate the node modules if we need to, but the new system has to be compelling enough for us to do so.
We should identify the shortcomings of the existing module systems and improve upon them. We should give greater weight to what has worked and been successful in those module systems.
One thing we've learned in node is that you can't let the past drag you down in building a better future. We religiously break compatibility for the sake of improvement and if we can get this proposal to a place that node people feel it is an improvement then a break in compatibility is an easy thing for us to do.
On Wed, Jun 27, 2012 at 11:51 AM, David Herman <dherman at mozilla.com> wrote:
On Jun 27, 2012, at 11:15 AM, Isaac Schlueter wrote:
That was a bug caused by a lack of global isolation, which current module systems cannot fix.
It was caused by accidentally creating a global variable instead of a local variable. Not totally sure what you mean by global isolation? If you mean giving separate modules separate global objects, I don't agree that that would solve this kind of bug. He doesn't show us the whole code, but it looks like it was local code that was accessing the (accidentally) global variable, but probably different event handlers were interleaving and causing data races.
So, I've encountered two flavors of this in production, in my own programs: (The first in npm, the second in the no.de portal)
- leaked global
app.route("/my/login", function (req, res) {
  x = res.query.x || 100
  if (x < 100) blerg()
})
In testing, you don't spot the leaked global, and things work, because there's only one request at a time. Linters all catch this, and tests can check for leaked globals.
- Module-local var
var x = someThing
// ... many lines later ..
app.route("/my/login", function (req, res) {
  x = res.query.x || 100
  if (x < 100) blerg()
})
A linter won't catch this, since it'll assume that you meant to do exactly what you did. Global isolation won't catch it either, and neither will Harmony Modules or any existing require() thing. (And in fact, often you DO mean to do what this does!)
I think this is one of the cases where we just have to make programming easier, by making choices that encourage smaller, more discrete modules.
Partly agree? I believe that obviating the need for globals is the core purpose of a module system. I don't believe that modules should necessarily be strictly separated. Modules should be given clean local scopes so that they don't overwrite each other, but that doesn't mean they shouldn't be able to still communicate via the global object.
Right, perhaps isolation is the wrong word. Missing a "var" keyword should not be so hazardous, that's what I'm saying.
That bug was particularly bad because it was assigning to an accidentally global variable. But in my personal experience I certainly forget to import common libraries like 'path' and 'fs' in Node all the time and end up with unbound variable references. When this happens in a control flow that got missed by tests, then it can end up in production.
You mean something like this?
var fs = require('fs')
// no path here...
function notCoveredByTests () {
  fs.open(path.resolve("yabbadabba"), ...)
}
How would any of this solve that?
var Foo = require("foo") var f = new Foo()
Just import it directly:
import Foo from "foo"; var f = new Foo();
But wait... those are two different things, aren't they? Isn't yours more akin to:
var Foo = require("foo").Foo
?
I just disagree. I think it's fine if you like that style [one module exports one thing], and you can use it. But we shouldn't force it on users.
I'm having trouble articulating why it is that module.exports=blah is better than exports.blah=blah. Surely, you can just choose to only put one thing on the exports object, right? It seems obviously better to allow the flexibility, and I was strongly in favor of this early in node's history.
However, after using it a lot, I've found that exports.foo = bar often ends up being more painful than module.exports = foo, even with the transitive issues. I'm not sure why that is, and "Go write a couple hundred KLoC of module JS and then you'll get it" is not an argument, I know.
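For readers less familiar with Node, the two styles being compared are roughly these (module and function names are just illustrative):

// multi-export style: attach several properties to the provided exports object
// math-utils.js
exports.add = function (a, b) { return a + b; };
exports.mul = function (a, b) { return a * b; };
// consumer: var utils = require('./math-utils'); utils.add(1, 2);

// single-export style: replace module.exports with one value
// add.js
module.exports = function (a, b) { return a + b; };
// consumer: var add = require('./add'); add(1, 2);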
Moreover, it would be hostile to adding static constructs in the future, such as macros, that can be exported from a module.
Can you elaborate on that?
How does this proposal address transitive dependency cycles?
Better than yours. ;-P
Unfinished export objects?
The exports are all there from the beginning but uninitialized.
That's sort of like unfinished objects, then, but with the keys all set to undefined.
So, then export x = 10 hoists the export x and leaves the x = 10 where it is, var-like?
Does a_c === c, or not?
Maybe! Happy to discuss. I don't believe there's that much boilerplate. In fact, there's less boilerplate than either CommonJS or AMD, and compared to your sketches on your post I suspect the differences in character count could be counted on one hand.
It's quite a lot of new syntax, including special syntax for things that are not obviously required by the stated goals. (Not obvious to me anyway, I'm definitely willing to be educated.) The existing AMD and CommonJS patterns are a good "must be at least this useful" bar.
But if we're changing the language, we should crush them and make them no longer even worth considering, because this new thing is so good, in the same way that ()=>{} absolutely crushes function(){}.bind(this). I consider AMD to be too much boilerplate, personally. And while ES modules have the capacity to fix some problems we can't fix with the CommonJS API alone, they do so by adding a lot of moving parts and obviously more new syntax.
The cost of new syntax can be justified, clearly, but it IS a cost, that's all I'm saying. If we can add 2 new magic tokens instead of 5, then I think that's a massive improvement.
On 27 June 2012 15:45, Kevin Smith <khs4473 at gmail.com> wrote:
By implementing it in SpiderMonkey! :)
That's cheating! : )
A social note: designing the module system for ES6 is a difficult position to be in because there's already a more or less de facto module system in place (derived from CommonJS). It's like an empty field has been transformed into a garden by the local community, and then the owner of the plot wants to plow it up to create a better garden.
Professional gardeners who feed millions plow up their gardens on a regular basis. So-called "green manure", usually a nitrogen-fixing legume such as white clover, is grown after the money-making crop. This crop will add nitrogen (a key element in plant growth) from the atmosphere into the soil through the action of bacteria growing in its roots. Then this crop is plowed under, allowing it to decompose and add organic matter to the soil before the next crop is sown.
Similarly, while I have a LOT of time, money, and effort invested in and around the CommonJS ecosystem, I eagerly await the addition of a native module system to the language. The CommonJS experience helped to generate a lot of fertile ideas and willpower in the community, and when it is time for a better crop to be sown, I see no reason not to plow it under in order to grow something better.
Remember that CommonJS is not all about modules -- CommonJS modules are only a means to an end, which is to create a base-level environment for executing ES code on a wide variety of host platforms. It's impossible to have large systems without modules.
There are some serious issues which remain to be addressed with CommonJS modules, in particular, good ways to handle the global var scope, which cannot be addressed in the browser... and to address them on the server, I had to do some things to SpiderMonkey which would probably make Brendan cry.
One thing I hope we can still have in ES6 modules, though, is the ability to lazy-load modules in a server-side context without altering the semantics of the program. I'll have to give that some thought in the future.
System.load("mylib.com/lib.js", function(mod) { return mod.do_stuff(); } )
Since this is async, it might be worthwhile considering more complex examples (loading multiple libs => nesting or async libs or promises?).
Here's another example, constructing and installing a module at runtime:
System.set("add_blaster", { go : function(n,m) { return n + m; } })
Then we can use the module like this:
System.load("add_blaster", function(ab) { return ab.go(4,5); })
or, since we know that "add_blaster" is a local module:
let { go } = System.get("add_blaster"); go(9,10);
or, if we put the call to System.set in the previous script tag, we can just do:
import go from "add_blaster"; go(2,2);
This dependence on "previous script tag" === earlier group of loads is easily missed. Could it be made more prominent in the spec?
And could those examples please be added to the modules examples page? They are helpful starting points for better understanding the limits and possibilities of the spec.
Certainly, import * makes it easier to get name clashes, and we've designed the system to be resistant to a lot of them. However, it's great for getting things done quickly, which I hope we all want to keep supporting. Maybe IDEs for JS will eventually support the same features that they do for Java, to unfold the * into a bunch of specific imports, but for hacking something together, convenience and not repeating lots of names is a big win.
I've found "import *" to be a feature that is great to have and even better to avoid. Unless there is a static interface file for the imported module, it weakens module separation. In usage (Haskell) it is great for standard libs (those functions that just should be there, no matter where from).
It gets less and less bearable as more and more non standard libs from module repositories and local installs get added to the mix (the original author might have an idea what is supposed to come from where, but most readers won't be so lucky, and even tools can no longer guess the right definition to choose when newer versions of dependencies introduce conflicts).
And IDEs can just as well help with inserting the proper qualifiers or explicit imports when the code is written, so that the code can be read on a module-by-module basis.
But as you say, coders like to be able to hurt themselves quickly, so we can leave this to coding standards and linters;-)
Claus
On Wed, Jun 27, 2012 at 3:37 PM, James Burke <jrburke at gmail.com> wrote:
On Wed, Jun 27, 2012 at 11:56 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
Then we can use the module like this:
System.load("add_blaster", function(ab) { return ab.go(4,5); })
or, since we know that "add_blaster" is a local module:
let { go } = System.get("add_blaster"); go(9,10);
or, if we put the call to System.set in the previous script tag, we can just do:
import go from "add_blaster"; go(2,2);
At no point here did we have to write a module system.
This is not usually how we have found loading to be done in AMD. 'add_blaster' is usually not loaded before that import call is first seen. Call this module foo:
import go from "add_blaster";
The developer asks for foo first. foo is loaded, and parsed. 'add_blaster' is seen and then loaded and parsed (although not sure how 'add_blaster' is converted to a path…):
add_blaster calls the runtime:
System.set("add_blaster", { go : function(n,m) { return n + m; } });
What happens according to the current modules proposal?
I'm not quite sure what you're asking. If the question is: does importing "foo" automatically compile "add_blaster", then yes, that happens automatically. You can think about that as doing something internal that's similar to System.set. But that's all implicit. If we are in a system like NPM, where "add_blaster" might map automatically to "add_blaster.js", we could have:
foo.js:
import go from "add_blaster"
go(1,2)
add_blaster.js:
export function go(n,m) { return n + m; };
Does an error get generated for foo's import line stating that add_blaster does not export go, or are those checks optional, as David Bruant suggests on another message in this thread?
add_blaster does export go in my example, so I'm not sure what you mean.
My previous interaction on this list led me to believe that I would have to construct a userland library to make sure I load and execute the script that does System.set("add_blaster") before foo is parsed.
Certainly you shouldn't have to create a userland loader in order to get examples like I've written to work. You should only ever need to create a loader if you want to customize things, such as redirecting some things to localStorage, or setting up a sandbox.
let * = Math;
This is dynamic scoping. The difference between import * and let * is that the former is statically scoped, and the latter is dynamically scoped.
I'm sorry, I'm not entirely sure what static scoping means in the context of JavaScript. Could you clarify? Does it mean that it's only applicable in the context of the current file, module, domain or something like that?
Perhaps I can answer this, though I'm not involved with ES Modules.
"Static" scoping means that scoping does not depend on runtime behavior. If "Math" is the Module object for the module "Math", then we have a "dynamic" object (dependent on runtime behavior). If let destructuring were to support "*", the variables in scope after such a statement would depend on the properties of a dynamic object, so scoping would no longer be static:
let * = flip_coin() ? {sin: .., cos: .. } : {apples: .., bananas: ..}
console.log( sin(3.14) ); // is 'sin' bound or not?
That doesn't mean that "static" vs "dynamic" is a clear-cut distinction in JS, where code can be constructed and loaded at runtime. But that just makes it even more important to have clear phase separations, so that one can tell when the static and dynamic phases of each piece of code begin (construction followed by static followed by dynamic).
Language designs that try to unite dynamic modules with static scoping are a well-known case of needing very careful design. "import *" is barely on the safe side, if done right, "let *" is just on the wrong side of this dangerous border.
Hope this helps, Claus
On 27 June 2012 17:21, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
Certainly you shouldn't have to create a userland loader in order to get examples like I've written to work. You should only ever need to create a loader if you want to customize things, such as redirecting some things to localStorage, or setting up a sandbox.
Hopefully it will be possible to create userland loaders somehow which can fetch multiple modules at once. We are able to fetch multiple CommonJS modules at once, often satisfying an entire dependency tree with a single HTTP 304 status response.
On Wed, Jun 27, 2012 at 5:29 PM, Wes Garland <wes at page.ca> wrote:
On 27 June 2012 17:21, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
Certainly you shouldn't have to create a userland loader in order to get examples like I've written to work. You should only ever need to create a loader if you want to customize things, such as redirecting some things to localStorage, or setting up a sandbox.
Hopefully it will be possible to create userland loaders somehow which can fetch multiple modules at once. We are able to fetch multiple CommonJS modules at once, often satisfying an entire dependency tree with a single HTTP 304 status response.
What request do you send to ask for multiple modules? Or does the server just know to use 304 in response to requests for particular modules? And does this work for the non-cached situation (that is, how did the client get to the place where 304 was the right thing -- by doing multiple requests previously?)?
Thanks Claus, it helped! But I still kind of like the idea I threw in. It's a footgun, for sure, but a pretty convenient one, kinda like 'with'.
Cheers, Jussi
On Jun 28, 2012 12:25 AM, "Claus Reinke" <claus.reinke at talk21.com> wrote:
let * = Math;
This is dynamic scoping. The difference between import * and let * is that the former is statically scoped, and the latter is dynamically scoped.
I'm sorry, I'm not entirely sure what static scoping means in the context of JavaScript. Could you clarify? Does it mean that it's only applicable in the context of the current file, module, domain or something like that?
On Jun 27, 2012, at 1:06 PM, Isaac Schlueter wrote:
I just disagree. I think it's fine if you like that style [one module exports one thing], and you can use it. But we shouldn't force it on users.
I'm having trouble articulating why it is that module.exports=blah is better than exports.blah=blah. Surely, you can just choose to only put one thing on the exports object, right? It seems obviously better to allow the flexibility, and I was strongly in favor of this early in node's history.
However, after using it a lot, I've found that exports.foo = bar often ends up being more painful than module.exports = foo, even with the transitive issues. I'm not sure why that is, and "Go write a couple hundred KLoC of module JS and then you'll get it" is not an argument, I know.
I think I might be able to articulate it a little.
Substack had this great quote while Max and I were trying to name our module: "libraries should never be nouns, they should be verbs."
That's not just about naming, that's pretty much his entire design philosophy for modules: small, discrete pieces of code that do one task well. A healthy module ecosystem is built from thousands of small components and not from dozens of "frameworks" and other nouns.
The single function export encourages this design pattern while exporting a hash with several properties encourages the module to be more of a "noun."
This isn't a theory; you can observe that the most popular node modules are the simplest to use and, with the exception of express and underscore, are single function exports.
On 27 June 2012 17:40, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
What request do you send to ask for multiple modules?
We send a request like /methods/modules?root=pathto/mystuff&id=/sha256&id=/auth/password
The client canonicalizes each CommonJS dependency to its full (canonical) path and tells the server where the module system root is. The server then examines each module's last modification time, comparing against the If-Modified-Since header. Dependent modules are likewise examined recursively. If any module is newer, all of the modules are sent; otherwise, an HTTP 304 response is returned and the browser reads all the modules from its cache.
And does this work for the non-cached situation (that is, how did the client get to the place where 304 was the right thing -- by doing multiple requests previously?)?
Precisely.
This pattern is obviously more useful for some sites than others. But I think it's interesting enough to mention.
On Wed, Jun 27, 2012 at 2:21 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
On Wed, Jun 27, 2012 at 3:37 PM, James Burke <jrburke at gmail.com> wrote:
On Wed, Jun 27, 2012 at 11:56 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
Then we can use the module like this:
System.load("add_blaster", function(ab) { return ab.go(4,5); })
or, since we know that "add_blaster" is a local module:
let { go } = System.get("add_blaster"); go(9,10);
or, if we put the call to System.set in the previous script tag, we can just do:
import go from "add_blaster"; go(2,2);
At no point here did we have to write a module system.
This is not usually how we have found loading to be done in AMD. 'add_blaster' is usually not loaded before that import call is first seen. Call this module foo:
import go from "add_blaster";
The developer asks for foo first. foo is loaded, and parsed. 'add_blaster' is seen and then loaded and parsed (although not sure how 'add_blaster' is converted to a path…):
add_blaster calls the runtime:
System.set("add_blaster", { go : function(n,m) { return n + m; } });
What happens according to the current modules proposal?
I'm not quite sure what you're asking. If the question is: does importing "foo" automatically compile "add_blaster", then yes, that happens automatically. You can think about that as doing something internal that's similar to System.set. But that's all implicit. If we are in a system like NPM, where "add_blaster" might map automatically to "add_blaster.js", we could have:
foo.js:
import go from "add_blaster"
go(1,2)
add_blaster.js:
export function go(n,m) { return n + m; };
I was using the original code for 'add_blaster', which, as you say, is in add_blaster.js:
System.set("add_blaster", { go : function(n,m) { return n + m; } });
My understanding is that since add_blaster.js uses the runtime API and not the export, the above code will not work unless I construct a loader library that first loads and executes add_blaster.js before foo.js is parsed.
The use case: scripts, like jquery/backbone/others that want to live in a non-harmony and harmony world, I would expect that they could be adapted to call the System.set() API, but not use the new syntax keywords.
I am under the impression that library developers do not want to provide two different versions of their scripts, just to participate in es modules, but rather use a runtime check to register as part of one script that works in es harmony and non-harmony browsers. Otherwise, it feels like a "2 javascripts" world.
James
I think we all agree that global isolation is the core purpose of a module system. (Is that incorrect?) Partly agree? I believe that obviating the need for globals is the core purpose of a module system. I don't believe that modules should necessarily be strictly separated. Modules should be given clean local scopes so that they don't overwrite each other, but that doesn't mean they shouldn't be able to still communicate via the global object.
Yes, exactly. If I can dare a comparison, a module could behave somewhat like the theoretical 'wrap' here: gist.github.com/2995641 (sorry to bother with this). Contexts are separated, you cannot create global vars inside the 'wrap' but can access them, and you decide what should come into the 'wrap' and what goes out, so what you can/want to share -- a bit like having a super global outside and multiple globals inside the 'wraps' (or modules).
On Wed, Jun 27, 2012 at 7:11 PM, James Burke <jrburke at gmail.com> wrote:
The use case: scripts, like jquery/backbone/others that want to live in a non-harmony and harmony world, I would expect that they could be adapted to call the System.set() API, but not use the new syntax keywords.
Ah, now I understand. Yes, this is feasible, and does not require writing your own loader. I'd write my forward-compatible add_blaster.js like this:
function g(n,m) { return n + m; };
if (this.System) {
  System.set("add_blaster", { go : g });
} else {
  this.add_blaster = { go : g };
}
Then, in a page, the client (on a browser that supports ES6) does this:
<script src="/assets/add_blaster.js">
<script>
module main {
import go from "add_blaster";
console.log(go(4,5));
}
</script>
<script src="/assets/add_blaster.js"> <script> module main { import go from "add_blaster"; console.log(go(4,5)); } </script>
That's not what I'd call a "forward-compatible" solution since you still have to use the script tag prior to importing. What's needed is a way to tell the loader that "add_blaster" can be fetched from "/assets/add_blaster.js".
I believe that's what James is referring to: a declarative set of mappings that inform the loader where to fetch certain modules from. Creating a custom loader which overrides the default loader's resolve behavior is going to be too much work for this simple use case. That's the argument, anyway. If it's not difficult, then I think we as a community need to see examples of how that would work.
It seems that a better way to enable forward-compatibility would be to provide an imperative way to set the exports of the current module:
if (this.System)
System.setCurrentModuleExports({ ... }); // A shorter name, of course!
The "forward-compatible" module is then relieved of the necessity of naming itself using some arbitrary module name (which would obviously pollute the global module namespace: not acceptable).
Trying to keep this message short, but it seems to me that once we allow the possibility of custom loader behavior (even as simple as declaratively remapping URLs), then the arguments for static import/export start to fall apart. In order to keep the supposed benefits of static analysis, our tools (i.e. IDE) are going to have to be aware of that custom loading behavior to continue to be useful. And if modules are dynamically injected into the module namespace with System.set, then static analysis (and the typo-checking that is deemed so important) is completely "out-the-window".
Does this indicate that the static analysis arguments are on shaky ground?
On Thu, Jun 28, 2012 at 10:40 AM, Kevin Smith <khs4473 at gmail.com> wrote:
<script src="/assets/add_blaster.js"> <script> module main { import go from "add_blaster"; console.log(go(4,5)); } </script>
That's not what I'd call a "forward-compatible" solution since you still have to use the script tag prior to importing. What's needed is a way to tell the loader that "add_blaster" can be fetched from "/assets/add_blaster.js".
What James asked for was a solution for how libraries such as jquery or backbone could be implemented so that they work in both worlds, which is what I provided.
I believe that's what James is referring to: a declarative set of mappings that inform the loader where to fetch certain modules from. Creating a custom loader which overrides the default loader's resolve behavior is going to be too much work for this simple use case. That's the argument, anyway. If it's not difficult, then I think we as a community need to see examples of how that would work.
It seems that a better way to enable forward-compatibility would be to provide an imperative way to set the exports of the current module:
if (this.System) System.setCurrentModuleExports({ ... }); // A shorter name, of course!
The "forward-compatible" module is then relieved of the necessity of naming itself using some arbitrary module name (which would obviously pollute the global module namespace: not acceptable).
Trying to keep this message short, but it seems to me that once we allow the possibility of custom loader behavior (even as simple as declaratively remapping URLs), then the arguments for static import/export start to fall apart. In order to keep the supposed benefits of static analysis, our tools (i.e. IDE) are going to have to be aware of that custom loading behavior to continue to be useful. And if modules are dynamically injected into the module namespace with System.set, then static analysis (and the typo-checking that is deemed so important) is completely "out-the-window".
Does this indicate that the static analysis arguments are on shaky ground?
No, it doesn't. It's important to distinguish between the code that runs to set up an environment (such as the call to System.set in my mail) and code that runs in that environment (such as the import statement in my mail). What is runtime for one of these is compilation time for the other.
What James asked for was a solution for how libraries such as jquery or backbone could be implemented so that they work in both worlds, which is what I provided.
From James' point-of-view, though (correct me if I'm wrong, James), this would be a step backward from current AMD usability.
No, it doesn't. It's important to distinguish between the code that runs to set up an environment (such as the call to System.set in my mail) and code that runs in that environment (such as the import statement in my mail). What is runtime for one of these is compilation time for the other.
I see your point. If "add_blaster.js" does not export "go", then we get an early binding error when we compile the main module.
That's fine for the execution environment, but static analysis tools that do not execute code will not be able to tell what the exports are for a given module URL, in general. This point has a bearing on the "IDE" argument, I think.
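A small illustration of the early error being described (module and binding names invented; this assumes the proposal's behavior as discussed above):

// add_blaster.js
export function go(n, m) { return n + m; }

// main.js
import gone from "add_blaster";
// 'gone' is not among add_blaster's exports, so under the proposal this is
// reported when main.js is compiled, before any of its code runs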
On Thu, Jun 28, 2012 at 7:56 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
On Thu, Jun 28, 2012 at 10:40 AM, Kevin Smith <khs4473 at gmail.com> wrote:
<script src="/assets/add_blaster.js"> <script> module main { import go from "add_blaster"; console.log(go(4,5)); } </script>
That's not what I'd call a "forward-compatible" solution since you still have to use the script tag prior to importing. What's needed is a way to tell the loader that "add_blaster" can be fetched from "/assets/add_blaster.js".
What James asked for was a solution for how libraries such as jquery or backbone could be implemented so that they work in both worlds, which is what I provided.
As Kevin says later, that is not what I asked for.
A developer using a module system does not want to have to manually know that some dependencies need to be included as script tags before starting loading. Often they are using modules that have dependencies, that have dependencies, and they do not want to know that they have to manually inspect the dependency tree to figure out what modules need to be inlined as script tags before using modules themselves.
So, they will rely on someone to provide a script loader library to handle this. But that is what module support should do by default. Otherwise, forms like AMD will continue to thrive, and worse yet cause confusion in the minds of developers. There should be one module format, usable with existing code as dependencies for ES modules.
This is a real use case because it comes up all the time in AMD. Ask any AMD+backbone user. Backbone does not use a module format, but it is used all the time as a dependency in AMD modules.
This is why relying on "parse all the dependencies before eval for exports" is a problem. My "eval dependencies, get those exported values, then use that exported value to modify the AST of the current module, then eval" proposal in the other thread is an attempt to solve that problem.
With that, you can support these older libraries that need to use the runtime API so they can live in both worlds, but you still have a shot at supporting things like import checking.
I think working this out in person, or online in real time, may work better. Sam or Dave, feel free to contact me offline if you want to do so.
I will also try to set up a repo with some test scenarios, because the optimization case when combining modules also needs more work.
James
On Jun 27, 2012, at 1:06 PM, Isaac Schlueter wrote:
On Wed, Jun 27, 2012 at 11:51 AM, David Herman <dherman at mozilla.com> wrote:
That bug was particularly bad because it was assigning to an accidentally global variable. But in my personal experience I certainly forget to import common libraries like 'path' and 'fs' in Node all the time and end up with unbound variable references. When this happens in a control flow that got missed by tests, then it can end up in production.
You mean something like this?
var fs = require('fs')
// no path here...
function notCoveredByTests () {
  fs.open(path.resolve("yabbadabba"), ...)
}
Right.
How would any of this solve that?
Because path is unbound, and static variable checking reports that as an error.
var Foo = require("foo") var f = new Foo()
Just import it directly:
import Foo from "foo"; var f = new Foo();
But wait... those are two different things, aren't they? Isn't yours more akin to:
var Foo = require("foo").Foo
?
Yes. I was thinking there wasn't any significant difference in convenience, but was forgetting (until SubStack pointed it out to me on IRC) that the main difference is that the library is required to give the abstraction a name, and the client must import it by that name. The client can rename it explicitly if they like, but that's strictly less convenient than having the export be completely anonymous.
James Burke has also urged us to consider allowing modules to export a single distinguished thing. I'm going to mull this more. I agree it's a worthwhile goal. But I'd like to find a way to keep the syntax as lightweight as possible and yet not interfere with static resolution.
I'm having trouble articulating why it is that module.exports=blah is better than exports.blah=blah.
I think I can make at least a partial case for why it's a good style. In many languages, I think a natural pattern emerges where a module provides a central organizing data abstraction, and there's a special distinguished export representing that data abstraction. Obviously this is popular in NPM, but you see it all over. In Java, they didn't even have a module system because classes did double-duty as a data abstraction, a constructor, a type definition and a module. In ML, people use the pattern where a module exports a distinguished type that conventionally is called t -- so you would name the module after the type, and the type would be MyWidget.t.
But we should not force this style on programmers. Even Node itself does not adhere strictly to that style -- look at the 'path' or 'fs' libraries, for example. Same with the ES standard library: Math and JSON are both multi-export (pseudo-)modules that just export functions. In these cases, there's no natural data abstraction needed, no class or object with methods, just functions. To quote John Carmack, "Sometimes the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function."
Moreover, it would be hostile to adding static constructs in the future, such as macros, that can be exported from a module.
Can you elaborate on that?
It took me a few days, but I wrote up some rationale for static module resolution on my blog:
http://calculist.org/blog/2012/06/29/static-module-resolution/
That's sort of like unfinished objects, then, but with the keys all set to undefined.
So, then export x = 10 hoists the export x and leaves the x = 10 where it is, var-like?
Correct.
Does a_c === c, or not?
The syntax wasn't quite right. You had:
// c.js
import a from "a"
export c_a = a
export c = 10
// does c_a === c?
You could write:
// c.js
import a from "a"
export var c_a = a
export var c = 10
In this case, it acts like traditional hoisting as you say. So the answer depends on the order of execution of the modules. With cycles, there's always a bit of arbitrariness about it; we'll of course make it deterministic, but you can't avoid the possibility of one module referring to another one before it's executed. So c_a will only be === to a if a.js executes first. And NaN notwithstanding, of course. :)
Alternatively, you could write:
// c.js
import a from "a"
export { c_a: a }
In this case, you're re-exporting the same binding from "a", and they are aliases. No matter what you do, c_a and a will be the same.
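For concreteness, a hypothetical a.js that closes the cycle with the c.js above might be (names invented; which module sees the other's initialized bindings depends on execution order, as described):

// a.js
import c from "c"
export var a = 1
// if c.js happens to execute first, 'c' is already 10 here;
// otherwise it exists but has not been initialized yet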
In Java, they didn't even have a module system because classes did double-duty as a data abstraction, a constructor, a type definition and a module.
Not that it affects your arguments, but that is not entirely true. With packages, you’ve always had a namespacing mechanism that was easy to understand (because it mapped directly to directories) and prevented name clashes (thanks to its reverse domain name convention). And there was package-private visibility. So packages are like 30% of a module system. Classes were mainly used as modules when Java needed to work around not having functions (via static methods), a bit like JSON, Math, et al.
On 30 June 2012 01:49, Axel Rauschmayer <axel at rauschma.de> wrote:
So packages are like 30% of a module system.
Coming from ML, I have to disagree strongly -- Java's packages are at most 3% of a module system. ;)
Strongly concur with Andreas. Citing Java is fraught beyond belief.
Right. :-) There is indeed some clever module stuff out there. IIRC, Racket, née PLT Scheme, goes even further than ML. Newspeak is interesting, too.
[[[Sent from a mobile device. Please forgive brevity and typos.]]]
Dr. Axel Rauschmayer axel at rauschma.de Home: rauschma.de Blog: 2ality.com
Sorry for my long delay in responding.
On Fri, Jun 29, 2012 at 4:33 PM, David Herman <dherman at mozilla.com> wrote:
var fs = require('fs')
// no path here...
function notCoveredByTests () {
  fs.open(path.resolve("yabbadabba"), ...)
}
Right.
How would any of this solve that?
Because path is unbound, and static variable checking reports that as an error.
And this works because modules don't share global namespace with one another? (If they do share global space, then how would the static checker know that it won't be assigned to a global 'path' by that time?)
I'm going to mull this more. I agree it's a worthwhile goal. But I'd like to find a way to keep the syntax as lightweight as possible and yet not interfere with static resolution.
Sounds good. I'm interested in what you come up with.
But we should not force this style on programmers.
It's not forcing anything on programmers.
If you want to export a bag of functions, then put the functions on an object, and export the object.
It is making it trickier to figure out how to add types and macros, but I'm less excited about those features than I am about making our existing problems easier to solve.
Even Node itself does not adhere strictly to that style -- look at the 'path' or 'fs' libraries, for example.
I consider that a mistake. And even there, there's a single "exports" object that methods are assigned to.
Same with the ES standard library: Math and JSON are both multi-export (pseudo-)modules that just export functions.
They export a single object that has functions attached to it. Math.pow(), JSON.parse, etc.
export { parse: parse, stringify: stringify }
In these cases, there's no natural data abstraction needed, no class or object with methods, just functions. To quote John Carmack, "Sometimes the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function."
The Carmack quote is exactly why "export one thing" is so important. Most modules should be a single function; not several things, not a collection of utility methods.
Moreover, it would be hostile to adding static constructs in the future, such as macros, that can be exported from a module. Can you elaborate on that? It took me a few days, but I wrote up some rationale for static module resolution on my blog:
http://calculist.org/blog/2012/06/29/static-module-resolution/
At the risk of seeming like a little bit of a luddite, it seems weird to me to make the "modules that export stuff" use case (which we have now) less awesome, in favor of the "modules that export macros and types" use case (which is not a compelling problem right now).
Granted, we don't have that use case because it doesn't exist. But maybe it could be done in a different way that doesn't necessitate multiple exports.
On Jul 20, 2012, at 9:23 PM, Isaac Schlueter wrote:
On Fri, Jun 29, 2012 at 4:33 PM, David Herman <dherman at mozilla.com> wrote:
var fs = require('fs')
// no path here...
function notCoveredByTests () {
  fs.open(path.resolve("yabbadabba"), ...)
}
Right.
How would any of this solve that?
Because path is unbound, and static variable checking reports that as an error.
And this works because modules don't share global namespace with one another? (If they do share global space, then how would the static checker know that it won't be assigned to a global 'path' by that time?)
They do share a global namespace, and the static checker doesn't know that a new global won't be assigned. It's an early error to refer to a variable that doesn't exist at the time of checking.
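So, returning to the earlier 'path' example, the intended behavior is roughly this (a sketch of the behavior being described, not spec text):

// inside an ES6 module
import fs from "fs";
function notCoveredByTests () {
  fs.open(path.resolve("yabbadabba"), ...)   // 'path' is bound nowhere in scope
}
// => early error at check time, even though the function is never called and
//    even though some other script might later create a global named 'path'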
But we should not force this style on programmers.
It's not forcing anything on programmers.
If you want to export a bag of functions, then put the functions on an object, and export the object.
It is making it trickier to figure out how to add types and macros, but I'm less excited about those features than I am about making our existing problems easier to solve.
It's not trickier, it's essentially impossible. If we don't support static imports and exports, those doors are shut. Not to mention the other things I mentioned in my blog post, including straightforward optimizations and interoperability with modules written in other languages.
Even Node itself does not adhere strictly to that style -- look at the 'path' or 'fs' libraries, for example.
I consider that a mistake. And even there, there's a single "exports" object that methods are assigned to.
I'm still having a hard time telling whether your "just one export" thing descends into tautology. I mean, clearly a set of n things can be thought of either as n things or 1 set. But does that actually tell us anything?
Same with the ES standard library: Math and JSON are both multi-export (pseudo-)modules that just export functions.
They export a single object that has functions attached to it. Math.pow(), JSON.parse, etc.
export { parse: parse, stringify:stringify }
Again, this is obvious, so I'm not sure what you're demonstrating. Clearly if we don't care about any of the benefits of static modules, then multiple operations can be provided as properties of a single object. But you've been claiming that it's a mistake for modules to support multiple operations (such as 'fs' or 'path' or Math). And I disagree with that.
In these cases, there's no natural data abstraction needed, no class or object with methods, just functions. To quote John Carmack, "Sometimes the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function."
The Carmack quote is exactly why "export one thing" is so important. Most modules should be a single function; not several things, not a collection of utility methods.
I don't share your interpretation, but let's not quibble over what Carmack meant. What I mean is, JSON.parse and JSON.stringify don't really function as methods. They don't care about this. They're just functions, and they happen to be stored on an object because at the time JSON came out, that was the only way to provide multiple functions.
If I want to write a module that provides n functions, what is inherently superior about providing a Thing with n methods, instead of just directly providing the n functions?
BTW, I'm not saying that I don't like libraries that use method chaining and abstract types and all that good stuff -- I love it! jQuery, optimist -- these are all great! But they are not the only way to write good libraries.
Moreover, it would be hostile to adding static constructs in the future, such as macros, that can be exported from a module. Can you elaborate on that? It took me a few days, but I wrote up some rationale for static module resolution on my blog:
At the risk of seeming like a little bit of a luddite, it seems weird to me to make the "modules that export stuff" use case (which we have now) less awesome, in favor of the "modules that export macros and types" use case (which is not a compelling problem right now).
I don't see what you're saying is less awesome. I already wrote a blog post about all the ways I think it's more awesome. It's certainly more awesome to be able to do callback-free module loading without synchronous I/O, which neither Node nor AMD can do. It's more awesome to be able to put 'export' in front of a local var/function/let/class declaration without having to write separate code elsewhere that constructs an export object. It's more awesome to get early binding optimization. It's more awesome to get built-in error checking for everyone, not just the fraction of people that use linters. It's more awesome to support language interoperability. It's more awesome to keep the door open for great features that could make JavaScript better in the future.
Granted, we don't have that use case because it doesn't exist. But maybe it could be done in a different way that doesn't necessitate multiple exports.
It can't.
If you want to export a bag of functions, then put the functions on an object, and export the object.
It is making it trickier to figure out how to add types and macros, but I'm less excited about those features than I am about making our existing problems easier to solve.
It's not trickier, it's essentially impossible. If we don't support static imports and exports, those doors are shut.
- It should be possible to reconcile the two styles:
  - if the single export object is an object literal, then one-level early checking against imports should be possible
  - if the single export object is not an object literal (eg, a function), early checking could limit static imports to that single object (the properties of which could still be selected dynamically)
In other words, one could permit
module M { export { x: .. , y: ..} }
import {x,y} from M
or
module M { var obj = .. ; export obj }
import obj from M
.. obj.x ..
but not
module M { var obj = .. ; export obj }
import {x,y} from M // early error, even if obj.x/obj.y exist
- Why the focus on an early -limited to one-level- check in the current spec can seem non-optimal:
  - if one wants to export a single object, one has to introduce a level of indirection: 'import {theThing: thing} from module'; this is a workaround, for a newly designed module system
  - one level of indirection is sufficient to defeat the early checks: 'import {jquery:$} from jquery' gives no guarantees whatsoever about the components of '$'
- There is the question of explicit module export interfaces:
  - in ES5, a single explicit export object (as an object literal) makes the export interface obvious, while assignments to exports don't
  - in ES6, export declarations are syntactically limited so that tools can unambiguously identify (one level of) the export interface; that doesn't mean that humans should have to hunt for export declarations spread throughout the module, though
- The same question arises for import dependencies:
  - in ES5, scanning for calls to 'require' and their parameters gives no guarantees if conventions are not followed
  - in ES6, 'import' conventions are enforced statically, so tools can discover dependencies statically; again, humans should not have to hunt for 'import' declarations
- and for import interfaces:
  - ES5 modules make no guarantees
  - ES6 modules rely on static checks so hard that they allow 'import *' to interact with the importer's lexical environment
Here I've come around to Isaac's opinion that 'import *' is a step too far. Previously, I said this is a convenient bad habit that might be left to linters. But that was based on experience with statically typed languages, where modules and their import/export interfaces could still be analyzed in separation.
In ES, that is not the case: if 'System.set' and 'import *' are combined, humans and tools would have to run dependencies to discover the import interface. That makes it impossible to analyze/understand such modules in separation, statically.
In brief, in the context of a language as dynamic as JS, the convenience of 'import *' is not worth the damage it does to modular program understanding. Instead, we should ensure that import interfaces are clearly and statically defined.
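A hypothetical illustration of the combination being criticized (module name and setup code invented):

// setup script, runs before the module below is compiled
System.set("config", buildConfigFromRuntimeData());

// consumer module
import * from "config";
log(verbose);
// which names did the import bring into scope? neither a reader nor a tool can
// say without running buildConfigFromRuntimeData(), so 'verbose' may or may not
// be bound -- the import interface is no longer statically visible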
Not to mention the other things I mentioned in my blog post, including straightforward optimizations and interoperability with modules written in other languages.
As others have tried to point out before, and I've tried to pin down in the thread
ES modules: syntax import vs preprocessing cs plugins
https://mail.mozilla.org/pipermail/es-discuss/2012-July/023985.html
current dynamic JS module systems (both AMD and node's) handle language interoperability and preprocessing in ways that the currently spec-ed ES6 modules cannot:
While all three systems provide for loader plugins in some form, ES6 modules currently make it very hard to use such plugins, requiring a switch away from the new static module system to dynamic and asynchronous features.
The solution seems straightforward, and has been championed by James here: allow loader plugins to be specified on import, without leaving the new world of static and syntactic module imports. If you don't like
import .. from 'loader!resource'
then perhaps
import .. from 'resource' using 'loader'
might do (in both cases, 'loader' itself would be loaded as a module).
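For readers who haven't used AMD loader plugins: a plugin is itself a module that exposes a load hook. A minimal text-style plugin looks roughly like this (simplified from what RequireJS supports; real plugins also handle errors and build-time optimization):

// text.js -- the 'loader' module named on the left of the '!'
define(function () {
  return {
    load: function (name, parentRequire, onload, config) {
      // fetch the resource named on the right of the '!', then hand its
      // contents to the importer as the module's value
      var xhr = new XMLHttpRequest();
      xhr.open("GET", parentRequire.toUrl(name), true);
      xhr.onload = function () { onload(xhr.responseText); };
      xhr.send();
    }
  };
});

// AMD usage today: define(["text!templates/row.html"], function (html) { ... });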
Claus
On Sat, Jul 21, 2012 at 5:03 AM, Claus Reinke <claus.reinke at talk21.com> wrote:
If you want to export a bag of functions, then put the functions on an object, and export the object.
It is making it trickier to figure out how to add types and macros, but I'm less excited about those features than I am about making our existing problems easier to solve.
It's not trickier, it's essentially impossible. If we don't support static imports and exports, those doors are shut.
It should be possible to reconcile the two styles:
if the single export object is an object literal, then one-level early checking against imports should be possible
if the single export object is not an object literal (eg, a function), early checking could limit static imports to that single object (the properties of which could still be selected dynamically).
On the subject of 'exporting one value', there are a few things to say:
- It's been asserted repeatedly that libraries that export 'just one value' are a better design, but while this is excellent style in lots of cases, I don't think a persuasive case has been made that this should be the only style. Dave listed a number of key Node libraries that don't follow this rule, and if you look at other libraries or other languages you mostly see the same thing -- sometimes it's the right thing, and sometimes it isn't.
- Exporting a value as the module runs into tricky issues in the relationship between the prototype hierarchy, the exports, and the definition of module instances. For example, should exporting a function as the export of a module named M mean that M.call is also available?
Why the focus on an early -limited to one-level- check in the current spec can seem non-optimal:
if one wants to export a single object, one has to introduce a level of indirection: 'import {theThing: thing} from module'; this is a workaround, for a newly designed module system
one level of indirection is sufficient to defeat the early checks: 'import {jquery:$} from jquery' gives no guarantees whatsoever about the components of '$'
JS is a dynamic language -- there's no way we're taking away the ability to export JS objects from modules, so I don't see what this objection is about. You could write your whole program in a string, and give up early syntax errors, too.
There is the question of explicit module export interfaces:
in ES5, a single explicit export object (as an object literal) makes the export interface obvious; while assignments to exports don't
in ES6, export declarations are syntactically limited so that tools can unambiguously identify (one level of) the export interface;
that doesn't mean that humans should have to hunt for export declarations spread throughout the module, though
You can use one export declaration at the top of the module, but we're not requiring that style.
The same question arises for import dependencies:
in ES5, scanning for calls to 'require' and their parameters gives no guarantees if conventions are not followed
in ES6, 'import' conventions are enforced statically, so tools can discover dependencies statically; again, humans should not have to hunt for 'import' declarations
I don't understand your point here. The point of the module system is not to mandate one preferred code organization style.
and for import interfaces:
- ES5 modules make no guarantees
- ES6 modules rely on static checks so hard that they allow 'import *' to interact with the importer's lexical environment
Here I've come around to Isaac's opinion that 'import *' is a step too far. Previously, I said this is a convenient bad habit that might be left to linters. But that was based on experience with statically typed languages, where modules and their import/export interfaces could still be analyzed in separation.
In ES, that is not the case: if 'System.set' and 'import *' are combined, humans and tools would have to run dependencies to discover the import interface. That makes it impossible to analyze/understand such modules in separation, statically.
This is not correct. You can look at a single module in isolation, and learn exactly the same things about its interface that you can in Haskell, for example.
In brief, in the context of a language as dynamic as JS, the convenience of 'import *' is not worth the damage it does to modular program understanding. Instead, we should ensure that import interfaces are clearly and statically defined.
I disagree. Clearly and statically defined interfaces are a great thing for some software. Other programs, be they scripts written by middle school kids or dynamically-reflective towers of meta-programming, don't want or need them. What's the interface to $, in the face of jQuery plugins -- you can't tell, statically. But that doesn't mean plugins are a bad thing.
Not to mention the other things I mentioned in my blog post, including straightforward optimizations and interoperability with modules written in other languages.
As others have tried to point out before, and I've tried to pin down in the thread
ES modules: syntax import vs preprocessing cs plugins esdiscuss/2012-July/023985
current dynamic JS module systems (both AMD and node's) handle language interoperability and preprocessing in ways that the currently spec-ed ES6 modules cannot:
While all three systems provide for loader plugins in some form, ES6 modules currently make it very hard to use such plugins, requiring a switch away from the new static module system to dynamic and asynchronous features.
The solution seems straightforward, and has been championed by James here: allow loader plugins to be specified on import, without leaving the new world of static and syntactic module imports. If you don't like import .. from 'loader!resource'
Dave and I have been talking about this, and fortunately it doesn't require changing the core elements of the module system -- it just means making the System loader somewhat more configurable at runtime. Then you'd be able to specify what the 'text' loader should do, and it would automatically hand 'text!resource' off to that loader, using the existing module loaders mechanism. This wouldn't reduce any of the benefits we get, as Dave listed earlier, but would allow us to express the sorts of things you can do in AMD with loader plugins.
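Roughly, and only as an illustration of the dispatch being described (none of these names come from the actual loader API, which was still being designed), the idea is:

// Handlers registered per plugin prefix; a request like
// 'text!templates/row.html' would be handed to the 'text' handler.
var pluginHandlers = {};

function registerPlugin(prefix, handler) {
  pluginHandlers[prefix] = handler;
}

// Hypothetical resolution step: split off a 'plugin!' prefix and delegate;
// otherwise fall back to ordinary module loading.
function resolveRequest(request, loadAsModule) {
  var bang = request.indexOf("!");
  if (bang === -1) return loadAsModule(request);
  var prefix = request.slice(0, bang);
  var resource = request.slice(bang + 1);
  return pluginHandlers[prefix](resource);
}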
On the subject of 'exporting one value', there are a few things to say:
- It's been asserted repeatedly that libraries that export 'just one value' are a better design, but while this is excellent style in lots of cases, I don't think a persuasive case has been made that this should be the only style. Dave listed a number of key Node libraries that don't follow this rule, and if you look at other libraries or other languages you mostly see the same thing -- sometimes it's the right thing, and sometimes it isn't.
As Isaac replied, those exceptions are not examples to follow. As I've tried to point out in my summary, separate exports in ES6 won't be as hard to untangle as separate export assignments in ES5.
Personally, my problem is not so much with different styles, but with which style is taken as the basis of the export system design.
Compare
module M {
.. lots of code ..
export var x = ..;
function helper(..) {..}
export var y = helper(..);
}
with
module M {
.. lots of code ..
var x = ..;
function helper(..) {..}
var y = helper(..);
export {x,y} // or, without punning: export {x: x, y: y}
}
The former is using the harder-to-read style, but the (top level of) export interface can still be extracted without running the code, so even with this style, we have an improvement over ES5 modules.
However, the export object is implicit, and local variable names are directly tied to export object properties. In my experience with module systems, separating local names from exported/imported names is important (for preserving the static/dynamic phase distinction when adding more dynamic features to the design).
The latter variant not only makes it easier for humans to find the export interface, which - as you say - is a matter of style. It also has an explicit export object, and it separates the local names from the exported properties. Using property punning (x as a shortcut for "x": x), we can still have the convenience of just listing the exported names, but the desugared version shows that we have separated local from exported names.
To give just one example of why this is important: imagine a renaming refactoring applied to this module. In the former variant (the current spec), renaming a variable that happens to be exported implies propagating that renaming to all importers!
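To make the hazard concrete, here is a minimal sketch in the strawman notation used above ('computeWidth' is just an assumed helper):

// Current spec style: the exported name is the local variable name, so
// renaming 'x' to, say, 'width' silently changes the module's interface
// and breaks every 'import {x} from M'.
module M {
  export var x = computeWidth();
}

// Explicit export object style: the local name can be renamed freely;
// only the export entry changes, and importers keep seeing 'x'.
module M {
  var width = computeWidth();
  export {x: width};
}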
- Exporting a value as the module runs into tricky issues in the relationship between the prototype hierarchy, the exports, and the definition of module instances. For example, should exporting a function as the export of a module named M mean that M.call is also available?
Hm, I hadn't thought about that, but my intuition tells me that there should be an export object for every module, with just the exports, and that this export object should be accessible from the module object.
The module object might have other properties, and depending on how this is organized (module system properties in the prototype chain of the export object, or the export object as component of the module object), the module object itself might not support 'M.call', but there should be a way to get to the export object within the module object, and that should definitely support something like 'M.exports.call' if M's export is a function.
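A plain-object sketch of the distinction being suggested, purely illustrative (neither the property names nor the layout are part of any spec):

// A module whose single export is a function, pictured as a plain object
// carrying the export object on an 'exports' property.
var M = {
  // module-system metadata could live here, or on a prototype
  exports: function draw(ctx) { /* ... */ }
};

console.log(typeof M.call);          // "undefined": the module object itself
                                     // need not behave like the exported function
console.log(typeof M.exports.call);  // "function": Function.prototype.call is
                                     // reachable through the export object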
[splitting my reply here]
To summarize: I have no problem with separate exports as a convenience on top of an export system that maintains an explicit export object and separates local from exported names.
I do have a problem with an export system that conflates local and exported names and does not provide access to an unmodified export object.
Does this clarify my concerns? Claus
On 24 July 2012 05:03, Claus Reinke <claus.reinke at talk21.com> wrote:
Hm, I hadn't thought about that, but my intuition tells me that there should be an export object for every module, with just the exports, and that this export object should be accessible from the module object.
Being able to access the export object from the module object enables a pattern we use locally, which is roughly
require("myModule").configParameter = xyz;
or
require("myModule").errorReporter = function(err){alert(err)};
exports.errorReporter, exports.configParameter are then used heavily within the module -- normally, they are not even set by the user, but they are there in case the user needs to override some behaviour.
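Spelled out as a node-style sketch (file names, property values and the doWork function are only illustrative), the pattern looks roughly like this:

// myModule.js
exports.configParameter = 42;                    // default, rarely overridden
exports.errorReporter = function (err) {         // default reporter
  console.error(err);
};

exports.doWork = function () {
  // read the properties at call time, so later overrides take effect
  if (exports.configParameter < 0) {
    exports.errorReporter(new Error("configParameter must not be negative"));
  }
};

// consumer.js
var myModule = require("myModule");
myModule.errorReporter = function (err) { alert(err); };   // override behaviour
myModule.doWork();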
Being able to access the export object from the module object enables a pattern we use locally, which is roughly
require("myModule").configParameter = xyz; .. exports.errorReporter, exports.configParameter are then used heavily within the module -- normally, they are not even set by the user, but they there in case the user needs to override some behaviour.
Shouldn't that already be possible? Only the (top-level) exports are non-modifiable, so I think this would work:
module M {
  let configParameter = someDefault;   // placeholder for a default value ('default' itself is a reserved word)
  export function setConfigParameter(value) { configParameter = value }
  export .. other code using configParameter ..
}
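For completeness, a possible use from the importing side, in the same strawman notation ('run' is just an assumed stand-in for the module's other exports):

import {setConfigParameter, run} from M;

setConfigParameter({verbose: true});   // override the default before use
run();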
Looking forward to modules shims and implementations, so that we can verify and harmonize our interpretations of the spec.
Claus
[I've elided some points and comments: I was trying to summarize what seemed to me the core issues in this discussion; if my summary was unclear, it won't help to add more text; if my summary was clear, but the disagreements persist, adding more text won't help, either]
Here I've come around to Isaac's opinion that 'import *' is a step too far. Previously, I said this is a convenient bad habit that might be left to linters. But that was based on experience with statically typed languages, where modules and their import/export interfaces could still be analyzed in separation.
In ES, that is not the case: if 'System.set' and 'import *' are combined, humans and tools would have to run dependencies to discover the import interface. That makes it impossible to analyze/understand such modules in separation, statically.
This is not correct. You can look at a single module in isolation, and learn exactly the same things about its interface that you can in Haskell, for example.
Haskell isn't a good role model wrt module systems - the main design goal there was simplicity, so it doesn't use advanced module system ideas (at least not in the standard module system). Also, some good aspects have disappeared, and some aspects haven't quite scaled up with the increased use.
One thing that disappeared (because it wasn't done well) was interface files, which allowed to develop modules wrt module interfaces rather than module implementations. So, yes, Haskell suffers from a combination of 'import * from M' with no easy way to pin down M's expected export interface.
Standard ML, and variants that support higher-order or even first-class functors (parameterized modules), might be more interesting in this context. Even when it can't be statically (before running the module-level code) determined which module will provide the imports, one can pin down which interface that module will provide. So one can understand each module in isolation, with the import and export interfaces acting as boundaries.
But we can stay in ES6 for this discussion - consider
<script>
System.set('X', (Math.random() > 0.5) ? {x: "hi"} : {u: "oops"});
</script>
<script>
module M1 { import * from X;  console.log(x.length); }
module M2 { import {x} from X; console.log(x.length); }
</script>
I cannot look at 'M1' and know whether or not 'x' is bound, because the import interface is unspecified. So I'd have to look at 'X' and, in this case, I'd have to run 'X' before I could tell whether 'x' in 'M1' is going to be defined.
In contrast, I can look at 'M2' and know, because the import interface is specified, that, if 'X' is accepted as dependency for 'M2', then 'x' will be available. There might still be a problem at runtime, but it will be outside 'M2', and it will be about matching an export interface to an import interface, not about static scoping in 'M2'.
(Such dynamic module aspects are another reason why one wants to keep exported, imported, and local names separate.)
In brief, in the context of a language as dynamic as JS, the convenience of 'import *' is not worth the damage it does to modular program understanding. Instead, we should ensure that import interfaces are clearly and statically defined.
I disagree. Clearly and statically defined interfaces are a great thing for some software. Other programs, be they scripts written by middle school kids or dynamically-reflective towers of meta-programming, don't want or need them. What's the interface to
$
, in the face of jQuery plugins -- you can't tell, statically. But that doesn't mean plugins are a bad thing.
The question is not the export interface of '$', which can change with every plugin or new release. The question is whether importers of '$' can isolate themselves from such changes by specifying an import interface. Any export interface that provides the import interface will do.
Of course, not having to write out 'standard' imports is not just handy but a pragmatic necessity; still, as with physical 'constants', things can go awry if the 'standards' change. It would be great if I could abstract over import interfaces, so that I don't have to write them out on every import declaration.
One way to do this is via tools: for Haskell, I had a Vim plugin that would allow me to write an unqualified variable, and then have the plugin search the available module exports to make suggestions about imports, adding the selected imports and qualifiers. So I'd be free of worrying about writing import declarations, but my code would have the import interfaces documented.
Another way would use module loaders: the default System loader already inserts implicit imports for some standard modules, so it could insert those with explicit import lists.
And I could have a project-specific loader adding import declarations for my 'standard' project imports.
And for school kids, one could have a course-specific loader inserting course-specific import declarations. One could even follow DrScheme in having level-specific 'standard' imports. Or have a standard set of 'play around' imports.
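As a purely hypothetical sketch (the hook name and signature are assumptions; the loader API was not fixed at this point), such a loader could prepend explicit import declarations during translation:

// Hypothetical: a course- or project-specific loader that prepends a fixed
// prelude of explicit imports to every module source it translates.
var prelude =
  "import {map, filter, reduce} from 'underscore';\n" +
  "import {log} from 'logger';\n";

var courseLoader = {
  translate: function (source, moduleName) {
    return prelude + source;
  }
};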
I could be wrong, of course, but I think that there are other (and better) solutions to the issues we are tempted to address with 'import *'.
import .. from 'loader!resource'
Dave and I have been talking about this, and fortunately it doesn't require changing the core elements of the module system -- it just means making the System loader somewhat more configurable at runtime. Then you'd be able to specify what the 'text' loader should do, and it would automatically hand 'text!resource' off to that loader, using the existing module loaders mechanism. This wouldn't reduce any of the benefits we get, as Dave listed earlier, but would allow us to express the sorts of things you can do in AMD with loader plugins.
Great!-) From what I could see of the discussion, this should remove the main technical obstacles raised against upgrading to ES6 modules.
The translate hook should allow for things like Coffeescript or Streamline. The fetch hook ought to help with alternate sources (CDN with local fallback). The resolve hook ought to allow something like the RequireJS config (mapping abstract module names to concrete resources in a central position).
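For illustration only, the hooks named above might be used roughly like this (the hook signatures and the 'download' helper are assumptions, not the spec'ed API):

var config = {
  paths:     { jquery: "https://cdn.example.com/jquery.js" },  // abstract -> concrete
  fallbacks: { jquery: "lib/jquery.js" }                       // local copies
};

var myLoader = {
  // central mapping of abstract module names to concrete resources,
  // in the spirit of the RequireJS config
  resolve: function (name) {
    return config.paths[name] || name;
  },
  // try the primary source first, fall back to the local copy on failure
  fetch: function (url, name, onSuccess, onError) {
    download(url, onSuccess, function () {
      download(config.fallbacks[name] || url, onSuccess, onError);
    });
  }
};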
Looking forward to ES6 modules, Claus
On Tue, Jul 24, 2012 at 1:11 PM, Claus Reinke <claus.reinke at talk21.com> wrote:
Here I've come around to Isaac's opinion that 'import *' is a step too far. Previously, I said this is a convenient bad habit that might be left to linters. But that was based on experience with statically typed languages, where modules and their import/export interfaces could still be analyzed in separation.
In ES, that is not the case: if 'System.set' and 'import *' are combined, humans and tools would have to run dependencies to discover the import interface. That makes it impossible to analyze/understand such modules in separation, statically.
This is not correct. You can look at a single module in isolation, and learn exactly the same things about its interface that you can in Haskell, for example.
Haskell isn't a good role model wrt module systems - the main design goal there was simplicity, so it doesn't use advanced module system ideas (at least not in the standard module system). Also, some good aspects have disappeared, and some aspects haven't quite scaled up with the increased use. One thing that disappeared (because it wasn't done well) was interface files, which allowed to develop modules wrt module interfaces rather than module implementations. So, yes, Haskell suffers from a combination of 'import * from M' with no easy way to pin down M's expected export interface.
Standard ML, and variants that support higher-order or even first-class functors (parameterized modules), might be more interesting in this context. Even when it can't be statically (before running the module-level code) determined which module will provide the imports, one can pin down which interface that module will provide. So one can understand each module in isolation, with the import and export interfaces acting as boundaries.
There's a lot to say about module systems in Haskell, ML and other languages, but we really don't want to try to make JS push the boundaries on what can be effectively statically checked. The current design adds a very small amount of static information and checking to modules, and leaves open room for some more. The SML module system, while impressive, is most certainly not the goal.
But we can stay in ES6 for this discussion - consider
<script> System.set('X',(Math.random() > 0.5) ? {x:"hi"} : {u:"oops"}; </script>
<script> module M1 { import * from X; console.log(x.length); } module M2 { import {x} from X; console.log(x.length); } </script>
I cannot look at 'M1' and know whether or not 'x' is bound, because the import interface is unspecified. So I'd have to look at 'X' and, in this case, I'd have to run 'X' before I could tell whether 'x' in 'M1' is going to be defined. In contrast, I can look at 'M2' and know, because the import interface is specified, that, if 'X' is accepted as dependency for 'M2', then 'x' will be available. There might still be a problem at runtime, but it will be outside 'M2', and it will be about matching an export interface to an import interface, not about static scoping in 'M2'.
First, that dynamic module construction is going to be hard to reason about -- that's the price we pay for all the great things about a dynamic language. Second, if you want M2, we're making it possible, even easy, to write. But I don't think we should ban people from using import * because sometimes it's hard to reason about.
(Such dynamic module aspects are another reason why one wants to keep exported, imported, and local names separate.)
In brief, in the context of a language as dynamic as JS, the convenience of 'import *' is not worth the damage it does to modular program understanding. Instead, we should ensure that import interfaces are clearly and statically defined.
I disagree. Clearly and statically defined interfaces are a great thing for some software. Other programs, be they scripts written by middle school kids or dynamically-reflective towers of meta-programming, don't want or need them. What's the interface to $, in the face of jQuery plugins -- you can't tell, statically. But that doesn't mean plugins are a bad thing.

The question is not the export interface of '$', which can change with every plugin or new release. The question is whether importers of '$' can isolate themselves from such changes by specifying an import interface. Any export interface that provides the import interface will do.
Right now, jQuery works without its users specifying such an interface. Again, people who want to specify such an interface have their reasons, and we want to support them. But people who don't should also find a home in the design.
import .. from 'loader!resource'
Dave and I have been talking about this, and fortunately it doesn't require changing the core elements of the module system -- it just means making the System loader somewhat more configurable at runtime. Then you'd be able to specify what the 'text' loader should do, and it would automatically hand 'text!resource' off to that loader, using the existing module loaders mechanism. This wouldn't reduce any of the benefits we get, as Dave listed earlier, but would allow us to express the sorts of things you can do in AMD with loader plugins.

Great!-) From what I could see of the discussion, this should remove the main technical obstacles raised against upgrading to ES6 modules. The translate hook should allow for things like Coffeescript or Streamline. The fetch hook ought to help with alternate sources (CDN with local fallback). The resolve hook ought to allow something like the RequireJS config (mapping abstract module names to concrete resources in a central position).
That's the idea.
Sam Tobin-Hochstadt wrote:
But I don't think we should ban people from using import * because sometimes it's hard to reason about.
Just to focus on import *, here's where I am:
I'm in favor of deferring (not to say cutting) import *, in order to get ES6 modules spec'ed and avoid protracted maybe-good/maybe-bad arguments.
If someone prototyping or REPL'ing feels the pain, they should wail in agony. Enough wailing and we'll figure out something for their use case -- but not on the critical path for ES6.
Brendan Eich wrote:
Sam Tobin-Hochstadt wrote:
But I don't think we should ban people from using import * because sometimes it's hard to reason about.

Just to focus on import *, here's where I am:
I'm in favor of deferring (not to say cutting) import *, in order to get ES6 modules spec'ed and avoid protracted maybe-good/maybe-bad arguments.
If someone prototyping or REPL'ing feels the pain, they should wail in agony. Enough wailing and we'll figure out something for their use case -- but not on the critical path for ES6.
/be
Agree. import * is just a variant of with -- everyone knows it is evil. import x, y, z from xxx is enough for most cases.
On Tue, Jul 24, 2012 at 5:11 PM, Brendan Eich <brendan at mozilla.org> wrote:
Sam Tobin-Hochstadt wrote:
But I don't think we should ban people from using import * because sometimes it's hard to reason about.

Just to focus on import *, here's where I am:
I'm in favor of deferring (not to say cutting) import *, in order to get ES6 modules spec'ed and avoid protracted maybe-good/maybe-bad arguments.
If someone prototyping or REPL'ing feels the pain, they should wail in agony. Enough wailing and we'll figure out something for their use case -- but not on the critical path for ES6.
+1 I'm all for deferring on this. It's easy to add later, but you can't take it back. I'm used to using import * with Java, but I think the pattern matching should be concise enough, and I think it's probably the right level of explicitness.
BelleveInvis wrote:
Brendan Eich wrote:
Sam Tobin-Hochstadt wrote:
But I don't think we should ban people from using import * because sometimes it's hard to reason about.

Just to focus on import *, here's where I am:
I'm in favor of deferring (not to say cutting) import *, in order to get ES6 modules spec'ed and avoid protracted maybe-good/maybe-bad arguments.
If someone prototyping or REPL'ing feels the pain, they should wail in agony. Enough wailing and we'll figure out something for their use case -- but not on the critical path for ES6.
/be
Agree.
Ok.
import * is just a variant of with

No, it's not.
Brendan Eich wrote:
BelleveInvis wrote:
Brendan Eich wrote:
Sam Tobin-Hochstadt wrote:
But I don't think we should ban people from using import * because sometimes it's hard to reason about.

Just to focus on import *, here's where I am:
I'm in favor of deferring (not to say cutting) import *, in order to get ES6 modules spec'ed and avoid protracted maybe-good/maybe-bad arguments.
If someone prototyping or REPL'ing feels the pain, they should wail in agony. Enough wailing and we'll figure out something for their use case -- but not on the critical path for ES6.
/be
Agree.
Ok.
import * is just a variant of with

No, it's not.
/be
Only explicitly exposed members? That will be a lot better.
Posted here: tagneto.blogspot.ca/2012/06/es-modules-suggestions-for-improvement.html
Some of it is a retread of earlier feedback, but some of it may have been lost in my poor communication style. I did this as a post instead of inline feedback since it is long, it has lots of hyperlinks and it was also meant for outside es-discuss consumption.
I am not expecting a response as it should mostly be a retread, maybe just phrased differently. Just passing along the link in the interest of full disclosure, maybe the rephrasing helps understand the earlier feedback.
James