A few more questions about the current module proposal
On Wed, Jul 4, 2012 at 12:29 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
Hello, good people,
I fear I have some misunderstanding going on with the current module proposal, as well as outright ignorance, hence I'd like to get answers to a few questions, as I'm pretty sure I'm not the only one. :)
- How does the static resolution and static scoping behave when out of the normal context? As an example, if import is in an eval() call, what would happen:

    var code = loadFromURL('example.org/foo.js') // content: import foo from "bar"
    eval(code)
    console.log(foo) // ???
First, what does loadFromURL do? That looks like sync IO to me.
Would this example block until the module is resolved and loaded? Would it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import foo successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO to eval (or to anything else).
- How does the module proposal address the increasing need for interaction between pure JS and compile-to-JS languages? (CoffeeScript, Haxe, JS++, JS*, etc)?
More specifically, can you add hooks for preprocessing the files? If not, why? I think it would break static module resolution, but are we certain that static module resolution is worth the price of excluding JS preprocessors from the module system (aside from server-side preprocessing, that is)? Again, my personal opinion is that including compile-to-JS languages in the module system would be worth much more than static resolution, but feel free to enlighten me.
We've thought a lot about compile-to-JS languages, and a bunch of the
features of the module loader system are there specifically to support
these languages. You can build a loader that uses the translate
hook to perform arbitrary translation, such as running the
CoffeeScript compiler, before actually executing the code. So you'll
be able to write something like this:
    let CL = new CoffeeScriptLoader();
    CL.load("code/something.coffee", function(m) { ... });
There are two ways to potentially make this more convenient. One would be to add something to HTML to declare the loader to be used with particular script tags, which we've talked about, but I think we should wait on that until we have the base module system in place. The other would be to ship some of these loaders in browsers, but if I were the author of a compile-to-JS language, I wouldn't want to be chained to browser release and upgrade schedules.
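Since no engine implements the Loader API yet, a runnable sketch can only simulate the idea: a load step that runs a translate function before evaluating. makeLoader and the toy "compiler" below are illustrative assumptions, not the draft API.

```javascript
// Simulation of a loader whose translate step compiles source before
// evaluation; makeLoader and the toy translate function are stand-ins,
// not the proposed Loader API.
function makeLoader(translate) {
  return {
    load(source, cb) {
      const js = translate(source); // e.g. run the CoffeeScript compiler here
      cb(eval(js));                 // evaluate the translated JS
    }
  };
}

// A toy "compiler" standing in for CoffeeScript-to-JS translation.
const CL = makeLoader(src =>
  src === "square = (x) -> x * x"
    ? "(function (x) { return x * x; })"
    : src);

CL.load("square = (x) -> x * x", fn => console.log(fn(3))); // 9
```

The real translate hook would receive URLs and resolution info as well; this only shows the compile-then-evaluate ordering.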
On Wed, Jul 4, 2012 at 9:13 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu>wrote:
On Wed, Jul 4, 2012 at 12:29 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
Hello, good people,
I fear I have some misunderstanding going on with the current module proposal, as well as outright ignorance, hence I'd like to get answers to a few questions, as I'm pretty sure I'm not the only one. :)
- How does the static resolution and static scoping behave when out of the normal context? As an example, if import is in an eval() call, what would happen:

    var code = loadFromURL('example.org/foo.js') // content: import foo from "bar"
    eval(code)
    console.log(foo) // ???

First, what does loadFromURL do? That looks like sync IO to me.
Indeed it is, to simplify things. Let's pretend it's a function that gets the text contents of a URL.
Would this example block until the module is resolved and loaded? Would it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import foo successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO to eval (or to anything else).
So basically, eval()'ing something acquired via XHR would no longer give the same result as it does if the same script is in a script tag? Suffice to say I disagree strongly with this choice, but I'm sure the rationale behind this choice is strong.
- How does the module proposal address the increasing need for interaction between pure JS and compile-to-JS languages? (CoffeeScript, Haxe, JS++, JS*, etc)?
More specifically, can you add hooks for preprocessing the files? If not, why? I think it would break static module resolution, but are we certain that static module resolution is worth the price of excluding JS preprocessors from the module system (aside from server-side preprocessing, that is)? Again, my personal opinion is that including compile-to-JS languages in the module system would be worth much more than static resolution, but feel free to enlighten me.
We've thought a lot about compile-to-JS languages, and a bunch of the features of the module loader system are there specifically to support these languages. You can build a loader that uses the translate hook to perform arbitrary translation, such as running the CoffeeScript compiler, before actually executing the code. So you'll be able to write something like this:

    let CL = new CoffeeScriptLoader();
    CL.load("code/something.coffee", function(m) { ... });
There are two ways to potentially make this more convenient. One would be to add something to HTML to declare the loader to be used with particular script tags, which we've talked about, but I think we should wait on that until we have the base module system in place. The other would be to ship some of these loaders in browsers, but if I were the author of a compile-to-JS language, I wouldn't want to be chained to browser release and upgrade schedules.
Okay, that seems like a solution of sorts. Next question: does this mean that, for example, CoffeeScript programs will be able to use pure JS modules via the import statement? I.e., can the translated code contain an import statement? If yes, as I presume, good.
Still, this is nowhere near the convenience of node's require() and the possibility of adding a new preprocessor just using require.registerExtension() and after that you have the same require() for a new language. I guess we'll see whether that convenience will outweigh the benefits of static resolution.
And by the way, please don't mistake my concern for negativity; I really appreciate the extremely hard work behind the current proposal. But I'd hate to see it end up not being superior to the existing module systems (i.e. making them obsolete), becoming instead just another module system library authors have to support and provide different versions for, and just another module system developers need to learn because some third-party module they use works only with it. That fragmentation cost would IMHO be far worse than not having standardized modules at all (for now), but I guess we'll find out whether it's a hit or a miss only after it's already there.
Thanks for answering my questions!
On Wed, Jul 4, 2012 at 11:13 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
We've thought a lot about compile-to-JS languages, and a bunch of the features of the module loader system are there specifically to support these languages. You can build a loader that uses the translate hook to perform arbitrary translation, such as running the CoffeeScript compiler, before actually executing the code. So you'll be able to write something like this:

    let CL = new CoffeeScriptLoader();
    CL.load("code/something.coffee", function(m) { ... });
Will heterogenous transpiling in a web app be supported? Can a JS module depend on a CoffeeScript file, and vice versa? What about a JS module depending on a CoffeeScript and text resource? What would that look like?
For instance, it is common in requirejs projects to use CoffeeScript and text resources via the loader plugin system. While the text plugin is fairly simple, it can be thought of as a transpiler, converting text files to module values that are JS strings. There could also be a "text template" transpiler that converts the text to a JS function which, when given data, produces a custom HTML string.
For requirejs/AMD systems, the transpiler handler is part of the module ID. This means that nested dependencies can use a transpiler without the top level application developer needing to map out what loader transpilers are in play and somehow configure transpiler capabilities at the top level before starting main module loading.
It also makes it clear which transpiler should be used for a given module dependency. Each module gets to choose the type of transpiler: for a given .html file, one module may want to use a text template transpiler where another module may just want a raw text-to-string transpiler. Both of those modules can be used in the same project as nested dependencies without the end developer needing to wire them up at the top level.
James
Will heterogenous transpiling in a web app be supported? Can a JS module depend on a CoffeeScript file, and vice versa?
Right - Sam's example of having a specific CoffeeScript loader isn't going to actually work for this reason. Instead, we'd have to figure out which "to-JS" compiler to use inside of the translate hook.
    let maybeCoffeeLoader = new Loader(System, {
      translate(src, relURL, baseURL, resolved) {
        // If the file extension is ".coffee", use the coffee-to-JS compiler
        if (extension(relURL) === ".coffee") src = coffeeToJS(src);
        return src;
      }
    });
You could use the resolve hook in concert with the translate hook to create AMD-style plugin directives. It looks pretty flexible to me.
One question, though: branching on the file extension, as above, will not generally work. The source code might be served through a URL that does not have a file extension. On the web, though, we'll generally have access to a Content-Type header. In the current design, there doesn't appear to be a way to get that information.
One possibility for getting the Content-Type header would be to override the fetch hook and use cross-domain XHR, but that seems like a lot of duplicated code just to get data that's already being received by the browser.
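To make the suggestion concrete, here's a hedged sketch of dispatching on a Content-Type value instead of a file extension; pickCompiler and the MIME-type strings are assumptions for illustration, not anything specified.

```javascript
// Sketch: choose a translator from a Content-Type header value rather than
// a file extension. The function name and MIME types are assumptions.
function pickCompiler(contentType) {
  const mime = contentType.split(";")[0].trim().toLowerCase();
  switch (mime) {
    case "text/coffeescript":
    case "application/vnd.coffeescript":
      return "coffee";
    default:
      return "js"; // plain JavaScript: no translation needed
  }
}

console.log(pickCompiler("text/coffeescript; charset=utf-8")); // "coffee"
console.log(pickCompiler("application/javascript"));           // "js"
```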
Thoughts?
Sorry I haven't gotten a chance to get into this thread sooner, let me catch up a bit:
On Wed, Jul 4, 2012 at 2:56 PM, Jussi Kalliokoski < jussi.kalliokoski at gmail.com> wrote:
On Wed, Jul 4, 2012 at 9:13 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
On Wed, Jul 4, 2012 at 12:29 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
- How does the static resolution and static scoping behave when out of the normal context? As an example, if import is in an eval() call, what would happen:

    var code = loadFromURL('example.org/foo.js') // content: import foo from "bar"
    eval(code)
    console.log(foo) // ???

First, what does loadFromURL do? That looks like sync IO to me.

Indeed it is, to simplify things. Let's pretend it's a function that gets the text contents of a URL.
Would this example block until the module is resolved and loaded? Would it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import foo successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO to eval (or to anything else).

So basically, eval()'ing something acquired via XHR would no longer give the same result as it does if the same script is in a script tag? Suffice to say I disagree strongly with this choice, but I'm sure the rationale behind this choice is strong.
So I guess my take on it is that any import statement should be illegal inside of eval. Looking at the proposal, that doesn't sound like it, though. Let's take the "loadFromUrl" out of the equation.
    import foo from "baz"
    var code = 'import foo from "bar"';
    eval(code);
    console.log(foo);
There is a reason why import got special syntax, and it wasn't just so that it would be easier to type. Putting it inside eval eliminates any ability for static analysis to happen upfront during the parse before actually executing. The import dependency cannot be seen, and in this case there is a collision on "foo" which should have been detected at compilation time. I can think of a dozen other reasons why imports should not be allowed in eval, but that's just one which seems like a pretty clear problem.
On Thu, Jul 5, 2012 at 8:56 AM, Kevin Smith <khs4473 at gmail.com> wrote:
One question, though: branching on the file extension, as above, will not generally work. The source code might be served through a URL that does not have a file extension. On the web, though, we'll generally have access to a Content-Type header. In the current design, there doesn't appear to be a way to get that information.
This makes a lot of sense to me. Great idea.
Oh, I also meant to ask - I do have a question of my own. It seems so basic, but I can't figure it out. If I have a file that contains two modules - let's say in foo.js
------- foo.js ----------------
module Foo { export let x = 42; }
module Bar { export let y = 12; }
and I try to do:
import y from "foo.js"
What happens? Similar problem if I try to do
import "foo.js" as Foo
Am I incorrect in thinking that there can be more than one top-level module? Or is it that an imported file is automatically a module, and therefore Foo and Bar are nested modules in this case? Would I then have to say "export module Foo..." and later "import Foo from 'foo.js';"?
Also, I have a suggestion. It has recently been discussed about how to work with legacy code. For example, the classic example of wanting to import jQuery even if it is not defined as a module. What if we just allowed for the syntax:
import "jquery.js"
with no "as" or "from". This would assume jquery.js is not a module file. It would fail early if there were any imports or exports, and it would execute in the global scope just like a script tag. The value of it, though, would be to allow for a declaration of the dependency, and the ability to load it without putting in a script tag. If multiple modules import it, it would only be loaded once, and executed in the first place it was needed, all without needing to include it with a script tag. Because it is not allowed to contain imports, it would not be capable of causing circular dependency issues, and because it does not have exports, it cannot be interpreted as a module and used with the from/as syntax.
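The load-once, execute-once behaviour described above can be simulated in a few lines; importBare and its Set-based registry are illustrative assumptions, not proposed semantics.

```javascript
// Simulation of a bare `import "jquery.js"`: the script is executed at most
// once, the first time some module needs it. importBare and the registry
// are assumptions for illustration.
const executed = new Set();
function importBare(url, executeScript) {
  if (executed.has(url)) return; // already ran: a second import is a no-op
  executed.add(url);
  executeScript();               // runs once, like a script-tag evaluation
}

let runs = 0;
importBare("jquery.js", () => { runs++; });
importBare("jquery.js", () => { runs++; });
console.log(runs); // 1
```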
On Thu, Jul 5, 2012 at 8:06 PM, Russell Leggett <russell.leggett at gmail.com>wrote:
Sorry I haven't gotten a chance to get into this thread sooner, let me catch up a bit:
On Wed, Jul 4, 2012 at 2:56 PM, Jussi Kalliokoski < jussi.kalliokoski at gmail.com> wrote:
On Wed, Jul 4, 2012 at 9:13 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
On Wed, Jul 4, 2012 at 12:29 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
- How does the static resolution and static scoping behave when out of the normal context? As an example, if import is in an eval() call, what would happen:

    var code = loadFromURL('example.org/foo.js') // content: import foo from "bar"
    eval(code)
    console.log(foo) // ???

First, what does loadFromURL do? That looks like sync IO to me.

Indeed it is, to simplify things. Let's pretend it's a function that gets the text contents of a URL.
Would this example block until the module is resolved and loaded? Would it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import foo successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO to eval (or to anything else).

So basically, eval()'ing something acquired via XHR would no longer give the same result as it does if the same script is in a script tag? Suffice to say I disagree strongly with this choice, but I'm sure the rationale behind this choice is strong.
So I guess my take on it is that any import statement should be illegal inside of eval. Looking at the proposal, that doesn't sound like it, though. Let's take the "loadFromUrl" out of the equation.
    import foo from "baz"
    var code = 'import foo from "bar"';
    eval(code);
    console.log(foo);
There is a reason why import got special syntax, and it wasn't just so that it would be easier to type. Putting it inside eval eliminates any ability for static analysis to happen upfront during the parse before actually executing. The import dependency cannot be seen, and in this case there is a collision on "foo" which should have been detected at compilation time. I can think of a dozen other reasons why imports should not be allowed in eval, but that's just one which seems like a pretty clear problem.
The implication of banning import in eval is that modules written for an existing eval-based module loader can't adopt the new module system, quite possibly passing this limitation on to projects using those modules as well. How much this would slow down adoption, I can't tell. Maybe it's insignificant.
Another thing it means is that eval() would no longer do what it says on the box, i.e. evaluate an expression of JS, as the code inside eval() would be a whole different JS.
On Thu, Jul 5, 2012 at 8:56 AM, Kevin Smith <khs4473 at gmail.com> wrote:
One question, though: branching on the file extension, as above, will not generally work. The source code might be served through a URL that does not have a file extension. On the web, though, we'll generally have access to a Content-Type header. In the current design, there doesn't appear to be a way to get that information.
This makes a lot of sense to me. Great idea.
+1
On Wed, Jul 4, 2012 at 2:56 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
Would this example block until the module is resolved and loaded? Would it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import foo successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO to eval (or to anything else).

So basically, eval()'ing something acquired via XHR would no longer give the same result as it does if the same script is in a script tag? Suffice to say I disagree strongly with this choice, but I'm sure the rationale behind this choice is strong.
There's a Loader.evalAsync(src, cb) method which supports fetching remote resources using import statements.

The alternatives would be one of:

- banning reference to remote data except using callbacks
- making eval do synchronous IO

I think both of those are much worse.
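A minimal simulation of the asynchronous contract behind evalAsync -- the callback fires only after the dependency is available, and nothing blocks. fakeEvalAsync and its registry are assumptions, not the draft API.

```javascript
// Simulation of the evalAsync contract: resolve a dependency asynchronously,
// never with blocking IO. fakeEvalAsync and the registry are illustrative.
const registry = { bar: { foo: 42 } }; // pretend "bar" has been fetched
function fakeEvalAsync(dep, cb) {
  setTimeout(() => cb(registry[dep]), 0); // deliver the module later
}

fakeEvalAsync("bar", m => console.log(m.foo)); // eventually logs 42
```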
- How does the module proposal address the increasing need for interaction between pure JS and compile-to-JS languages? (CoffeeScript, Haxe, JS++, JS*, etc)?
More specifically, can you add hooks for preprocessing the files? If not, why? I think it would break static module resolution, but are we certain that static module resolution is worth the price of excluding JS preprocessors from the module system (aside from server-side preprocessing, that is)? Again, my personal opinion is that including compile-to-JS languages in the module system would be worth much more than static resolution, but feel free to enlighten me.
We've thought a lot about compile-to-JS languages, and a bunch of the features of the module loader system are there specifically to support these languages. You can build a loader that uses the translate hook to perform arbitrary translation, such as running the CoffeeScript compiler, before actually executing the code. So you'll be able to write something like this:

    let CL = new CoffeeScriptLoader();
    CL.load("code/something.coffee", function(m) { ... });
There are two ways to potentially make this more convenient. One would be to add something to HTML to declare the loader to be used with particular script tags, which we've talked about, but I think we should wait on that until we have the base module system in place. The other would be to ship some of these loaders in browsers, but if I were the author of a compile-to-JS language, I wouldn't want to be chained to browser release and upgrade schedules.
Okay, that seems like a solution of sorts. Next question: does this mean that, for example, CoffeeScript programs will be able to use pure JS modules via the import statement? I.e., can the translated code contain an import statement? If yes, as I presume, good.
Yes, translation is to the full JS language. Of course, CoffeeScript's interface to the JS module system is up to Jeremy, not Dave and me, to decide.
Still, this is nowhere near the convenience of node's require() and the possibility of adding a new preprocessor just using require.registerExtension() and after that you have the same require() for a new language. I guess we'll see whether that convenience will outweigh the benefits of static resolution.
There's no tension between static resolution and allowing the loader to change dynamically -- I've posted lots of code using System.set, for example. Making the default system loader arbitrarily mutable has many other potential problems, though: it makes it harder for engine implementers to optimize, it potentially raises security holes, and it lets any code on the page change the meaning of all the rest of the code on the page. But fundamentally, that's about the design of the System loader, and we can improve that without having to change the fundamental aspects of the design (that's one of the nice aspects of the system).
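The System.set idea can be approximated with an ordinary registry; this Map-based sketch is an assumption about the shape of the API, not the draft itself.

```javascript
// Toy registry approximating the System.set idea: dynamically registering a
// module object under a name. The Map-based implementation is an assumption.
const modules = new Map();
const System = {
  set(name, mod) { modules.set(name, mod); },
  get(name) { return modules.get(name); }
};

System.set("answer", { value: 42 }); // register a module dynamically
console.log(System.get("answer").value); // 42
```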
On Thu, Jul 5, 2012 at 8:59 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu>wrote:
On Wed, Jul 4, 2012 at 2:56 PM, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
Would this example block until the module is resolved and loaded? Would
it throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea
to ban import in eval.
Second, it depends on whether "bar" is a previously-loaded module. For example, if "bar" is a library provided by the host environment, such as the browser, then everything will be fine, and the code will import
foo
successfully. If "bar" is a remote resource, this will throw -- we're not going to add synchronous IO toeval
(or to anything else).So basically, eval()'ing something acquired via XHR would no longer give the same result as it does if the same script is in a script tag? Suffice to say I disagree strongly with this choice, but I'm sure the rationale behind this choice is strong.
There's a Loader.evalAsync(src, cb) method which supports fetching remote resources using import statements.

The alternatives would be one of:

- banning reference to remote data except using callbacks
- making eval do synchronous IO

I think both of those are much worse.
Ahah! This is why I ask, forgive my ignorance! :) This is excellent, I'll withdraw my argument as evalAsync solves my problem. You're absolutely correct, now I very much agree that import should be banned from eval. An existing evaling module loader can just be updated to use evalAsync.
- How does the module proposal address the increasing need for interaction between pure JS and compile-to-JS languages? (CoffeeScript, Haxe, JS++,
JS*, etc)?
More specifically, can you add hooks for preprocessing the files? If not, why? I think it would break static module resolution, but are we certain that static module resolution is worth the price of excluding JS preprocessors from the module system (aside from server-side preprocessing, that is)? Again, my personal opinion is that including compile-to-JS languages in the module system would be worth much more than static resolution, but feel free to enlighten me.
We've thought a lot about compile-to-JS languages, and a bunch of the features of the module loader system are there specifically to support these languages. You can build a loader that uses the translate hook to perform arbitrary translation, such as running the CoffeeScript compiler, before actually executing the code. So you'll be able to write something like this:

    let CL = new CoffeeScriptLoader();
    CL.load("code/something.coffee", function(m) { ... });
There are two ways to potentially make this more convenient. One would be to add something to HTML to declare the loader to be used with particular script tags, which we've talked about, but I think we should wait on that until we have the base module system in place. The other would be to ship some of these loaders in browsers, but if I were the author of a compile-to-JS language, I wouldn't want to be chained to browser release and upgrade schedules.
Okay, that seems like a solution of sorts. Next question: does this mean that, for example, CoffeeScript programs will be able to use pure JS modules via the import statement? I.e., can the translated code contain an import statement? If yes, as I presume, good.
Yes, translation is to the full JS language. Of course, CoffeeScript's interface to the JS module system is up to Jeremy, not Dave and me, to decide.
Excellent.
Still, this is nowhere near the convenience of node's require() and the possibility of adding a new preprocessor just using require.registerExtension() and after that you have the same require() for a new language. I guess we'll see whether that convenience will outweigh the benefits of static resolution.
There's no tension between static resolution and allowing the loader to change dynamically -- I've posted lots of code using System.set, for example. Making the default system loader arbitrarily mutable has many other potential problems, though: it makes it harder for engine implementers to optimize, it potentially raises security holes, and it lets any code on the page change the meaning of all the rest of the code on the page. But fundamentally, that's about the design of the System loader, and we can improve that without having to change the fundamental aspects of the design (that's one of the nice aspects of the system).
I see. Very good. Let's hope the JS built-in modules win the race.
On Thu, Jul 5, 2012 at 8:56 AM, Kevin Smith <khs4473 at gmail.com> wrote:
Will heterogenous transpiling in a web app be supported? Can a JS module depend on a CoffeeScript file, and vice versa?
Right - Sam's example of having a specific CoffeeScript loader isn't going to actually work for this reason. Instead, we'd have to figure out which "to-JS" compiler to use inside of the translate hook.
    let maybeCoffeeLoader = new Loader(System, {
      translate(src, relURL, baseURL, resolved) {
        // If the file extension is ".coffee", use the coffee-to-JS compiler
        if (extension(relURL) === ".coffee") src = coffeeToJS(src);
        return src;
      }
    });
You could use the resolve hook in concert with the translate hook to create AMD-style plugin directives. It looks pretty flexible to me.
Exactly. And note that the compiled CoffeeScript code will be able to use import, which will again use the same loader, with the same translate hook. So however modules are added to CoffeeScript, those modules will be able to depend on other modules written in CS, JS, or anything else.
One question, though: branching on the file extension, as above, will not generally work. The source code might be served through a URL that does not have a file extension. On the web, though, we'll generally have access to a Content-Type header. In the current design, there doesn't appear to be a way to get that information.
This is an excellent suggestion. In general, there won't be a Content-Type header (or any other kind of metadata) in every JS environment, so the right thing may be to add an additional metadata parameter, and then have HTML specify that browser embeddings of JS should provide particular forms of that metadata. I'll talk with Dave about this once he's back.
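What such a metadata parameter might look like from the hook's side, simulated with plain functions -- the metadata shape, the translate signature, and the coffeeToJS stub are all assumptions, since nothing like this is specified yet.

```javascript
// Simulation of a translate step that dispatches on fetch metadata (e.g. the
// Content-Type header) rather than a file extension. Everything here is an
// assumption about an unspecified API.
const coffeeToJS = src => "/* compiled */ " + src; // stand-in compiler
function translate(src, metadata) {
  if (metadata && metadata.contentType === "text/coffeescript") {
    return coffeeToJS(src);
  }
  return src; // anything else is assumed to be plain JS
}

console.log(translate("x = 1", { contentType: "text/coffeescript" }));
// "/* compiled */ x = 1"
```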
One possibility for getting the Content-Type header would be to override the fetch hook and use cross-domain XHR, but that seems like a lot of duplicated code just to get data that's already being received by the browser.
Right -- we want to avoid programmers having to do the browsers' job.
On Thu, Jul 5, 2012 at 1:06 PM, Russell Leggett <russell.leggett at gmail.com> wrote:
So I guess my take on it is that any import statement should be illegal inside of eval. Looking at the proposal, that doesn't sound like it, though.
I don't think we should ban import from eval -- eval is a powerful feature that has been used to good effect in lots of ways, and we don't want to cripple it.
Let's take the "loadFromUrl" out of the equation.
    import foo from "baz"
    var code = 'import foo from "bar"';
    eval(code);
    console.log(foo);
There is a reason why import got special syntax, and it wasn't just so that it would be easier to type. Putting it inside eval eliminates any ability for static analysis to happen upfront during the parse before actually executing. The import dependency cannot be seen, and in this case there is a collision on "foo" which should have been detected at compilation time. I can think of a dozen other reasons why imports should not be allowed in eval, but that's just one which seems like a pretty clear problem.
This problem is already there if I write code as 'var foo = "bar"' (how's that for excessive quotation?). Direct eval is powerful and potentially scary already. We could specify the semantics of eval such that your example doesn't bind foo in code after the eval, but I don't think that has much to do with the other issues here.
On Thu, Jul 5, 2012 at 1:33 PM, Russell Leggett <russell.leggett at gmail.com> wrote:
Oh, I also meant to ask - I do have a question of my own. It seems so basic, but I can't figure it out. If I have a file that contains two modules - let's say in foo.js
------- foo.js ----------------
module Foo { export let x = 42; }
module Bar { export let y = 12; }
and I try to do:
import y from "foo.js"
What happens? Similar problem if I try to do
import "foo.js" as Foo
Am I incorrect in thinking that there can be more than one top level module? Or is it that an imported file is automatically a module, and therefore Foo and Bar are nested modules in this case?
When you write:

    import A from "B.js";

that's implicitly wrapping "B.js" in a module which you create (unnamed here), and then importing A from it. So yes, Foo and Bar are nested modules. It's important for the importer to control the outer module in the import.
Would I then have to say "export module Foo..." and later "import Foo from "foo.js;"?
Yes, that would be the correct way to write this example.
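Under that reading, a corrected foo.js and a client might look like this (draft-proposal syntax, which never shipped in this form, so treat it as a sketch only):

```
// foo.js
export module Foo { export let x = 42; }
export module Bar { export let y = 12; }

// client.js
import Foo from "foo.js";
// Foo.x is 42; Bar remains inaccessible unless exported and imported likewise
```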
On Thu, Jul 5, 2012 at 5:56 AM, Kevin Smith <khs4473 at gmail.com> wrote:
Will heterogenous transpiling in a web app be supported? Can a JS module depend on a CoffeeScript file, and vice versa?
Right - Sam's example of having a specific CoffeeScript loader isn't going to actually work for this reason. Instead, we'd have to figure out which "to-JS" compiler to use inside of the translate hook.
    let maybeCoffeeLoader = new Loader(System, {
      translate(src, relURL, baseURL, resolved) {
        // If the file extension is ".coffee", use the coffee-to-JS compiler
        if (extension(relURL) === ".coffee") src = coffeeToJS(src);
        return src;
      }
    });
You could use the resolve hook in concert with the translate hook to create AMD-style plugin directives. It looks pretty flexible to me.
Right, I do not believe file-extension-based loader branching is the right way to go -- see the multiple text-template transpiler uses for .html in AMD loader plugins. The module depending on the resource needs to choose the type of transpiler.
So as you mention, a custom resolver may need to be used. This means that there will be non-uniform dependency IDs floating around. That seems to lead to this chain of events:
- packages that use these special IDs need to communicate that the end developer needs to use a particular Module Loader implementation.
- the end developer will need to load a script file before doing any harmony module loading when using those dependencies.
- People end up using loaders like requirejs.
- Which leads to the dark side. At least a side I do not want to see.
It is also unclear to me what happens if package A wants a particular ModuleLoader 1 while package B wants ModuleLoader 2, and the two loaders resolve IDs differently.
This is why I favor "specify the transpiler in the ID; the transpiler is just another module with a specific API". If the default module loader understands something along the lines of "something!resource" to mean "call the "something" module as a transpiler to resolve and load "resource"", then module IDs stay uniform, and we can avoid both a tower of Babel around module IDs and the need for bootstrap script translators.
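The uniform-ID scheme James describes can be sketched as a small, runnable parser. `parsePluginId` is purely illustrative, invented for this example; it is not part of any proposal or loader API:

```javascript
// Split an AMD-style "plugin!resource" ID into its two parts.
// IDs without a "!" have no transpiler and resolve as plain modules.
function parsePluginId(id) {
  const bang = id.indexOf("!");
  if (bang === -1) return { plugin: null, resource: id };
  return { plugin: id.slice(0, bang), resource: id.slice(bang + 1) };
}

console.log(parsePluginId("coffee!views/home"));
console.log(parsePluginId("views/home"));
```

A loader built on this convention would load the `plugin` module first, then hand it the `resource` string to resolve and translate, so every dependency ID stays meaningful without a custom resolver.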
James
On 5 July 2012 19:33, Russell Leggett <russell.leggett at gmail.com> wrote:
Oh, I also meant to ask - I do have a question of my own. It seems so basic, but I can't figure it out. If I have a file that contains two modules - let's say in foo.js
```js
// foo.js
module Foo { export let x = 42; }
module Bar { export let y = 12; }
```
and I try to do:
import y from "foo.js"
What happens? Similar problem if I try to do
import "foo.js" as Foo
Am I incorrect in thinking that there can be more than one top level module? Or is it that an imported file is automatically a module, and therefore Foo and Bar are nested modules in this case? Would I then have to say "export module Foo..." and later "import Foo from "foo.js;"?
Yes, an imported file is a module body by itself, so Foo and Bar are nested modules. Hence your first import is an error, and the second one binds a module Foo with members Foo.Foo and Foo.Bar.
On 5 July 2012 19:06, Russell Leggett <russell.leggett at gmail.com> wrote:
So I guess my take on it is that any import statement should be illegal inside of eval.
Dave and I have been discussing modules vs eval a while ago. My take is that we should actually disallow any kind of module construct inside eval, because it is not clear what it should mean in general. In particular, it seems to be introducing local modules through the back door. The problem is actually amplified by 1JS, because you could write an old-school direct eval as in
```js
// non-strict mode
function f() {
  eval("module A { ... }");
  ...
}
```
and the only sane interpretation of this (under lexical scoping) would be that you have created a module in the local scope of f. Since local modules induce quite a number of additional complications, I don't think going that route is well-advised for the time being (we can always decide to relax the language later, once we have more experience with modules).
On 5 July 2012 19:34, Jussi Kalliokoski <jussi.kalliokoski at gmail.com> wrote:
The implication of banning import in eval is that modules written for an existing eval-based module loader can't adopt the new module system, quite possibly forcing that decision onto projects using those modules as well. How much this would slow down adoption, I can't tell. Maybe it's insignificant.
Another thing it means is that eval() would no longer do what it says on the box, i.e. evaluate an expression of JS, as the code inside eval() would be a whole different JS.
No, that is not actually true. It is merely a question of what you are parsing the eval string as. Roughly, it would be a function body, not a program. In ES5, there is no difference, but in ES6 there will be (because modules are global only), so we can make a choice. Since eval can occur locally, it is very natural to allow local declarations only (especially under 1JS, see my example above).