Composition of Uncoordinated Working Sets of Modules

# Kris Kowal (15 years ago)

Simple Modules are, in their present state, one step forward and two steps back from the previous generation of proposals. With this email, I intend to isolate these steps and propose a way to come out one or two steps ahead.

The one step forward comes from handling cyclic dependencies elegantly. If I am correct, this is the feature we gain from "second classness" and from not basing the module system on a better "eval". By being second class, this module system is able to internalize the code needed to link the imports and exports of the "working set" of modules. The loader proposal reintroduces the idea of a "better eval", being simply a hermetic evaluator that collects a working set of modules, links them, and executes them.

Rather than rotating my original proposal to include this feature, I will identify the features of my original proposal to which I'm attached and propose how to rotate Simple Modules to accommodate them.

But, taking a few steps back, let's look at some use-cases from prior art.

Java's packages and the present Simple Modules proposal share a particular feature. I call this "autonomous modules": modules that are "self-named", that is, modules that include their fully qualified name in their own source code. Rhino is a Java package that contains parsers and interpreters for JavaScript. Dojo's ShrinkSafe and the YUI Compressor use the parser components from Rhino, perform transformations on the token stream, and re-print the resultant token stream to produce a "minified" version of the original. For expedience, these projects forked Rhino instead of refactoring it to accommodate their needs.

That was their mistake, but our problem.

Because the Rhino codebase contains fully qualified names in every file, refactoring Rhino to carry and link against alternate names is onerous, and creating a parallel universe of names for the minifier fork is equally onerous, so these things are simply not done. As a result, it is not possible (or perhaps merely egregiously inconvenient) to compose a pure-Rhino system with either YUI Compressor or Dojo ShrinkSafe, much less all three at once.

Python's module loader solves this problem by reducing the coupling between modules and their names. It is possible in Python to express both relative and top-level module identifiers for the purpose of linking, and a module's own name is never expressed in code.

Relative module identifiers are used to link within coherent (internally consistent, designed in coordination) sets of modules, usually stored in the same hierarchy. Top-level module identifiers are used to link across "packages".

Python has a few weaknesses that CommonJS modules address.

1.) Least importantly, it is non-trivial for a module to discover its own top-level identifier.

2.) Before version 2.6, it was not possible to explicitly distinguish a relative module identifier from a top-level module identifier. If you imported module X, Python would first look for that module relatively (in the same directory) and then look for it at the top-level (specifically, in the first of the paths in the module search path that contained a directory that matched the name of the first term of the module identifier). This conflated the relative and top-level name spaces, such that if you gave a module the same name as any of the names used at the top-level, you would not be able to access the module of the same name in the top level from that directory. For example, it is not possible to import the top-level "csv" module from within the "my.formats.csv" module, because its own module would intercept "csv". This is a problem we solved in CommonJS by requiring relative module identifiers and top-level identifiers to be explicitly distinguished with "." or ".." in their first term. This is also the reason why we used "/" to delimit terms instead of dots. Python 2.6 and 3 introduce a similar notation, from which we draw our inspiration, with prefix dots, but the solution is crippled for reasons beyond the scope of this discussion.
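
For example, in CommonJS the leading "." marks a relative identifier, so a package-local "csv" module never shadows the top-level one:

var mine = require("./csv"); // sibling module within this package
var std = require("csv");    // top-level module, resolved by the loader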

3.) The top-level module name space is centrally managed. In Python, the "global name space problem" is deferred once by separating each file into a scope and moving the global name space to top-level identifiers. This means that there exist hazards of coordination when composing packages. In the context of Python packages, collisions at this level are reasonably improbable. CommonJS defers the global name space problem similarly, moving the global name space out to top-level module identifiers. However, the server-side JavaScript community is every bit as fragmented as the client-side JavaScript community, which is to say that there are several separate land-grabs in progress for the best top-level identifiers. The package mappings proposal [1] pushes the global name space problem out to URLs where it belongs.
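
Roughly, and only as an illustrative approximation of the mappings idea (the exact manifest format is defined by the proposal [1], not here), a package maps each external top-level term it uses onto a URL in its own descriptor, so the term is local to that package rather than globally coordinated:

{
    "name": "my-package",
    "mappings": {
        "qux": "http://example.com/packages/qux/"
    }
}

require("qux/quux"); // resolves within the mapped package, for this package only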

It's easy to take pot-shots at having a three-layer system, but at each of these layers, you get to balance brevity and sovereignty, and I think three layers are what you need. A module is generally the size of a chunk of code a single person can keep in their head. A package is generally the size of a chunk of code that can be coordinated by a team. The web is the size of a chunk of code that the world can collectively manage. At the module scope, you use variables to reference internally, and short module names to reference externally. At package scope you use module names to link internally, and URLs to link externally. With modules and lexical scoping, you get sovereignty of the variable names in your scope. With package mappings, each package gets sovereignty over its internal module name space.

The original Simple Modules proposal was only sufficient in the small. The Loaders proposal addresses the large. It gives "working sets of internally linked modules" sovereignty of their module name space, which is good. It does not yet enable linking to other working sets of internally consistent modules, wherein the composition problem lies. I propose the following revisions:

A.) Bifurcate the module name space between internal and external linkage.

import "foo"; // external
import "./bar"; // internal

B.) Support hierarchical nesting of internal modules with relative module identifiers.

foo.js
foo/
    bar.js
    baz.js

C.) Separate the name from the module declaration syntax; make a file a module production, and provide the means of creating anonymous modules and giving them to loaders. Make anonymous modules uninstantiable without the assistance of the module "linker". Permit the module linker to process whole files as module bodies with an externally assigned name. This would allow us to decouple fetching and bundling, permitting a variety of patterns there.

linker.set("foo/bar", module {
    import ./baz/*;
});

D.) Add something like package mappings to the loader, so a working set of internally consistent modules can reference an external working set of internally consistent modules managed by another loader, recursively.

var other = Linker();
other.set("bar", module {
    export a = 10;
});
var self = Linker();
self.set("foo", other);
self.set("main", module {
    import "foo/bar".{a};
    assert.equal(a, 10);
});
self.execute("main");

BONUS.) Allow the user of a module loader to instantiate the working set of modules with a controlled set of free variables available to all modules. This would allow us to contrive environments that look as though a previous script had left behind some global variables. This would greatly assist migration, and permit new forms of dependency injection.

linker.execute("foo/bar", {
    "assert": assert
});

BONUS BONUS.) Provide an API on the linker that assists developers in constructing bundles of the minimal working set of transitive dependencies from a particular starting module.

linker.dependencies("foo/bar");

Another feature of Simple Modules is that it preserves the "equivalence by concatenation" property of existing "script" tags, while liberating the scripts from being sensitive to the order in which they are concatenated. This is in conflict with the goal of removing autonomous module blocks.
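
To make this concrete, a hedged sketch: because modules are linked before any module body runs (as I understand the proposal), these two declarations can be concatenated in either order, even though A imports from B:

module A { import B.*; export function fa() { return fb(); } }
module B { export function fb() { return 42; } }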

The principal value of being able to concatenate scripts is that it reduces the "chattiness" of the interaction between the client and server over long-latency HTTP connections, which in turn reduces load times.

CSS can be concatenated. Images can be sprited. Scripts can be concatenated. All of these solutions for improving performance are based on an imperfect world where downloads are initiated in the order that they are discovered, which is itself tied down to the order in which they appear in the layout. There are two major solutions to this problem that would eliminate the need for bundling and concatenation. One of them is Alexander Limi's resource package proposal [2] and the other is Google's SPDY [3].

Alexander Limi proposes that a link tag with a relationship of "package" could be attached to a subtree of the URL space, permitting an archive to be downloaded before the resources are mentioned in source. This drops a bomb on the concatenation solution and decouples the load order from the layout order, since archives can be unpacked in stream, all with a progressive enhancement that would permit production and debugging to use mostly the same code, and permit older browsers to do business as usual with individual files.

SPDY allows the server and client to prioritize content intelligently in a layer between TCP and HTTP.

The technique of concatenation may be an anachronism by the time web developers are willing to publish Harmony modules to general web users.

However, it would still be good default behavior for a web page to construct a "working set" / "loader" / "linker" for a web page that is backed by modules fetched individually over HTTP and executed when it is possible to link the working set. Then, using reflective "Loader" or "Linker" API, it would be possible to create and use optimized bundles. Furthermore, package mappings could be accomplished if browsers provided a URL Linker/Loader that would automatically fetch and link modules on a particular URL tree.
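
A hedged sketch in terms of the hypothetical Linker API above; fetchBundle is an invented helper standing in for whatever transport delivers the bundle, and I am assuming the linker can also accept module source text:

var ids = linker.dependencies("main");     // transitive working set of "main"
fetchBundle(ids, function (sources) {      // hypothetical: one round trip for the whole set
    Object.keys(sources).forEach(function (id) {
        linker.set(id, sources[id]);       // pre-register each module body (assumes source text is accepted)
    });
    linker.execute("main");
});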

In summary, the problems worth solving include:

a.) balancing linkage brevity and uniqueness, with the goal of offloading the global name space problem to DNS, providing reliable sovereignty over name spaces controlled by:

* the developer of a single file
* the developer of a tree of files
* domain owners
* IANA

b.) elimination of accidental global variables

c.) the manual explication of transitive dependencies

d.) the manual linearization of execution and linkage

e.) mutual dependency

f.) the elimination of the need for build steps during development and debugging

g.) decoupling the utterance of dependencies from the order and timing in which dependencies are transported in production

h.) isolation of scopes

i.) isolation of internally consistent modules

j.) reliable linkage to independently developed, internally consistent working sets of modules

Simple Modules will assist individual designers of coherent groups of name spaces for the purpose of producing single internally consistent applications and APIs.

Simple Modules, at present, will not sufficiently assist people constructing applications and APIs by composing non-coherent groups of name spaces produced by non-cooperating groups of developers.

In any case, that's my two bucks, Kris Kowal

[1] wiki.commonjs.org/wiki/Packages/Mappings/B
[2] limi.net/articles/resource-packages
[3] www.chromium.org/spdy/spdy

# David Herman (15 years ago)

Thanks for your thoughts; I'll keep reading but I do want to respond to a couple points that I don't think are quite accurate.

The one step forward comes from handling cyclic dependencies elegantly. If I am correct, this is the feature we gain from "second classness" and from not basing the module system on a better "eval".

I don't agree with this summary. First of all, you don't have to base any module system on eval. By keeping modules second class, we get a number of benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies can be handled nicely in a first-class module system as well.) One of the benefits of second-class modules is the ability to manage static bindings; for example, import m.*; is statically manageable. Allen has made some good points about how second-class modules are a good fit for the programmer's mental model of statically delineated portions of code. At any rate, cyclic dependencies are not the central point.

The loader proposal reintroduces the idea of a "better eval", being simply a hermetic evaluator that collects a working set of modules, links them, and executes them.

It's more than eval -- e.g., it provides load hooks to manage resource fetching and even allow transformation -- but yes, it does provide a more controlled eval.

Because all of the Rhino codebase contains fully qualified names in every file, refactoring Rhino to contain and link against alternate names is onerous, and alternately creating a parallel universe for the minifier fork is onerous, so these things are simply not done.

I don't see how this problem applies to simple modules. Because modules are referred to as bound names, rather than as fully-qualified names or URL's, it's easier for separate projects to share common components. They can even share the same module under different names, since module names can be rebound (module NewName = OldName) and modules in separate files can be loaded with different names (module Foo = load '...some url...').

The original Simple Modules proposal was only sufficient in the small. The Loaders proposal addresses the large.

That's not true. Loaders are about isolation. I agree with you that conceptually, there's a level of granularity that consists of a set of modules, which is often what we mean by "package," at least in common usage (if not the particular meaning of that term in a given language). But the idea of nested/hierarchical modules is that modules scale to the large by simply making modules that consist of nested modules.

It does not yet enable linking to other working sets of internally consistent modules

This is also not true; the ability to attach modules to module loaders (as well as the dynamic evaluation methods) makes it possible for separate module loaders to communicate. However, loaders aren't about linking multiple working sets, but rather providing isolated subspaces. (One use case I sometimes use is an IDE implemented in ES, that wants to run other ES programs without them stepping on its toes.)

Another feature of Simple Modules is that it preserves the "equivalence by concatenation" property of existing "script" tags, while liberating the scripts from being sensitive to the order in which they are concatenated. This is in conflict with the goal of removing autonomous module blocks.

I don't quite understand this, and I'm glad you bring up the issue of latency and plugging into the browser semantics. I believe we're at least partway to the answer, but I won't believe we've solved it till I really see it go all the way through. That said, I am also not convinced that a) the <script> tag is going away any time soon, or that b) we necessarily need to solve these problems in the context of a module system.

Simple Modules, at present, will not sufficiently assist people constructing applications and APIs by composing non-coherent groups of name spaces produced by non-cooperating groups of developers.

I'm not convinced of this point. If someone doesn't want to share their code, there's nothing we can do to make them do so. But if they do want to, the simple modules proposal explicitly solves the problems of Java-like systems where everything is hard-wired. Instead, modules are given lexically scoped names, and can even be deployed without naming themselves; both of these features make it far easier to share code between different teams.

# Kris Kowal (15 years ago)

On Fri, Jun 4, 2010 at 5:17 PM, David Herman <dherman at mozilla.com> wrote:

By keeping modules second class, we get a number of benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies can be handled nicely in a first-class module system as well.) One of the benefits of second-class modules is the ability to manage static bindings; for example, import m.*; is statically manageable. Allen has made some good points about how second-class modules are a good fit for the programmer's mental model of statically delineated portions of code. At any rate, cyclic dependencies are not the central point.

As far as I can tell, Simple Modules only changes the composition hazard introduced by "import m.*" from a run-time hazard to a link-time hazard.

Consider modules A, B, and C, developed without coordination by Alice, Bob, and Charlie respectively, where Charlie updates his version of C in the following manner:

Before:

A.js
    import B.*;
    import C.*;
    assert.equal(foo, 10);
B.js
    export foo = 10;
C.js
    export bar = 20;

After:

A.js
    import B.*;
    import C.*; // link-error that requires Alice's attention
    assert.equal(foo, 10);
B.js
    export foo = 10;
C.js
    export foo = 30; // change introduced by Charlie
    export bar = 20;

This being the case, I consider this neither a feature nor a curse. It is worth noting, however, that this particular hazard has bitten me in practice, that CommonJS wisely omitted this feature against my wishes, and that I have not had reason to complain since.

The loader proposal reintroduces the idea of a "better eval", being simply a hermetic evaluator that collects a working set of modules, links them, and executes them.

It's more than eval -- e.g., it provides load hooks to manage resource fetching and even allow transformation -- but yes, it does provide a more controlled eval.

This I had not noticed before. I now see that the Loader constructor accepts a handler that has an opportunity to either provide source code, redirect, or reject a module. This is great. It also accepts the global record that I proposed as a BONUS, which is also full of win.

That's not true. Loaders are about isolation. I agree with you that conceptually, there's a level of granularity that consists of a set of modules, which is often what we mean by "package," at least in common usage (if not the particular meaning of that term in a given language). But the idea of nested/hierarchical modules is that modules scale to the large by simply making modules that consist of nested modules.

Perhaps you could point to or provide some sample code that illustrates collecting modules. My guess is that it would require a lot of explicit linkage, rather than harnessing the hierarchy in which the modules are organized.

This is also not true; the ability to attach modules to module loaders (as well as the dynamic evaluation methods) makes it possible for separate module loaders to communicate. However, loaders aren't about linking multiple working sets, but rather providing isolated subspaces. (One use case I sometimes use is an IDE implemented in ES, that wants to run other ES programs without them stepping on its toes.)

Code examples would be insightful.

Another feature of Simple Modules is that it preserves the "equivalence by concatenation" property of existing "script" tags, while liberating the scripts from being sensitive to the order in which they are concatenated.  This is in conflict with the goal of removing autonomous module blocks.

I don't quite understand this,

I hope that this is because I have misinterpreted the way module names are scoped.

Simple Modules, at present, will not sufficiently assist people constructing applications and APIs by composing non-coherent groups of name spaces produced by non-cooperating groups of developers.

The simple modules proposal explicitly solves the problems of Java-like systems where everything is hard-wired. Instead, modules are given lexically scoped names, and can even be deployed without naming themselves; both of these features make it far easier to share code between different teams.

Perhaps I am misunderstanding the scope of a module name. Is it not true that a module is available by its self declared name in all modules that share a loader? Is it actually possible to bind a single module name that provides access to all of the modules in another loader?

module X = load("http://example.com/api");
module Y = X.Y; // is this possible?

Is it possible for MRL's to be CommonJS top-level and relative module identifiers? If that's the case, is it possible for the loader handler to forward a request for a module to another loader?

var externalLoaders = {};
Loader(function (id, request) {
    var parts = id.split("/");
    if (parts[0] === "." || parts[0] === "..") {
        // what is the identifier of the module from which this
        // module was requested?  I need that to resolve the
        // identifier of the request module.
        when(fetch(id),
            request.provideSource,
            request.reject
        );
    } else {
        var external = externalLoaders[parts[0]];
        request.provideLoader(external, parts.slice(1).join("/"));
    }
})

I think it might be best to organize the syntax around MRL's rather than local short-names. MRL's can be reasonably short if they're permitted to be relative paths, which requires the module loader handler to receive the MRL of the requesting module.

Kris Kowal

# Sam Tobin-Hochstadt (15 years ago)

On Fri, Jun 4, 2010 at 9:48 PM, Kris Kowal <kris.kowal at cixar.com> wrote:

On Fri, Jun 4, 2010 at 5:17 PM, David Herman <dherman at mozilla.com> wrote:

By keeping modules second class, we get a number of benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies can be handled nicely in a first-class module system as well.) One of the benefits of second-class modules is the ability to manage static bindings; for example, import m.*; is statically manageable. Allen has made some good points about how second-class modules are a good fit for the programmer's mental model of statically delineated portions of code. At any rate, cyclic dependencies are not the central point.

As far as I can tell, Simple Modules only changes the composition hazard introduced by "import m.*" from a run-time hazard to a link-time hazard.

In your example, certainly the earlier error is a benefit of our proposal. But the really key benefit is this:

module M { export x = 7; }

module N {
    M.y + 3; // an error - just like an unbound variable in ES5 strict
}

This can be an early error because we statically know a lot about modules. This is good for programmers, because it supports early errors, and also good for compiler writers, since it supports optimization.

That's not true. Loaders are about isolation. I agree with you that conceptually, there's a level of granularity that consists of a set of modules, which is often what we mean by "package," at least in common usage (if not the particular meaning of that term in a given language). But the idea of nested/hierarchical modules is that modules scale to the large by simply making modules that consist of nested modules.

Perhaps you could point to or provide some sample code that illustrates collecting modules.  My guess is that it would require a lot of explicit linkage, rather than harnessing the hierarchy in which the modules are organized.

Because Simple Modules is based on lexical scope, collecting modules is as simple as collecting objects into a larger object:

module Container {
    module Sub1 = load "example.com/foo.js";
    module Sub2 = Other.InnerModule;
    module Sub3 {
        module SubSub4 = load "example.org/bar.js";
    }
}

This is also not true; the ability to attach modules to module loaders (as well as the dynamic evaluation methods) makes it possible for separate module loaders to communicate. However, loaders aren't about linking multiple working sets, but rather providing isolated subspaces. (One use case I sometimes use is an IDE implemented in ES, that wants to run other ES programs without them stepping on its toes.)

Code examples would be insightful.

Currently, in web-based IDEs such as Bespin, code being developed has the ability to muck with the internal state of the IDE and the overall page, which is usually undesirable. With module loaders, simply by not sharing access to the DOM or other internal state with the code being developed, this would be prevented.
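
A hedged sketch of the idea; the constructor arguments and evaluation method here are illustrative (following the handler-plus-global-record shape mentioned elsewhere in this thread, not the proposal's exact signatures), and ide.workspace.fetch is an invented helper:

// Give the sandbox only the globals we choose: no `document`, no IDE internals.
var sandboxGlobals = { console: console };
var sandbox = Loader(function (mrl, request) {
    // serve the user's modules from the project workspace only (hypothetical helper)
    ide.workspace.fetch(mrl, request.provideSource, request.reject);
}, sandboxGlobals);
// Evaluate the user's program inside the sandbox; it cannot reach the page or the IDE.
sandbox.eval(editorBuffer.text);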

Simple Modules, at present, will not sufficiently assist people constructing applications and APIs by composing non-coherent groups of name spaces produced by non-cooperating groups of developers.

The simple modules proposal explicitly solves the problems of Java-like systems where everything is hard-wired.  Instead, modules are given lexically scoped names, and can even be deployed without naming themselves; both of these features make it far easier to share code between different teams.

Perhaps I am misunderstanding the scope of a module name.  Is it not true that a module is available by its self declared name in all modules that share a loader?  Is it actually possible to bind a single module name that provides access to all of the modules in another loader?

module X = load("example.com/api");
module Y = X.Y; // is this possible?

Yes, if that URL has a module Y in the code that it provides. For example, if that URL produces the code:

module Y { ... }
module Z { ... }

Then it's certainly possible.

Is it possible for MRL's to be CommonJS top-level and relative module identifiers?

We've avoided committing to particular syntax for MRLs so far, although the discussion at the last meeting tended toward the following syntax:

MRL = URL | RelativeURL | "@"Identifier

with the last case for modules provided by the host environment.

If that's the case, is it possible for the loader handler to forward a request for a module to another loader?

var externalLoaders = {};
Loader(function (id, request) {
    var parts = id.split("/");
    if (parts[0] === "." || parts[0] === "..") {
        // what is the identifier of the module from which this
        // module was requested?  I need that to resolve the
        // identifier of the request module.
        when(fetch(id),
            request.provideSource,
            request.reject
        );
    } else {
        var external = externalLoaders[parts[0]];
        request.provideLoader(external, parts.slice(1).join("/"));
    }
})

This could work, certainly. Note that there's no particular need for the externalLoaders to be ModuleLoaders themselves - they could have whatever API is useful in this context.

I think it might be best to organize the syntax around MRL's rather than local short-names.  MRL's can be reasonably short if they're permitted to be relative paths, which requires the module loader handler to receive the MRL of the requesting module.

This is one thing we've resolutely tried to avoid. A key aspect of our module system is that it gets sharing right - if you import a module in two different places, that module is shared. This requires knowing when you import "the same" module. In most languages, this ultimately comes down to some filesystem-based comparison, which we don't have the luxury of on the web. MRLs don't support a very good equality operation. That's why we've gone simply with names, with a very simple equality.
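
For instance, in the notation used elsewhere in this thread (a sketch, not normative syntax), one bound name gives every importer the same instance:

module JQ = load "http://example.com/jquery.js";  // loaded and instantiated once
module UI { import JQ.*; /* uses $ */ }
module Charts { import JQ.*; /* sees the same JQ instance as UI */ }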

# Kris Kowal (15 years ago)

On Sat, Jun 5, 2010 at 3:40 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

On Fri, Jun 4, 2010 at 9:48 PM, Kris Kowal <kris.kowal at cixar.com> wrote:

On Fri, Jun 4, 2010 at 5:17 PM, David Herman <dherman at mozilla.com> wrote:

By keeping modules second class, we get a number of benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies can be handled nicely in a first-class module system as well.) One of the benefits of second-class modules is the ability to manage static bindings; for example, import m.*; is statically manageable. Allen has made some good points about how second-class modules are a good fit for the programmer's mental model of statically delineated portions of code. At any rate, cyclic dependencies are not the central point.

As far as I can tell, Simple Modules only changes the composition hazard introduced by "import m.*" from a run-time hazard to a link-time hazard.

In your example, certainly the earlier error is a benefit of our proposal.

I strongly disagree. Either Alice is at fault for using "import m.*", Charlie is at fault for altering his API, or neither Alice nor Charlie is at fault because they were merely and earnestly using the features of the underlying system, in which case the system is at fault. Alice should be able to trust the features of her module system. Charlie should be able to augment his API without breaking his dependents.

But the really key benefit is this:

module M { export x = 7; }

module N {
    M.y + 3; // an error - just like an unbound variable in ES5 strict
}

This feature does not preclude the omission of the import * syntax variant.

This can be an early error because we statically know a lot about modules.  This is good for programmers, because it supports early errors, and also good for compiler writers, since it supports optimization.

I agree. I do not think that any of my objections preclude statically linking name spaces.

Because Simple Modules is based on lexical scope, collecting modules is as simple as collecting objects into a larger object:

module Container {
    module Sub1 = load "example.com/foo.js";
    module Sub2 = Other.InnerModule;
    module Sub3 {
        module SubSub4 = load "example.org/bar.js";
    }
}

And I presume that usage of submodules is:

module Container = load("the script above");
module X = Container.Sub1.SubSub4;

If that's the case, please consider making the nested module export explicit:

module Container {
    export module Contained {
    }
}

I am not attached to the name spaces feature of the Simple Modules proposal, but it's not worth fighting.

This is also not true; the ability to attach modules to module loaders (as well as the dynamic evaluation methods) makes it possible for separate module loaders to communicate. However, loaders aren't about linking multiple working sets, but rather providing isolated subspaces. (One use case I sometimes use is an IDE implemented in ES, that wants to run other ES programs without them stepping on its toes.)

Code examples would be insightful.

Currently, in web-based IDEs such as Bespin, code being developed has the ability to muck with the internal state of the IDE and the overall page, which is usually undesirable.  With module loaders, simply by not sharing access to the DOM or other internal state with the code being developed, this would be prevented.

I understand and wholeheartedly agree with the "why". I do not understand "how". Code examples would be insightful.

Perhaps I am misunderstanding the scope of a module name.  Is it not true that a module is available by its self declared name in all modules that share a loader?  Is it actually possible to bind a single module name that provides access to all of the modules in another loader?

module X = load("example.com/api");
module Y = X.Y; // is this possible?

Yes, if that URL has a module Y in the code that it provides.  For example, if that URL produces the code:

module Y { ... }
module Z { ... }

Then it's certainly possible.

Developers should not need to concatenate subsystems to construct packages. It should be possible for one to connect loaders to other loaders.

Is it possible for MRL's to be CommonJS top-level and relative module identifiers?

We've avoided committing to particular syntax for MRLs so far, although the discussion at the last meeting tended toward the following syntax:

MRL = URL | RelativeURL | "@"Identifier

This notation seems adequate. The question remains whether relative URL's are supported by the proposed loader API. It's my impression that it is not presently possible for a loader handler to observe the MRL of the module that requested the module. If that's the case, it would be the responsibility of the loader itself to resolve MRL's. It would be better if that responsibility were deferred to the loader handler.

If that's the case, is it possible for the loader handler to forward a request for a module to another loader?

var externalLoaders = {};
Loader(function (id, request) {
    var parts = id.split("/");
    if (parts[0] === "." || parts[0] === "..") {
        // what is the identifier of the module from which this
        // module was requested?  I need that to resolve the
        // identifier of the request module.
        when(fetch(id),
            request.provideSource,
            request.reject
        );
    } else {
        var external = externalLoaders[parts[0]];
        request.provideLoader(external, parts.slice(1).join("/"));
    }
})

This could work, certainly.  Note that there's no particular need for the externalLoaders to be ModuleLoaders themselves - they could have whatever API is useful in this context.

It would still be nice if you could just connect loaders to loaders so they could share the results of compilation and static analysis instead of having to communicate entirely with source code.

I think it might be best to organize the syntax around MRL's rather than local short-names.  MRL's can be reasonably short if they're permitted to be relative paths, which requires the module loader handler to receive the MRL of the requesting module.

This is one thing we've resolutely tried to avoid.  A key aspect of our module system is that it gets sharing right - if you import a module in two different places, that module is shared.  This requires knowing when you import "the same" module.  In most languages, this ultimately comes down to some filesystem-based comparison, which we don't have the luxury of on the web.  MRLs don't support a very good equality operation. That's why we've gone simply with names, with a very simple equality.

This strikes me as six of one and half a dozen of the other. MRL's are just as comparable as your short names and neither guarantee source equivalence or semantic equivalence. It's really not worth trying, even on a file system. If you want the same thing, you either have to use the same name or the same MRL.

I'm not convinced that using short names will be or need to be the common case. In CommonJS, "MRL's" are sufficient. It's certainly not worth the complexity cost to have two layers of naming.

Kris Kowal

# Brendan Eich (15 years ago)

On Jun 5, 2010, at 2:17 PM, Kris Kowal wrote:

On Sat, Jun 5, 2010 at 3:40 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

On Fri, Jun 4, 2010 at 9:48 PM, Kris Kowal <kris.kowal at cixar.com> wrote:

On Fri, Jun 4, 2010 at 5:17 PM, David Herman <dherman at mozilla.com> wrote:

By keeping modules second class, we get a number of benefits, not just handling cyclic dependencies. (In fact, cyclic dependencies can be handled nicely in a first-class module system as well.) One of the benefits of second-class modules is the ability to manage static bindings; for example, import m.*; is statically manageable. Allen has made some good points about how second-class modules are a good fit for the programmer's mental model of statically delineated portions of code. At any rate, cyclic dependencies are not the central point.

As far as I can tell, Simple Modules only changes the composition hazard introduced by "import m.*" from a run-time hazard to a link-time hazard.

In your example, certainly the earlier error is a benefit of our proposal.

I strongly disagree.

Whoa -- I don't see how anyone can disagree that early error is better than a runtime error, if there is an error case at all. It seems to me you're instead arguing that no such error should be possible because import m.* should not be supported -- that you're arguing against any import-everything-that's-exported feature. Right?

Either Alice is at fault for using "import m.*", Charlie is at fault for altering his API, or neither Alice nor Charlie is at fault because they were merely and earnestly using the features of the underlying system, in which case the system is at fault. Alice should be able to trust the features of her module system. Charlie should be able to augment his API without breaking his dependents.

The only reason import m.* is in the proposal is that when one is using modules in one's own (definitely including the single-author case, but also the single-curator and same-origin-hosted case) larger program, where the hazard of new names can be controlled by testing and auditing, then lack of import m.* is a royal pain. This is especially true during rapid prototyping.

If * imports were considered too dangerous because they might be abused at scale and across administrative boundaries, then they could be dropped. This is a sideshow -- it doesn't get at the essential issue of second- vs. first-class module system, or other seeming bones of contention.

But I don't think there's a consensus to drop * imports. Lots of things in JS can be abused, but are not so hazardous they should be removed. True, 'with' is gone in ES5 strict, eval is tamed somewhat. Perhaps there's a case for a future stricter strict mode forbidding * imports, but it's not obviously worth the added modal complexity.

But the really key benefit is this:

module M { export x = 7; }

module N {
    M.y + 3; // an error - just like an unbound variable in ES5 strict
}

This feature does not preclude the omission of the import * syntax variant.

Right -- no one is saying * imports must be part of the system to keep other aspects of the design working. And singling out * imports does not argue against the whole design.

This can be an early error because we statically know a lot about modules. This is good for programmers, because it supports early errors, and also good for compiler writers, since it supports optimization.

I agree. I do not think that any of my objections preclude statically linking name spaces.

Ok, then it seems you did agree earlier too, that an early error is better than a runtime error. Whew!

# Kris Kowal (15 years ago)

On Sat, Jun 5, 2010 at 2:41 PM, Brendan Eich <brendan at mozilla.com> wrote:

I strongly disagree.

Whoa -- I don't see how anyone can disagree that early error is better than a runtime error, if there is an error case at all. It seems to me you're instead arguing that no such error should be possible because import m.* should not be supported -- that you're arguing against any import-everything-that's-exported feature. Right?

Yes, we're in agreement that an early link-error is better than a run-time error. You are also correct that my argument is that "import *" should not be supported. It is also true that this is not my primary objection and it does not poison the design; it is a side-show.

It is also true that this is a value-judgement between the convenience of the feature when used responsibly by wise and scholarly programmers within a system of modules designed in coordination, and the value of protecting programmers from the hazard at the cost of that convenience.

CommonJS put this to a vote. There was support on both sides, but the feature was sacrificed to get unanimous support. Only Tom Robinson called for the "include" function (import *) in the final show of hands, but cast his +1 without the feature.

groups.google.com/group/commonjs/browse_thread/thread/d2dc85a2725992be/4a7fb3943fdbbbbd?lnk=gst&q=modules+include#4a7fb3943fdbbbbd

It is likely that it is not possible to get a large enough group of people either in support of or against the feature to reach unanimity, and that nobody cares enough either way to block ratification. This is far more likely than that there is consensus either way.

The only reason import m.* is in the proposal is that when one is using modules in one's own (definitely including the single-author case, but also the single-curator and same-origin-hosted case) larger program, where the hazard of new names can be controlled by testing and auditing, then lack of import m.* is a royal pain. This is especially true during rapid prototyping.

Yeah, I've been on both sides of the debate. I got bitten in the ass when I was using "from django.models import *", which is naturally a case of using a module in foreign control (which you note is not proper usage) but also very compelling because of the royal pain you mention. I think it would be good to put this issue to vote. I think we're in agreement about the nature of the trade-off and we wouldn't want to make Buridan donkeys of ourselves.

I also think we should get a show of hands on whether we should try to decouple "name spaces" (named module clauses per the simple modules proposal) and "modules" (as linked with a loader), and whether we need both layers.

Meanwhile, I would still like to see examples of how to compose working sets of modules with other working sets of modules that were not designed in coordination.

Kris Kowal

# Kris Kowal (15 years ago)

Supposing that aQuery and bQuery are implemented by independent uncoordinated authors.

aQuery.js

module $ {
}

bQuery.js

module $ {
}

If my interpretation is correct, these cannot be combined in a single "Application".

<script type="harmony" src="aQuery.js"></script>
<script type="harmony" src="bQuery.js"></script>

One solution to this problem is to convince A and B to coordinate, which I've hitherto inferred was the only solution supported by Simple Modules, in which case they share a fault with Java.

Is this a solution?

<script type="harmony">
    module A_ = load("aQuery.js");
    module A = A_.$;
    module B_ = load("bQuery.js");
    module B = B_.$;
</script>

With this example, I am inferring that

  • That the web-browser's loader knows the location of the current page, so it can resolve the MRL based on that location.
  • "load" can only be used in the context of an importing module assignment.
  • conceptually, if not at run-time, "load" returns a module instance that contains the top-level modules of the given script.
  • that the top-level modules of the remote script are not registered as top-level modules of the local application, unlike co-DOM scripts.
  • for a script to have importable bindings, these must exist in a module block of the loaded script.
  • there is no notation for destructuring a module from a loaded sub-module
  • a script is not a module, so exports cannot be used at the top level.

If that's the case, I would like to refine this approach, such that loaded modules can have exports at the top level. This would permit the function export.

aQuery.js

export var $ = function () {
};

bQuery.js

export var $ = function () {
};

link.js

module A = load("aQuery.js");
module B = load("bQuery.js");

It would also be good for there to be a way to bind $ without binding a module.

const A = load("aQuery.js").$;
const B = load("bQuery.js").$;

This obviously puts a load call outside an import clause, which I infer is not possible with the present proposal.

Is it possible to decouple name spaces from loaded modules?

Another point of interest is transitive loads. I do not think that there is a provision in the specification that would permit load directives to be resolved relative to the location or MRL of the module in which the load call is declared.

scripts/sazzle.js
    module Sazzle {
    }

scripts/aQuery.js
    module Sazzle_ = load("sazzle.js"); // relative to "scripts/aQuery.js"
    module Sazzle = Sazzle_.Sazzle;
    module aQuery {
        export $ = function () {
        };
    }

link.js
    module aQuery_ = load("scripts/aQuery.js");
    const $ = aQuery_.aQuery.$;

Kris Kowal

# Sam Tobin-Hochstadt (15 years ago)

On Sun, Jun 6, 2010 at 2:00 PM, Kris Kowal <kris.kowal at cixar.com> wrote:

Supposing that aQuery and bQuery are implemented by independent uncoordinated authors.

aQuery.js

module $ {    }

bQuery.js

module $ {    }

If my interpretation is correct, these cannot be combined in a single "Application".

<script type="harmony" src="aQuery.js"></script>    <script type="harmony" src="bQuery.js"></script>

That's not correct; the second module would simply shadow the first.

One solution to this problem is to convince A and B to coordinate, which I've hitherto inferred was the only solution supported by Simple Modules, in which case they share a fault with Java.

That's not the only solution.

Is this a solution?

<script type="harmony">        module A_ = load("aQuery.js");        module A = A_.$;        module B_ = load("bQuery.js");        module B = B_.$;    </script>

Yes, although it's even simpler to just skip the `A_' bits, and refer to A.$ and B.$.

With this example, I am inferring that

  • That the web-browser's loader knows the location of the current  page, so it can resolve the MRL based on that location.

We have explicitly avoided any commitment to the semantics of MRLs. Certainly we don't want to mandate how path lookup or remote fetching works, since this may not be applicable to all ES host environments.

  • "load" can only be used in the context of an importing module  assignment.

'load' is a relative keyword (I'm forgetting the exact term here) which is specified in the ModuleDeclaration production on the simple_modules wiki page.

  • conceptually, if not at run-time, "load" returns a module instance  that contains the top-level modules of the given script.
  • that the top-level modules of the remote script are not registered  as top-level modules of the local application, unlike co-DOM  scripts.

Right.

module M = load "foo.js";

creates a new module M with foo.js as its contents.

  • for a script to have importable bindings, these must exist in a  module block of the loaded script.

I don't know what this means.

  • there is no notation for destructuring a module from a loaded  sub-module

If a module is statically bound, you can access its value exports with 'import' or its component modules with 'module M = ...'
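
Restated in the notation used in this thread (assuming, for illustration, that foo.js declares a module Inner):

module M = load "foo.js";  // bind the loaded module
module Inner = M.Inner;    // bind one of its component modules
import M.*;                // bind its value exports as local variables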

  • a script is not a module, so exports cannot be used at the top  level.

The ScriptElement production explicitly includes ImportStatement, so modules can be imported by scripts.

If that's the case, I would like to refine this approach, such that loaded modules can have exports at the top level.  This would permit the function export.

aQuery.js

export var $ = function () {    };

bQuery.js

export var $ = function () {    };

link.js

module A = load("aQuery.js");
module B = load("bQuery.js");

This is exactly what the proposal already specifies.

It would also be good for there to be a way to bind $ without binding a module.

const A = load("aQuery.js").$;
const B = load("bQuery.js").$;

It's possible to use the module loader API to do this, slightly more verbosely. But why? If you say:

module A = load "aQuery.js";

then A.$ is already available for use in expression contexts.

This obviously puts a load call outside an import clause, which I infer is not possible with the present proposal.

You keep saying 'infer'. Is the grammar Dave has written on the wiki page unclear?

Is it possible to decouple name spaces from loaded modules?

What do you mean by 'name spaces'?

Another point of interest is transitive loads.  I do not think that there is a provision in the specification that would permit load directives to be resolved relative to the location or MRL of the module from which load call is declared.

scripts/sazzle.js
    module Sazzle {
    }

scripts/aQuery.js
    module Sazzle_ = load("sazzle.js"); // relative to "scripts/aQuery.js"
    module Sazzle = Sazzle_.Sazzle;
    module aQuery {
        export $ = function () {
        };
    }

link.js
    module aQuery_ = load("scripts/aQuery.js");
    const $ = aQuery_.aQuery.$;

Again, we haven't specified the precise semantics of MRL resolution, which will depend both on the host environment and the current module loader. Your example might point to a need to augment the module loader api with information on 'load' calls specifying what module the 'load' occurs in.

# Kris Kowal (15 years ago)

On Mon, Jun 7, 2010 at 8:37 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

On Sun, Jun 6, 2010 at 2:00 PM, Kris Kowal <kris.kowal at cixar.com> wrote: ...

Most of this is good clarification, particularly that load interacts with the exports of the foreign script's implied, anonymous module scope. The grammar is clear. It would be good for this to be expressed in one of the examples, and for it to be clarified in the description of semantics that every script is also an anonymous module from which the exports are only accessible through the lexical scope "shadowing" (I assume) and by being bound to a module through a "load" expression.

Right.

module M = load "foo.js";

creates a new module M with foo.js as its contents.

This is a point that Ihab clarified for me yesterday evening that merits bold and emphasis: loaded modules are not singletons. You do this to avoid having to compare MRL's for equivalence, particularly to avoid having to define equivalence given the potential abundance of edge cases.

http://example.com/module?a=10&b=20
http://example.com/module?b=20&a=10

It's worth noting, and please dismiss the implication that the approach is necessarily proper and correct, that this is not a problem for CommonJS modules because the specification limits module identifiers to a very small subset of expressible URLs and defers the issue of URLs to the packaging layer, wherein the semantics are similar to those put forth here.

  • for a script to have importable bindings, these must exist in a  module block of the loaded script.

I don't know what this means.

Ihab clarified that this is not true. This is a re-statement of my mis-perception that there is no implicit, anonymous, top-level module in a script and that therefore there cannot be exports outside explicit module blocks. I stand contentedly corrected on this point.

It's possible to use the module loader API to do this, slightly more verbosely.   But why?  If you say:

module A = load "aQuery.js";

then A.$ is already available for use in expression contexts.

I can make the same argument about "import *". If I "import A", I can access its contents as "A.$". To permit destructuring on all import expressions would be consistent philosophically.

link.js
    module aQuery_ = load("scripts/aQuery.js");
    const $ = aQuery_.aQuery.$;

Your example might point to a need to augment the module loader api with information on 'load' calls specifying what module the 'load' occurs in.

Exactly. The Narwhal loader receives an id and a baseId from require(id) calls. Each module gets a fresh "require" effectively bound on the baseId. I think that the loader handler needs to receive the base MRL as an argument or as part of the request object.
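
For illustration only, roughly the resolution a CommonJS-style loader can perform with that information (not Narwhal's actual code):

function resolve(id, baseId) {
    if (id.charAt(0) !== ".")
        return id;                              // top-level identifier: use as-is
    var terms = baseId.split("/").slice(0, -1); // directory of the requesting module
    id.split("/").forEach(function (term) {
        if (term === "..")
            terms.pop();
        else if (term !== ".")
            terms.push(term);
    });
    return terms.join("/");
}
// resolve("./baz", "foo/bar")  -> "foo/baz"
// resolve("../qux", "foo/bar") -> "qux"
// resolve("util", "foo/bar")   -> "util"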

Another thing that Ihab clarified which merits a full section on the wiki is the dynamic scoping of lexical module names. Ihab pointed out that, when a script is loaded, it inherits the module scope chain of the declarer, permitting a certain degree of "aspect oriented" dependency provision, or "external linkage". This depends on an understanding that each "load" always instantiates a module and cannot ever rebind an existing module (a singleton).

loaded.js
    export function poof(el) { $.poof(el); }

aQuery.js
    export function poof(el) {
        // one implementation
    }

bQuery.js
    export function poof(el) {
        // an alternate implementation
    }

a.js
    module $ = load("aQuery.js");
    module X = load("loaded.js");

b.js
    module $ = load("bQuery.js");
    module X = load("loaded.js");

linkage.js
    module A = load("a.js");
    module B = load("b.js");

Note that in this example, having loaded linkage.js, there are two instances of "loaded.js", each of which sees "$" as "aQuery" and "bQuery" respectively. They also see "X", "A", and "B" following the dynamic scope chain.

This is something I have not considered. It would be good to do a write-up on what use-cases you have in mind for this feature.

Also note that, because I was not aware of this feature, I've been using the terms "external linkage" and "internal linkage" differently, in the context of my first email on this thread. I used the terms "internal" and "external" to refer to modules from a given working set of modules from one coordinated design (a "package"), and to modules outside the package, in other packages. Sorry for the confusion.

At this point I have been convinced that it is possible with this proposal to integrate uncoordinated working sets of modules by using the load syntax and script-scoped exports. I've also been convinced that there is a way to inject free variables into an isolated context, as mediated by the loader. I've been made aware that there exists a way to implicitly inject modules when loading scripts, which implies that there is a contract between the loader and loadee that certain free variables in the loadee will be bound through the module scope chain. This provides a finer-grained means of weaving dependencies into a module than that provided by the global environment record as passed to instantiate a loader.

It's also clear now that this proposal does use a three-layered approach (lexically scoped variables, module identifiers, and URL's), and that it simply differs from other systems in that module identifiers share the lexical scope, and URL's only grab "packages" in the sense that an entire tree of modules can be referenced through transitive "loads". I'm going to mull the implications, but one for sure, is that it is necessary to buy a whole package even if you only want a single function from it. Tooling like prototype's Sprockets might address this.

I have some outstanding "side-show" points for refining the proposal:

  • load handlers need to receive the MRL of the declaring module so they can resolve relative MRLs. This one's important.
  • there should be cleaner syntax for destructuring a loaded module. Preferably whether a module is rebound or loaded should be orthogonal to the destructuring syntax. Sugar.
  • the community should be called upon to weigh in on whether "import *" should be supported, and we should frame the question with education on the full implications of the trade-off.
  • we should consider a way to link one loader to another, such that a loader, for example a package loader, can be mapped responsibility for all modules in a subtree of the MRL name space without having to communicate exclusively in source strings.

I would like to see a lot more examples of solving problems on the wiki.

Thanks to everyone for the clarifications,

Kris Kowal

# Erik Arvidsson (15 years ago)

On Mon, Jun 7, 2010 at 10:35, Kris Kowal <kris.kowal at cixar.com> wrote:

Another thing that Ihab clarified which merits a full section on the wiki is the dynamic scoping of lexical module names.

This is a common misconception. Simple modules is using static lexical scoping, not dynamic scoping. The thing that might be confusing is that the loaded module is defined in the lexical scope of the module that loaded it.

# David Herman (15 years ago)

It would be good for this to be expressed in one of the examples, and for it to be clarified in the description of semantics that every script is also an anonymous module from which the exports are only accessible through the lexical scope "shadowing" (I assume) and by being bound to a module through a "load" expression.

This doesn't sound quite right-- in the terminology we used, scripts are not modules. An application is composed of a sequence of scripts, which are like module bodies but do not contain exports. Each script's bindings are in scope for all subsequent scripts. By contrast, the target of "load" is the body of a module, which can export bindings.
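
Concretely, following the thread's earlier examples (a sketch; aQuery.js is assumed to export $ as above):

<script type="harmony">
    module A = load "aQuery.js";   // a script may load and bind modules, but not export
</script>
<script type="harmony">
    A.$("body");                   // bindings from earlier scripts are in scope in later ones
</script>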

This is a point that Ihab clarified for me yesterday evening that merits bold and emphasis: loaded modules are not singletons.

Yes, I think something we need to do is write out some more material explaining the proposal in more tutorial fashion. The examples page was a start, but clearly not enough.

You do this to avoid having to compare MRL's for equivalence, particularly to avoid having to define equivalence given the potential abundance of edge cases.

example.com/module?a=10&b=20
example.com/module?b=20&a=10

Yes, as well as the fact that even the bit-for-bit same URL can deliver different bits from moment to moment. So module loading really is effectful (albeit at compile time), in the sense that it performs arbitrary Internet I/O. In lieu of requiring programmers to learn rules about when two references to modules are referring to the same memoized instance and when the instance is loaded and evaluated, simple modules make all this explicit and under the programmer's control.

It would also be good for there to be a way to bind $ without binding a module.

const A = load("aQuery.js").$; const B = load("bQuery.js").$;

There are a couple reasons why I think I'd avoid this kind of thing: for one, it means that |load| -- which indicates a /compile-time/ operation -- can now be arbitrarily nested in a program instead of just at top level. Also, loading is a fairly heavyweight operation, and since it doesn't memoize, you could very easily end up with accidental duplication.

As Sam says, you can write almost the same thing via dynamic loading:

const A = ModuleLoader.current.loadModule("aQuery.js").$;
const B = ModuleLoader.current.loadModule("bQuery.js").$;

or a little more conveniently:

function load(ml, mrl) {
    return ml.loadModule(mrl);
}

const ml = ModuleLoader.current;

const A = load(ml, "aQuery.js").$;
const B = load(ml, "bQuery.js").$;

The main difference from what I think you intended is that this would do the loading dynamically.

It's possible to use the module loader API to do this, slightly more verbosely. But why? If you say:

module A = load "aQuery.js";

then A.$ is already available for use in expression contexts.

I can make the same argument about "import *". If I "import A", I can access its contents as "A.$". To permit destructuring on all import expressions would be consistent philosophically.

I don't follow your reasoning-- import A.* is a convenience form to bind the exports of A as local variables. It serves a very different purpose.
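
To make the contrast concrete, a small sketch with invented module and member names:

module A = load "aQuery.js";
A.$("#menu");      // qualified access: usable in any expression context
import A.*;        // convenience form: binds A's exports, such as $, as local variables
$("#menu");        // the same function, now a local binding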

Local, nested module loading could either mean static loading, which I contend would be confusing and error-prone, or dynamic loading, which is already available via the dynamic loading API.

Your example might point to a need to augment the module loader API with information on 'load' calls specifying what module the 'load' occurs in.

Exactly. The Narwhal loader receives an id and a baseId from require(id) calls. Each module gets a fresh "require" effectively bound to its baseId. I think that the loader handler needs to receive the base MRL as an argument or part of the request object.

Yes, I agree. That was an oversight-- thanks for bringing it up.
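
Something like the following is what I'd imagine the hook doing once it receives the base MRL; the two-argument signature and the fetchSource helper are only a sketch, not part of the current proposal:

// hypothetical hook: receives the requested MRL and the MRL of the requesting module
function loadHook(mrl, baseMrl) {
    if (mrl.charAt(0) === ".") {
        // resolve "./util.js" or "../util.js" against the directory of baseMrl
        const base = baseMrl.slice(0, baseMrl.lastIndexOf("/") + 1);
        return fetchSource(base + mrl);  // a real resolver would also collapse "." and ".." terms
    }
    return fetchSource(mrl);  // fetchSource stands in for however the hook obtains source text
}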

Another thing that Ihab clarified which merits a full section on the wiki is the dynamic scoping of lexical module names.

I've said it before: it's not dynamic scoping. It's static, lexical, compile-time scoping. Dynamic scoping necessarily involves dynamically determining the binding of a variable. There's nothing of the sort happening here; it's all compile-time.

This is something I have not considered. It would be good to do a write-up on what use-cases you have in mind for this feature.

Yes, we should definitely do that. Two important use cases are 1) standard libraries, which would be shared as global module bindings in a standard module loader, and 2) mutually recursive modules, which need to agree on what they call one another.
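
Sketching the second case very roughly (treat the exact declaration syntax here as illustrative, not final):

// two modules declared in the same scope can refer to each other by the names
// that scope gives them -- here, Even and Odd
module Even {
    import Odd.*;
    export function even(n) { return n === 0 || odd(n - 1); }
}
module Odd {
    import Even.*;
    export function odd(n) { return n !== 0 && even(n - 1); }
}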

I'm going to mull the implications, but one thing is certain: it is necessary to buy a whole package even if you only want a single function from it.

True. I don't think it's reasonable to try to solve the more intricate problems of partial or on-demand loading of modules. I think people are still experimenting with this in the wild, and IMO it's premature to try to solve this in Harmony. With dynamic loading, people can continue experimenting with different approaches. And especially when you consider the fact that on-demand loading means the semantics is doing network I/O behind your back and lazy evaluation of arbitrary JS code, it becomes a very difficult programming model. As I've said before, laziness and effects don't mix (ask any Haskellite! ;).
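
With the dynamic API, deferred loading at least stays explicit, so the I/O and evaluation happen at a point the programmer chooses (the module name and handler below are invented):

var reports = null;   // not loaded yet

function onShowReports() {
    if (reports === null)
        reports = ModuleLoader.current.loadModule("reports.js");   // explicit, visible load
    reports.render();   // invented export, just for the sketch
}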

  • load handlers need to receive the MRL of the declaring module so they can resolve relative MRLs. This one's important.

Agreed.

  • there should be cleaner syntax for destructuring a loaded module. Preferably whether a module is rebound or loaded should be orthogonal to the destructuring syntax. Sugar.

I'm not sure I know what you're looking for here.

  • the community should be called upon to weigh in on whether "import *" should be supported, and we should frame the question with education on the full implications of the trade-off.

A few thoughts:

  1. I'd rather have a module system without import A.* than no module system at all.

  2. Community discussion of the issue is, of course, fine -- that's what es-discuss is here for.

  3. IMO, it'd be a big mistake to eliminate import A.*. It's not hard to avoid it, proscribe it in coding standards, or even reject it from lint tools. But withholding it from the language eliminates a real convenience for simple scripting, which continues to be an important use case for ES. And I don't see the proscription providing enough value.

The hazard of import A.* is that a new version of A may introduce new bindings that conflict with another module's bindings. In that case, code in the wild will start failing because it gets a compile-time conflict that isn't resolved. Another cost is that people would be more likely to use import A.* when A is some widely used library (including the builtins). This adds /some/ pressure for library writers to be more conservative about introducing new bindings.

But the failure mode for all of these is an early, compile-time error; in practice, production systems would pin particular versions of libraries rather than downloading from a 3rd-party server, and would probably avoid import A.* anyway. That's not what it's there for. It's there for quick scripts, rapid prototyping, etc.
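
A tiny sketch of the hazard, with invented exports:

module A = load "a.js";   // suppose version 2 of A starts exporting "trim"
module B = load "b.js";   // B has always exported "trim"
import A.*;
import B.*;   // early, compile-time conflict on "trim" -- nothing runs with a silently wrong binding
// writing A.trim(...) and B.trim(...) instead sidesteps the conflict entirely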

  • we should consider a way to link one loader to another, such that a loader, for example a package loader, can be given responsibility for all modules in a subtree of the MRL name space without having to communicate exclusively in source strings.

IIUC, this might already be achievable with the load hook (the function passed to the ModuleLoader constructor). That hook can make whatever decisions it wants about how to handle any given MRL. And it avoids having to over-specify MRL's. (We may end up needing to specify more of MRL's anyway, but I'd prefer to do no more than necessary.)
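
A sketch of that kind of delegation via the hook; the single-argument signature, the source-text return value, and the fetchSource helper are all assumptions here:

const appLoader = new ModuleLoader(function (mrl) {
    // hand everything under "packages/" to a package-specific resolution scheme
    if (mrl.indexOf("packages/") === 0)
        return fetchSource("http://packages.example.com/" + mrl.substring("packages/".length));
    return fetchSource(mrl);   // fetchSource stands in for however source text is obtained
});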

# Kris Kowal (15 years ago)

On Mon, Jun 7, 2010 at 12:10 PM, Erik Arvidsson <erik.arvidsson at gmail.com> wrote:

On Mon, Jun 7, 2010 at 10:35, Kris Kowal <kris.kowal at cixar.com> wrote:

Another thing that Ihab clarified which merits a full section on the wiki is the dynamic scoping of lexical module names.

This is a common misconception. Simple Modules uses static lexical scoping, not dynamic scoping. The thing that might be confusing is that the loaded module is defined in the lexical scope of the module that loaded it.

Reviewing the idea, it's certainly not dynamic scoping. Being very free with the analogy to a function call that the syntax suggests, I recklessly intuited that it might share the analyzability problems that dynamic scoping causes, but I have not found such a case. However, you cannot statically observe a reference error in a single script in isolation; you need to know the lexical scope in which it has been loaded. I don't think that's necessarily a problem. It's the same situation as any case where successive script tags have access to the modules declared by previous scripts.

Kris Kowal

(For anyone observing the political mess I've made, I do plan to do a write-up redacting my claim that Simple Modules can't be used to compose independently designed scripts. I think this is the big issue and I'm glad this design has a solution. I'll continue to ponder the implications for CommonJS and see if I can come up with a migration story that makes sense.)

# Kris Kowal (15 years ago)

Thanks in general,

On Mon, Jun 7, 2010 at 3:23 PM, David Herman <dherman at mozilla.com> wrote:

  • there should be cleaner syntax for destructuring a loaded module. Preferably whether a module is rebound or loaded should be orthogonal to the destructuring syntax. Sugar.

I'm not sure I know what you're looking for here.

I'll keep quiet until I've got an idea.

  1. I'd rather have a module system without import A.* than no module system at all.

And I would rather have a module system with import A.* than no module system at all. Seems like every time the issue comes up, a different consensus is reached. The same voter dynamics are probably why "function" declarations will never be abbreviated. Difficult to agree; easy to live with disagreement.

  • we should consider a way to link one loader to another, such that a loader, for example a package loader, can be given responsibility for all modules in a subtree of the MRL name space without having to communicate exclusively in source strings.

IIUC, this might already be achievable with the load hook (the function passed to the ModuleLoader constructor). That hook can make whatever decisions it wants about how to handle any given MRL. And it avoids having to over-specify MRL's. (We may end up needing to specify more of MRL's anyway, but I'd prefer to do no more than necessary.)

The only thing that appears to be missing is the ability to share an opaque object representing a pre-compiled module. I believe Brendan has mentioned in the past that this kind of problem can be solved behind the scenes, so it's certainly not critical.

Kris Kowal

# David Herman (15 years ago)

The only thing that appears to be missing is the ability to share an opaque object representing a pre-compiled module.

The ModuleLoader attachModule method lets you take an already-instantiated module and "attach" it to another module loader, i.e., share that module instance with the loader.
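
For instance (the argument order and the registered name are assumptions about attachModule's signature; packageLoader and appLoader stand for two ModuleLoader instances):

// instantiate a module once with one loader...
const sha = packageLoader.loadModule("crypto/sha.js");
// ...then share that same instance with another loader under a name of its choosing
appLoader.attachModule("crypto/sha.js", sha);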