Modules: compile time linking (was Re: Modules feedback, proposal)

# James Burke (13 years ago)

On Fri, Mar 30, 2012 at 3:25 PM, James Burke <jrburke at gmail.com> wrote:


  1. Compile time linking

There is a tension between the runtime calls like System.load and the compile time linking of the module/import syntax. The runtime capabilities cannot be removed. However, I believe it would simplify the story for an end user if the compile time linking is removed.

While the compile time linking may give some kinds of type checks/export name checking, it is only one level deep; it does not help with this sort of checking:

```js
// Compile time checking can make sure
// 'jquery.js' does export a $
import $ from 'jquery.js';

// However, it cannot help check if foo is
// a real property
$.foo();
```

Similar story for prototype properties on constructor functions.
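To sketch the limit of what that buys (the module name and method here are made up):

```js
// 'widget.js' and its Widget export are hypothetical, for illustration only.
import Widget from 'widget.js';   // checkable at compile time: Widget is exported

var w = new Widget();
w.render();                       // not checkable: a missing Widget.prototype.render
                                  // still only fails at runtime
```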

New possibilities open up if the compile time stuff is removed, and I believe it simplifies the end user's day-to-day interaction with modules (more below).

Judging from some feedback on twitter, I may not have fully tied together the costs and benefits for suggesting that the compile time binding be dropped:


Benefits of compile time binding

This is what I need help in understanding. The benefits I have heard so far:

  1. Being able to check export names/types. As mentioned, this feels like a very shallow benefit, since it does not apply to properties outside of the export properties. See constructor functions and function libs like jQuery.

  2. It may help allow some future things like macros?


Benefits of runtime only

  1. Upgrading existing libraries

The biggest issue we have seen for AMD loaders so far is getting libraries to update to register as modules. They still need to operate in "old world", non-module, browser globals situations.

Why is it important to opt in to module registration? It is not really about being able to get the right export value. In a pinch, browser globals could be read for that.

The really important part is knowing what dependencies the script needs before it should be executed. Example:

Backbone needs jQuery and underscore to set itself up (technically just underscore for initial "module evaluation", but the point is it has two dependencies).

For ES.next, how can Backbone register itself as being an ES.next module, and that it needs jQuery and underscore executed before it runs, while still allowing Backbone to run in non-ES browsers?

When ES.next comes out, non-ES.next browsers will still be in play for at least another 2-5 years. This is my estimate based on typical 2-year mobile contracts with carriers not incentivized to upgrade software, and the half-lives of older Windows OS/IE versions.

Libraries will need to work in old world browsers for a few years.

Possible solutions:

a) Ask libraries to provide a lib.es-next.js version of themselves in addition to the old world version, so that compile time linking with new "module/import" syntax can be used.

b) Have a way for the library to do a runtime type check, and opt-in to the call.

c) Something else?

Option a) seems bad. It complicates the library deployment/distribution story. As a library developer, I just want one JS file I can deliver. It makes support much easier.

On b): the module_loaders API has a way to do a runtime module registration, but as I understand it, it means that a consumer of my library then needs to use the System.load() API to get a hold of it.
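For concreteness, a sketch of what such a runtime opt-in might look like from the library's side (System.set here is just my placeholder guess at the registration call; the exact API is up to the module_loaders spec):

```js
(function (global) {
  var Backbone = { VERSION: '0.9' /* real setup elided */ };

  // Feature-detect an ES.next runtime registration API...
  if (typeof System !== 'undefined' && typeof System.set === 'function') {
    System.set('backbone', Backbone);
  }
  // ...while still publishing the old-world browser global.
  global.Backbone = Backbone;
}(this));
```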

If I am writing my code for an ES.next browser, it seems very awkward then for me to use Backbone, if Backbone used the runtime module_loaders API to register a module:

```js
// app.js
import draw from "shape.js";

System.load("backbone", function (backbone) {
  // ...
});
```

What if I need Backbone to generate one of my exports?

If only runtime mechanics are used, then I could do something like this:

```js
// app.js
let {draw} from "shape.js",
    backbone from "backbone.js";
```

  2. Simplifies returning a function for the module value. It sounded like there were concerns about prototypes and such when doing compile time checking for the "function as the module export" case.

It seems weird to me that this case has special restrictions while "export function Foo() { /* constructor function */ }" is OK. This points to the shallowness of the export checking that the compile time checking enables, and the complication of making that top level of exports "special".

Being able to just "return" the module export is also easier for a JS dev to understand, with no special weird syntax. I'm not sure what the "export call" syntax looks like (I'm guessing it starts with that), but it sounds more complicated, as does the older "export this function (){}" idea.

Using functions as exports has been used heavily in Node and AMD modules. Not having support for them will be seen as a step backwards.
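For reference, a minimal sketch of that pattern as it is written today in Node and AMD (the details are illustrative):

```js
// Node/CommonJS: the module's value is a function.
module.exports = function create(options) {
  return { options: options };
};

// AMD: the factory returns a function as the module's value.
define(function () {
  return function create(options) {
    return { options: options };
  };
});
```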

  3. As mentioned, it may be possible to remove "import" as new syntax for ES.next, and just rely on normal variables and destructuring. While "less syntax" may not be good in and of itself, given the other tradeoffs mentioned here, it seems like a benefit.

  4. It opens the door for loader plugins. We have found these incredibly useful in AMD loaders because they reduce the "pyramid of doom" callback nesting that is needed to properly set up modules that may use network resources for initialization, like a text template.

Note that I do not believe promises or some similar API helps here, unless the entire module API is promise-based. This seems unlikely to happen.

Loader plugins have been used to fetch resources that are part of a module's setup; without a loader plugin, the module's external API would be complicated, forced to become callback/promise-based just because it had some small extra init work to do.

I will not expand on this item more because I do think it is a lower benefit. If you are curious, see AMD loader usage of plugins like "text!" and "has!".

Also note that loader plugins can actually inline their "exports" in an optimized file, which reduces network traffic for deployment.
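For the curious, a sketch of what the plugin pattern looks like in RequireJS today (the template path is made up): the "text!" plugin fetches the template before the factory runs, so the module's own API stays synchronous.

```js
define(['text!templates/row.html'], function (rowHtml) {
  // rowHtml is the template file's contents, resolved by the text! plugin
  // before this factory executes.
  return function renderRow(data) {
    return rowHtml.replace('{{name}}', data.name);
  };
});
```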


Summary

Going runtime only:

  • Being able to allow lower level JS libs to live in both the old world and ES.next
  • Clearer rules/cleaner syntax for doing "function as the module value"
  • Less new syntax
  • The possibility for loader plugins that can simplify module construction code and exports API

The current compile time/runtime split:

  • Partial name/type checking with the compile time code.
  • Some future benefit?

I could be missing a lot of the compile time checking benefits, and I would appreciate being pointed in the right direction.

Also, I may be using "runtime" and "compile time" too coarsely here; there is some subtlety to it, so I'm open to suggestions on clearer terminology.

James

# Luke Hoban (13 years ago)

On Fri, Mar 30, 2012 at 3:25 PM, James Burke <jrburke at gmail.com> wrote: [snip] The module_loaders API has a way to do a runtime module registration, but as I understand it, it means that a consumer of my library then needs to then use the System.load() API to get a hold of it.

My understanding was that this is not necessarily true. For example - in the syntax of the current API design on the wiki:

```html
// app.html
<script src='backbone.js'></script>
<script src='app.js'></script>
```

```js
// backbone.js
System.set("backbone", { something: 42 });
```

```js
// app.js
import something from "backbone";
console.log(something);
```

The ES6 module syntax and static binding of app.js still works correctly, because backbone.js has been fully executed and has added itself to the module instance table before app.js is compiled (which is the point where the static binding is established). There are restrictions here of course, due to the need for the dependent modules to have been made available before compilation of the ES6 code.

At least in cases like the above though, libraries can continue to work on non-ES6 browsers, and feature-detect ES6 module loaders to register into the ES6 loader so that later processed modules (typically app code) can use ES6 syntax if desired. Moreover, it ought to in principle be possible to build AMD (or other current module API)-compliant shims over the ES6 module definition API that allow existing modules to be used as-is, and still consumed with ES6 module syntax.
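A very rough sketch of what such a shim might look like, using only the System.set/System.load calls discussed in this thread (real AMD semantics such as configuration, plugins, cycles, and anonymous modules are elided, and the API shape is an assumption):

```js
// Hypothetical shim: not the wiki API, just an illustration of the idea.
function define(name, deps, factory) {
  if (deps.length === 0) {
    System.set(name, factory());
    return;
  }
  var remaining = deps.length, resolved = [];
  deps.forEach(function (dep, i) {
    System.load(dep, function (mod) {
      resolved[i] = mod;
      if (--remaining === 0) {
        System.set(name, factory.apply(null, resolved));
      }
    });
  });
}
```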

FWIW - I asked about a few similar scenarios related to module loaders interop in [0].

Luke

[0] esdiscuss/2011-November/018457

# James Burke (13 years ago)

On Sat, Mar 31, 2012 at 11:02 AM, Luke Hoban <lukeh at microsoft.com> wrote:

On Fri, Mar 30, 2012 at 3:25 PM, James Burke <jrburke at gmail.com> wrote: [snip] The module_loaders API has a way to do a runtime module registration, but as I understand it, it means that a consumer of my library then needs to then use the System.load() API to get a hold of it.

My understanding was that this is not necessarily true.  For example - in the syntax of the current API design on the wiki:

```html
// app.html
<script src='backbone.js'></script>
<script src='app.js'></script>
```

```js
// backbone.js
System.set("backbone", { something: 42 });
```

```js
// app.js
import something from "backbone";
console.log(something);
```

The ES6 module syntax and static binding of app.js still works correctly, because backbone.js has been fully executed and has added itself to the module instance table before app.js is compiled (which is the point where the static binding is established).  There are restrictions here of course, due to the need for the dependent modules to have been made available before compilation of the ES6 code.

This requires me, as an app developer, to know which of the dependencies I will use are not ES.next compatible, load them first (via inline script tags or via another script loader), and only then do the ES.next work.

This does not seem like an improvement. There is no reason for me to use ES.next modules in this case, and it makes my life really difficult if I happen to use one or two dependencies that are ES.next modules.

At least in cases like the above though, libraries can continue to work on non-ES6 browsers, and feature-detect ES6 module loaders to register into the ES6 loader so that later processed modules (typically app code) can use ES6 syntax if desired.  Moreover, it ought to in principle be possible to build AMD (or other current module API)-compliant shims over the ES6 module definition API that allow existing modules to be used as-is, and still consumed with ES6 module syntax.

One of the goals of my feedback is to actually get rid of AMD and its associated loaders. If the ES.next modules require me as an app developer to use a script loader to manage some of this complexity, then I do not see it as a net gain over just using AMD directly.

James

# David Herman (13 years ago)

Benefits of compile time binding

This is what I need help in understanding. The benefits I have heard so far:

  1. Being able to check export names/types. As mentioned, this feels like a very shallow benefit, since it does not apply to properties outside of the export properties. See constructor functions and function libs like jQuery.

Static checking is a continuum. It's mathematically proven (e.g., Rice's theorem) that there are tons of things a computer just can't do for you in general. So we have to pick and choose which are the things that can be done for us that are a) decidable, b) tractable, and c) useful. In my experience, checking variables is really useful, even though it certainly can't check every aspect of a program's correctness.

What's more, it's actually somewhere between insanely hard and impossible to define an optional/gradual/hybrid type system for a language that keeps having code appended to it dynamically. But if you can talk about units of compilation, it becomes much more approachable. So when you say "static modules aren't worth it because they don't do enough checking" you're also pretty much closing the door to ever introducing any more static constructs into the language.

Now, there are certainly people who will say "boo, don't pollute my yummy dynamic language with your stinky static constructs." But there's plenty of static constructs that aren't stinky at all. We speak to game programmers who tell us they'd prefer to write in C# and compile to JS because they want the engineering benefits of types. Brian McKenna is about to demo Roy, a statically typed language that compiles to JS, at JSConf next week. I personally would love to introduce a statically typed dialect of JS that could integrate with dynamically typed JS, but that could still be used to help people catch bugs and improve their performance -- and the predictability of their performance. This can be and has been done in other dynamically typed languages [1], and it can be done for JS, too. But I'm telling you now, we'll never find a way to make it work without static modules.

  2. It may help allow some future things like macros?

If you have dynamic modules, you can't use them to export any compile-time constructs, like macros, static operator overloading, custom literals, or static types. If you load a module at runtime, then it's too late by the time you actually have the module to use it for anything at compile time.
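To make the timing concrete, using the System.load callback style from earlier in this thread:

```js
System.load("macros.js", function (m) {
  // The surrounding program was parsed and compiled before this callback
  // ever runs, so nothing m exports can change how this code was read;
  // m can only contribute runtime values here, never compile-time constructs.
  console.log(typeof m);
});
```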

# David Herman (13 years ago)

On Mar 31, 2012, at 6:47 PM, David Herman wrote:

This can be and has been done in other dynamically typed languages [1], and it can be done for JS, too.

[1] www.ccs.neu.edu/home/stamourv/papers/numeric-tower.pdf

# Claus Reinke (13 years ago)

If you have dynamic modules, you can't use them to export any compile-time constructs, like macros, static operator overloading, custom literals, or static types. If you load a module at runtime, then it's too late by the time you actually have the module to use it for anything at compile time.

That is not quite accurate, unless I'm misinterpreting you. See, for instance, the variety of work surrounding type Dynamic in ML-like languages: it is possible to define language constructs that represent the static/dynamic phase distinction, so that

  • toDynamic(expression) pairs an expression with a runtime representation of statically inferred information (such as type)

  • fromDynamic([info,expr]) extracts an expression from such a pair if and only if the runtime info for expr matches the statically inferred information for the usage context (since this can fail, it is often embedded into pattern matching, aka type case)

toDynamic marks program points where objects pass out of range of static inference (eg, storing modules/objects to disk), fromDynamic marks program points where dynamically annotated objects need to be checked against static information before embedding them (such as linking a dynamically loaded module).

This pair of constructs allows for multiple program stages, each with their own static/dynamic phases. You can load a module at runtime, then enter compile time for that module, then enter a new runtime stage with the newly compiled and linked code. This was used, eg, in orthogonally persistent programming languages to dynamically store/load statically typed code/modules to/from database-like storage.
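A JS-flavored sketch of the idea (the names follow the ML literature; the string "descriptions" stand in for real inferred type information):

```js
function toDynamic(value, info) {
  // pair a value with a runtime representation of its static description
  return { info: info, value: value };
}

function fromDynamic(dyn, expectedInfo) {
  // extract the value only if its recorded description matches what the
  // usage context expects; otherwise fail (this is the "type case" check)
  if (dyn.info !== expectedInfo) {
    throw new TypeError("dynamic value does not match expected description");
  }
  return dyn.value;
}

// A loader could wrap a stored module on the way out, and check it again
// on the way back in before linking:
var stored = toDynamic({ sin: Math.sin }, "{ sin: number -> number }");
var mathLike = fromDynamic(stored, "{ sin: number -> number }");
```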

Multi-stage languages drive this further by allowing stage-n dynamic constructs to set static properties for stage-(>n) code.

Their motivation tends to be meta-programming.

Which is why I'm confused by the staging aspects of ES6 modules:

  • ES is a multi-stage language, thanks to eval and module load
  • ES6 modules offer "static" constructs only for the first stage, after which everything seems to devolve to dynamic again?

I'm not saying this is wrong, since JS analysis is hard, but it would be good to be clear about whether this single-stage approach to a multi-stage problem is intended, or whether the current spec incompletely captures the intentions.

Claus

# Sam Tobin-Hochstadt (13 years ago)

I think you misunderstand the relationship between what Dave said, and the type Dynamic work. Also, your later comments about staging are unrelated to this issue, and wrong with regard to the module design. More detail below ...

On Sun, Apr 1, 2012 at 10:52 AM, Claus Reinke <claus.reinke at talk21.com> wrote:

If you have dynamic modules, you can't use them to export any compile-time constructs, like macros, static operator overloading, custom literals, or static types. If you load a module at runtime, then it's too late by the time you actually have the module to use it for anything at compile time.

That is not quite accurate, unless I'm misinterpreting you. See, for instance, the variety of work surrounding type Dynamic in ML-like languages: it is possible to define language constructs that represent the static/dynamic phase distinction, so that

  • toDynamic(expression) pairs an expression with a runtime representation of statically inferred information (such as type)

  • fromDynamic([info,expr]) extracts an expression from such a pair if and only if the runtime info for expr matches the statically inferred information for the usage context (since this can fail, it is often embedded into pattern matching, aka type case)

toDynamic marks program points where objects pass out of range of static inference (eg, storing modules/objects to disk), fromDynamic marks program points where dynamically annotated objects need to be checked against static information before embedding them (such as linking a dynamically loaded module).

This is mostly a correct characterization of the academic work on type Dynamic. However, it's really missing the point that Dave was making. If you don't have static modules, nothing static can be exported from them. Using a facility like you describe, static information such as types could be associated with the dynamic values exported by a dynamic module after the fact. For example, if a module exported a two argument function, we could somewhere else use that operation as the dynamic implementation of a static overloading of +. However, the static overloading itself can't be exported from the module. Which is exactly what Dave said.

This pair of constructs allows for multiple program stages, each with their own static/dynamic phases. You can load a module at runtime, then enter compile time for that module, then enter a new runtime stage with the newly compiled and linked code. This was used, eg, in orthogonally persistent programming languages to dynamically store/load statically typed code/modules to/from database-like storage.

Again, this misunderstands the relationship between staging and persistence. Persistence is about values -- storing and retrieving values of the language to a disk or database. The type Dynamic work shows how to do this safely in a typed language. Multi-stage programming is about programs, not values. Even when a value contains computation, as with a function or an object, the program is gone.

Multi-stage languages drive this further by allowing stage-n dynamic constructs to set static properties for stage-(>n) code. Their motivation tends to be meta-programming.

Which is why I'm confused by the staging aspects of ES6 modules:

  • ES is a multi-stage language, thanks to eval and module load
  • ES6 modules offer "static" constructs only for the first stage,   after which everything seems to devolve to dynamic again?

I'm not saying this is wrong, since JS analysis is hard, but it would be good to be clear about whether this single-stage approach to a multi-stage problem is intended, or whether the current spec incompletely captures the intentions.

Neither of these is correct.

The modules design provides for both static and dynamic elements at every phase. In particular, if we have the program "A.js":

import sin from "@math";
console.log(sin(3));

and we load that from "B.js" with:

System.loadAsync("A.js", m => m)

then the runtime of "B.js" is the compile-time of "A.js", and "A.js" can use the static features of modules just fine.

# Claus Reinke (13 years ago)

I think you misunderstand the relationship between what Dave said, and the type Dynamic work.

The purpose of my questions is to remove misunderstandings - it is entirely possible that some of them are on my side!-)

If you have dynamic modules, you can't use them to export any compile-time constructs, like macros, static operator overloading, custom literals, or static types. If you load a module at runtime, then it's too late by the time you actually have the module to use it for anything at compile time.

This assumes that runtime loading always follows compile-time, framing the question in such a way as to preclude alternative answers.

.. This is mostly a correct characterization of the academic work on type Dynamic. However, it's really missing the point that Dave was making. If you don't have static modules, nothing static can be exported from them. Using a facility like you describe, static information such as types could be associated with the dynamic values exported by a dynamic module after the fact. For example, if a module exported a two argument function, we could somewhere else use that operation as the dynamic implementation of a static overloading of +. However, the static overloading itself can't be exported from the module. Which is exactly what Dave said.

Narrowly speaking, yes, when importing dynamic code, you'll use static constructs in the importing code to associate static with dynamic info.

However, consider static/lexical scoping and this JS example:

```js
var inner = "console.log(x)";
var outer = function(varname, inner) {
  return "(function(" + varname + "){ eval('" + inner + "') })(1)";
};

console.log(outer("x", inner));

eval(outer("x", inner)); // 1

eval(outer("z", inner)); // ReferenceError: x is not defined
```

The main code dynamically evaluates the result of outer, which dynamically evaluates inner. Yet the main code is able to establish in outer's result a static binding to be available to inner.

This level of unsafe freedom can be detrimental to programmer health, but it should give you the idea of what one might want to provide in a safer form. It also shows that compile-time can follow runtime with dynamically created/loaded code.

This nesting of eval-compiles in eval-runtimes is a bit like writing your meta-level programs in (meta-)continuation passing style but, at the cost of this awkward nesting, stage-n code retains control over code and static environment of stage-(>n) code.

So, by phrasing the question less narrowly, we already get one different answer.

This pair of constructs allows for multiple program stages, each with their own static/dynamic phases. You can load a module at runtime, then enter compile time for that module, then enter a new runtime stage with the newly compiled and linked code. This was used, eg, in orthogonally persistent programming languages to dynamically store/load statically typed code/modules to/from database-like storage.

Again, this misunderstands the relationship between staging and persistence. Persistence is about values -- storing and retrieving values of the language to a disk or database. The type Dynamic work shows how to do this safely in a typed language. Multi-stage programming is about programs, not values. Even when a value contains computation, as with a function or an object, the program is gone.

As for misunderstandings: programs/modules/functions can be values, which can be persisted, and reflection/introspection make it possible to recover even the source for editing. In sufficiently advanced systems, such as those researched in the 1980s/1990s, that was the basis for IDEs which hyper-linked source code to stored objects ("hyperprogramming").

It's been a long time, groups and online material have moved or disappeared, but I think this report is a reasonable overview of some of the work:

Orthogonally Persistent Object Systems (1995)
by Malcolm Atkinson and Ronald Morrison
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.8942

An early paper describes the use of persistent procedures as modules in PS-Algol:

Persistent First Class Procedures are Enough (1984)
by MP Atkinson and Ronald Morrison
http://www.cs.st-andrews.ac.uk/files/publications/download/AM84.pdf

I did implement a module system for a functional language once, where the introspection aspect was only available to the IDE, not programmatically; still, one was able to step through code that dynamically loaded modules, and the IDE would present the dynamically loaded code for inspection and editing as if it had just been entered.

Which is why I'm confused by the staging aspects of ES6 modules:

  • ES is a multi-stage language, thanks to eval and module load
  • ES6 modules offer "static" constructs only for the first stage, after which everything seems to devolve to dynamic again?

Neither of these is correct.

The modules design provides for both static and dynamic elements at every phase. In particular, if we have the program "A.js":

import sin from "@math";
console.log(sin(3));

and we load that from "B.js" with:

System.loadAsync("A.js", m => m)

then the runtime of "B.js" is the compile-time of "A.js", and "A.js" can use the static features of modules just fine.

Yes to the latter. But what are B's options for using static module features after loading imports from A? Since A doesn't export anything, let me change the example - why is this not supported:

System.loadAsync("@math", m => {
    import sin from m;
    console.log(sin(3));
})

or this:

module math { export function sin .. }
module M = math;
import sin from M;

Naively, I would expect reflecting the module instance to involve a toDynamic, and attempting to import from a module instance object to involve a fromDynamic. I don't want to be reduced to late dynamic property selection and checks once I've used a loader to obtain a module instance. But, by nature of async callbacks, code that uses @math's exports, via m, needs to be in the loader callback, which doesn't admit "static" module constructs.

Consider the following hierarchy of late vs early checks in selecting/extracting components from objects:

0 property selection: each selection stands or fails on its own

1 ES6 destructuring: irrefutable matching, very late checks; let {f,g} = obj. This is sugar for separate property selection from objects - even after destructuring, the components may not actually exist.

2 ES.next pattern-matching: refutable matching, check on match; let !{f,g} = obj (made-up syntax for strict matching). If the match succeeds, the components will be available.

3 structural typing: structural constraints can be propagated through the program source, to establish regions of code which share the same object structures; (obj::{f,g}) => { let !{f,g} = obj; .. }. If this function is ever successfully entered, the match will succeed and the components will be available.

4 static structural typing: there is only one region for structural consistency, which is the whole program; If the whole program is accepted, any matches or component selections in it will succeed

4 is much too restrictive to be useful for a dynamic environment, but if we augment it with some form of Dynamic types, we can get to 3, with its regions of structural consistency and early consistency checks on entering such a region. In my view, 3 is the most practical scenario for combining dynamic module loading with early errors. If loader callbacks are regions of structural consistency for the modules loaded, 3 gives us load time checking and errors.

2 is a simple variant of 3, where consistency constraints are not propagated by a type system, but are checked early, and in one swoop per object. This is the simplest system that can give us separation of concerns for dynamic modules - matching the structure corresponds to a module API consistency check. 2 gives us link time checking and errors.

1 or 0 give more flexibility, but the late checks and errors are not suitable for module separation - even after module loading and linking succeed, imports can fail at runtime.

My feeling is that ES6 modules aim for a variant of 4 in their static aspects, then fall back to 1/0 for dynamic modules. But the sweet spots for safe dynamic modules are 2 or 3. And the analysis hazards of JS might make 3 impractical. Still, if import declarations were sugar for refutable object matches, that would give a reasonable compromise, and if this can be augmented by an analysis that allows to shift checks from linking to loading, all the better.

Looking back to my dynamically loaded math example above, the loader has the source code of the module to be loaded, and the analysis results for its callback. It should be able to match the two at load time, enabling the safe use of "static" module features in the loader callback. If that is untractable for JS, the import declarations should act as refutable structure matches, providing safety at link time.
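A rough JS sketch of level 2 (importStrict is a made-up helper, not proposed API): an import that acts as a refutable structure match fails at link time when an expected export is missing, instead of leaving an undefined binding to fail later.

```js
function importStrict(moduleInstance, names) {
  var bindings = {};
  names.forEach(function (name) {
    if (!(name in moduleInstance)) {
      // refute the match at link time rather than at use time
      throw new ReferenceError("module does not export '" + name + "'");
    }
    bindings[name] = moduleInstance[name];
  });
  return bindings;
}

// e.g. inside a loader callback:
// var sin = importStrict(m, ["sin"]).sin;
```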

Claus

# Claus Reinke (13 years ago)

Libraries will need to work in old world browsers for a few years. Possible solutions:

a) Ask libraries to provide a lib.es-next.js version of themselves in addition to the old world version, so that compile time linking with new "module/import" syntax can be used.

b) Have a way for the library to do a runtime type check, and opt-in to the call.

c) Something else?

How about these two translation-based options (I'll use mod.current as a placeholder for whichever current module workaround is in use, be it CommonJS, NodeJS, AMD, RequireJS, or other script loaders):

  1. implement mod.current in terms of ES6 constructs
  2. implement ES6 modules in terms of mod.current

For the current phase of spec development, 1 is the more interesting, as it would show whether ES6 modules are sufficient as a basis for implementing currently used module patterns. Any gaps pointed out by such an effort could still influence the ES6 modules spec. However, such translators might tempt developers to stick with their old mod.current code, knowing that it will run on ES6.

Longer term, 2 is more desirable, as it would allow the variety of mod.current efforts to be retired/phased out. Once ES6 modules are stable, programmers could start experimenting with them, giving feedback on the usability of the spec. The various mod.current implementations (via translation from ES6 modules) could serve as polyfills for ES implementations that do not yet support ES6 modules.

Note that, while it may be possible to map parts of mod.current to static ES6 modules, it is likely that some static-looking forms in mod.current will map to dynamic ES6 modules. Also, when mapping ES6 modules to mod.current, one will be hard-pressed to emulate the static checks, but as a polyfill, the remaining functionality will still be useful.

Claus

# James Burke (13 years ago)

On Sat, Mar 31, 2012 at 6:47 PM, David Herman <dherman at mozilla.com> wrote:

Static checking is a continuum. It's mathematically proven (e.g., Rice's theorem) that there are tons of things a computer just can't do for you in general. So we have to pick and choose which are the things that can be done for us that are a) decidable, b) tractable, and c) useful. In my experience, checking variables is really useful, even though it certainly can't check every aspect of a program's correctness.

I would add d) does not compromise other high value features. For me, one would be runtime module opt-in by code that also wants to work in non-ES.next environments.

If there are reasons not to treat that as a high value feature, that probably changes my feedback.

However, to be clear, I would like more name/type checking, and the ability to use something like "import *".

So to try to get a decent pass at all of those benefits, could the following evaluate/compile model be used:

Example:

module Foo { import * from 'Math'; }

Rules:

  • The 'Math' module is evaluated before Foo is evaluated.

  • Only the properties on Math that are available at the time of Foo's execution are bound to local variables via the "import *".

So, assuming Math has no dependencies (just to make this shorter), the sequence of events:

  • Load Foo, convert to AST, find "from" usage.
  • Load Math
  • Compile Math
  • Evaluate Math
  • Inspect Math's exported module value for properties
  • Modify the compiled structure for Foo to convert "import *" to have local variables for all of Math's properties that are known, only at this time (no funny dynamic 'with' stuff)
  • Evaluate Foo

I may not have all the right terminology; in particular, "convert to AST/work with compiled structure" may not be correct, but hopefully the idea comes across.

Benefits:

  • It is not "with" or its ilk.

  • Allows both "top level" module export name/type checking and "second level" checking, since more info on Math is available after running Math, including info on prototypes for any constructor functions.

  • Opens up allowing opt-in to ES.next modules and still be run in old world browsers.

  • Still seems to allow for some kind of macros and operator overloading later?

Possible hazards:

  • Something could modify Math's properties before a different module Bar is run, and Bar might see different * bindings than Foo. This happens with JS now though -- depending on when you execute a function, it may see different properties on objects it uses (see the sketch after this list).

  • Too many processing stages?

  • Circular import * is a problem. This could be flagged as an error though. Circular dependencies are minority cases (import * of this case even smaller), and the benefit of opening up second level name/type checking and runtime module opt-in may be worth the tradeoff.
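A small runnable illustration of the first hazard, with plain objects standing in for modules (no module syntax involved):

```js
var math = { square: function (x) { return x * x; } };

// Foo is "prepared" now: under the model above, its "import *" would burn
// in only the properties that exist at this point.
var fooBindings = Object.keys(math);        // ["square"]

// Something adds a property afterwards...
math.cube = function (x) { return x * x * x; };

// ...so a module Bar prepared later would see a different set of bindings.
var barBindings = Object.keys(math);        // ["square", "cube"]

console.log(fooBindings.length, barBindings.length); // 1 2
```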

James

# John J Barton (13 years ago)

On Thu, Apr 5, 2012 at 10:01 AM, James Burke <jrburke at gmail.com> wrote:

So, assuming Math has no dependencies (just to make this shorter), the sequence of events:

  • Load Foo, convert to AST, find "from" usage.
  • Load Math
  • Compile Math
  • Evaluate Math
  • Inspect Math's exported module value for properties
  • Modify the compiled structure for Foo to convert "import *" to have local variables for all of Math's properties that are known, only at this time (no funny dynamic 'with' stuff)
  • Evaluate Foo

This is certainly the sequence of events I expect: is there an alternative? I suppose the module syntax could support moving "Evaluate Math" to just before "Evaluate Foo" (a la a compiled language). The Math-level operations that created Math properties would not be known to Foo. That would be a major surprise to devs, I think.

jjb

# Claus Reinke (13 years ago)

So, assuming Math has no dependencies (just to make this shorter), the sequence of events:

  • Load Foo, convert to AST, find "from" usage.
  • Load Math
  • Compile Math
  • Evaluate Math
  • Inspect Math's exported module value for properties
  • Modify the compiled structure for Foo to convert "import *" to have local variables for all of Math's properties that are known, only at this time (no funny dynamic 'with' stuff)
  • Evaluate Foo

This is certainly the sequence of events I expect: is there an alternative? I suppose the module syntax could support moving "Evaluate Math" to just before "Evaluate Foo" (a la a compiled language). The Math-level operations that created Math properties would not be known to Foo. That would be a major surprise to devs, I think.

Several. The interaction of dynamic modules and static program properties makes for an interesting design space - conveniences you want to allow in one place quickly have unwanted consequences in other places (for instance, I'm not convinced that James' scheme does not permit emulating 'with'). A few alternatives:

  • you could try to determine Math's exported properties before compiling the importer

  • you could try to split Math into an interface (that statically fixes the exported properties) and an implementation (that allows dynamic instantiation of its interface); as long as the export interface remains invariant, dynamic changes to Math do not harm the dynamic importers static binding structure, even in the presence of 'import *'

  • you could allow Math's properties to be determined at load time (runtime of the importer), but drop the convenience of 'import *'; that allows Math's exported properties to change dynamically without affecting the static binding structure in the importer - essentially inlining a module interface in the import constructs (also, 'import *' is considered by many as bad documentation, even in static module languages)

  • ...

Note that none of the three I listed quite matches the one given by James. My feeling is that ES6 modules aim for the first of the alternatives I listed, with a dynamic fallback that does not permit 'import *'. I agree about not permitting 'import *' in combination with dynamically changing export lists, but I would like to be able to use import destructuring with an explicit import list even for this case.
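For illustration, the explicit-list import next to the 'import *' form it would replace (the syntax here is illustrative, not the draft grammar):

```js
// Explicit import list: the importer's binding structure is fixed by its
// own source text, so the exporter's export list can change underneath it
// without affecting static analysis of the importer.
import {sin, cos} from "Math";

// import * from "Math";
// By contrast, '*' depends on whatever "Math" exports when it is evaluated,
// which is exactly the case the alternatives above try to avoid.
```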

Claus

# Claus Reinke (13 years ago)

I just noticed that James' original email had two more items:

  • The 'Math' module is evaluated before Foo is evaluated.
  • Only the properties on Math that are available at the time of Foo's execution are bound to local variables via the "import *".

which puts it in line with the first option I mentioned, contrary to my final paragraph. Also, he seems concerned mostly with static modules, not dynamic ones.

Claus

# Sam Tobin-Hochstadt (13 years ago)

On Thu, Apr 5, 2012 at 11:23 PM, John J Barton <johnjbarton at johnjbarton.com> wrote:

On Thu, Apr 5, 2012 at 10:01 AM, James Burke <jrburke at gmail.com> wrote:

So, assuming Math has no dependencies (just to make this shorter), the sequence of events:

  • Load Foo, convert to AST, find "from" usage.
  • Load Math
  • Compile Math
  • Evaluate Math
  • Inspect Math's exported module value for properties
  • Modify the compiled structure for Foo to convert "import *" to have local variables for all of Math's properties that are known, only at this time (no funny dynamic 'with' stuff)
  • Evaluate Foo

This is certainly the sequence of events I expect: is there an alternative? I suppose the module syntax could support moving "Evaluate Math" to just before "Evaluate Foo" (a la a compiled language). The Math-level operations that created Math properties would not be known to Foo. That would be a major surprise to devs, I think.

The properties available to Foo are exactly the ones declared with export in Math. I don't think that should be a surprise to anyone -- that's what export is for.

However, it is the case that the evaluation of Math and of Foo happen close together, even though that doesn't make a difference in this case. It would potentially make a difference if there was some third module that imported from Foo.

# John J Barton (13 years ago)

On Fri, Apr 6, 2012 at 4:54 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu>wrote:

On Thu, Apr 5, 2012 at 11:23 PM, John J Barton <johnjbarton at johnjbarton.com> wrote:

On Thu, Apr 5, 2012 at 10:01 AM, James Burke <jrburke at gmail.com> wrote:

So, assuming Math has no dependencies (just to make this shorter), the sequence of events:

  • Load Foo, convert to AST, find "from" usage.
  • Load Math
  • Compile Math
  • Evaluate Math
  • Inspect Math's exported module value for properties
  • Modify the compiled structure for Foo to convert "import *" to have local variables for all of Math's properties that are known, only at this time (no funny dynamic 'with' stuff)
  • Evaluate Foo

This is certainly the sequence of events I expect: is there an alternative? I suppose the module syntax could support moving "Evaluate Math" to just before "Evaluate Foo" (a la a compiled language). The Math-level operations that created Math properties would not be known to Foo. That would be a major surprise to devs, I think.

The properties available to Foo are exactly the ones declared with export in Math. I don't think that should be a surprise to anyone -- that's what export is for.

Ok thanks, now I understand some of the previous discussions. If the 'export' property list is literal, then computation in the Math module can't change the properties exported, but it can change the properties of those properties. So we get one level of checking.

The definite and small value of this check must be balanced against the less clear but possibly larger value of the flexibility granted by an imperative export and JS's traditional compile/run cycle.

jjb


# James Burke (13 years ago)

On Fri, Apr 6, 2012 at 2:04 AM, Claus Reinke <claus.reinke at talk21.com> wrote:

I just noticed that James' original email had two more items:

  • The 'Math' module is evaluated before Foo is evaluated.
  • Only the properties on Math that are available at the time of Foo's execution are bound to local variables via the "import *".

which puts it in line with the first option I mentioned, contrary to my final paragraph. Also, he seems concerned mostly with static modules, not dynamic ones.

The goal is to allow modules declared dynamically via an API to work with modules declared statically via the new syntax keywords, and to do so in a way that does not require a userland script loader library to work out the correct load order, which is what the current design needs.

So: an easier opt-in upgrade path for existing code, while hopefully giving deeper name/type checking, allowing "import *", and leaving room for the macros and operator overloading being considered for later.

James

# James Burke (13 years ago)

On Fri, Apr 6, 2012 at 4:54 AM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:

The properties available to Foo are exactly the ones declared with export in Math.   I don't think that should be a surprise to anyone -- that's what export is for.

However, it is the case that the evaluation of Math and of Foo happen close together, even though that doesn't make a difference in this case. It would potentially make a difference if there was some third module that imported from Foo.

A clarification: 'Math' may have been a bad example. I chose it since it was something concrete where an "import *" on it made sense.

The order of events I listed was to allow "Math" to opt in to registering a module using a runtime API. I do not expect the real, core "Math" lib to do so; I just chose a concrete example where doing "import *" makes sense.

Executing a dependent module before finishing the compilation of the module that pulls in the dependency was meant to allow better interop with a runtime module API, in a way that would give deeper name/type checking and support an "import *" syntax, but not be like "with", because any properties dynamically added to Math after Foo executes are not available to Foo (assuming dynamically adding properties to Math were even allowed).

Since the "import *" burns in the local variables during the final AST modification of the module, after evaluating its dependency, deeper name/type checking can be done beyond the top level exports, since the dependency has actually been evaluated.

James

# John J Barton (13 years ago)

On Fri, Apr 6, 2012 at 10:59 AM, Brendan Eich <brendan at mozilla.org> wrote:

John J Barton wrote:

The definite and small value of this check must be balanced against the less clear but possibility larger value of flexibility granted by an imperative export and JS's traditional compile/run cycle.

There's no either-or here, so no need to balance. JS has lots of function&object-based expressiveness, almost all dynamic. Modules add checking and explicit control over staging that's latent in JS-in-HTML via multiple <script> tags. Want both.

With tags, developers control staging with declaration order, imperative execution order, and event processing. These allow staging of JS and non-JS resources as well as their computed outputs.

With modules, developers control staging by specifying dependencies. Without support for non-JS dependencies and with no control over execution during the dependency traversal, the tag-based solution will still be required, and mixing these two approaches needs to work well. Supporting non-JS dependencies and allowing execution would open up the possibility of loading JS entirely with the module system. I think this goal is worth more than the checking one.

jjb