Brian Di Palma (2014-10-08T12:11:53.000Z)
Hi Andreas,

Thanks for the response and explanations.

I didn't realize how limited in power these fast parsers actually
are; they are basically lexers.
So yes, this would require more bookkeeping on their part, and that
would impact performance.
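
To make the bookkeeping concrete, here is the sort of case I have in
mind (a minimal sketch; the module path and names are made up):

```js
// `version` is referenced before its import appears. A lexer-like fast
// parser can't tell whether this is an unbound name or a legitimate
// forward reference without tracking declarations and scopes for the
// whole module.
console.log(version);

import { version } from "./meta.js";
```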

I'm doubtful that it would have a significant user-perceivable effect though.
I imagine modern browser engines perform a lot of work in parallel
where they can.
It would seem that the network would be far slower than the engine parsing JS.

Nonetheless, I understand the concerns.

One way to avoid an impact on current workloads is to parse
in this fashion only for modules.
This should mean no impact on existing code bases; the impact would
only be felt with native modules.
Those would rely on HTTP/2, Server Push, and resource hinting, so they
will tend toward smaller files.
If you are shipping native modules then you couldn't bundle them into a large module.

As these modules would be standalone, fast parsing should be
embarrassingly parallel.
Let's be optimistic and say that in 5 years developers will be able to
roll out native modules.
I don't think that in 5 years a shortage of cores will be a problem,
even on mobile devices.
Given how slowly older IEs are being replaced on enterprise desktops
it may be 7 years...

This is all conjecture on my part of course...

Yes, hoisting is another complication (more bookkeeping), and it will
probably delay when an error can be raised.
But would you not have to deal with it anyway? Can you not export a
hoisted function?
The module system has to do binding checks on what it exports, so
keeping track of hoisted functions might have to happen anyway.
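
For example (a minimal sketch of what I mean; the helper name is hypothetical):

```js
// The exported function declaration is hoisted, so both the export
// clause and the call below can appear before the declaration itself.
// Checking the export binding therefore requires tracking hoisted
// functions.
export { helper };

console.log(helper()); // works because function declarations are hoisted

function helper() {
  return 42;
}
```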

Modules would end up being slower to parse than scripts, but modules
are for large-scale development.
I'm not sure in the grand scheme of things it will be that relevant though,
especially when weighed against the increased static reasoning that modules allow.

Fully closed modules are, as you said, probably too tedious; that
can be dropped.
It's more about making modules closed against user-defined state as
opposed to system-defined state.
I think that will flow very well with the module system.

```js
import * as global, {myGlobalFunction} from "@global";
```

That should be enough to allow easy upgrading from ES5 scripts while still
allowing the module system/tooling to provide more static guarantees.
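
As a rough illustration of the upgrade path (assuming the proposed
"@global" module existed; the identifiers are just placeholders), a tool
could rewrite an ES5 script's free identifiers into explicit imports:

```js
// Before: an ES5 script using free (global) identifiers.
// var xhr = new XMLHttpRequest();
// xhr.open("GET", "/data?since=" + Date.now());
// myGlobalFunction(xhr);

// After: the same code as a module, with the free identifiers imported
// explicitly from the proposed "@global" module.
import { Date, XMLHttpRequest, myGlobalFunction } from "@global";

var xhr = new XMLHttpRequest();
xhr.open("GET", "/data?since=" + Date.now());
myGlobalFunction(xhr);
```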

B.

On Wed, Oct 8, 2014 at 11:30 AM, Andreas Rossberg <rossberg at google.com> wrote:
> On 5 October 2014 17:51, Brian Di Palma <offler at gmail.com> wrote:
>> 1) I think you mean that a parser wants to fail quickly if it comes
>> across an identifier that was not previously declared as an import but
>> that could be declared as one deeper in the module file? The simple
>> solution to that problem is to not allow binding references before
>> their import statements.
>
> No, fail fast is not the issue, nor is the recursive nature of scoping
> in JavaScript.
>
> The issue is that, in order to know what identifiers are bound -- even
> in the simplest, linear case -- you need to maintain environments,
> i.e., do analysis and bookkeeping of declarations and scopes, both
> surrounding and local ones. So far, "pre-parsing" (the early
> approximate parse of lazily compiled functions, that only looks for
> early errors) didn't need anything like that. Adding it probably isn't
> overly expensive, but it's not free either, and nobody has measured
> the impact on large applications.
>
>> 99.9% of modules will declare their imports at the top of the file
>> anyway, most developers won't come across this restriction very often.
>> It doesn't take away functionality, it's a useless degree of freedom
>> and I can't think of any language where it's good practice to
>> import/require etc anywhere but at the head of a module/class.
>
> It's not just imports, you can forward reference any declaration in
> JavaScript. That's a feature that cannot easily be removed, and
> binding analysis has to deal with it. It's not a problem, though, only
> a mild complication.
>
>
>> 2) I didn't explain this part well. I meant for the "@global" import
>> to be a special one whose bindings will not go through the same checks
>> that other modules do. In a browser importing global would return
>> window.
>>
>> This should make it trivial to upgrade current code to ES6 modules
>> while still allowing much stricter checks in modules.
>> You would use tools that take the ES5 code and scan it for free
>> identifiers and import them from "@global".
>> This then allows you to gradually move them all into ES6 modules.
>>
>> import * as global, {Date, XMLHttpRequest} from "@global";
>>
>> All bindings provided by "@global" bypass the normal bindings checks.
>
> IIUC, your suggestion is that modules should be closed, i.e., not be
> able to refer to any non-local or pre-defined identifiers. This idea
> has a long history in works on module systems, but I'm not aware of
> any practical language that ever managed to follow it through. It
> quickly becomes very tedious in practice, and I highly doubt it would
> jive well with JavaScript, or probably any language (although Gilad
> Bracha disagrees on the latter).
>
> /Andreas