guest271314 (2019-05-25T22:02:13.000Z)
> If you come up with a benchmark for this, would you mind sharing the code
and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset, the presumptive answer would probably be the available RAM
and hard-disk space of the computer used to load the modules (and browser
preferences, policies, configuration). It is similar to asking the maximum
content-length of a file which can be fetched and downloaded, and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used; e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 where the result can vary
vastly when using devices having different available RAM and hard-disk
space.
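
A benchmark along those lines could be as simple as injecting N ```<script type="module">``` tags and recording the elapsed time and, where exposed, heap usage. A minimal sketch, untested here; the module URLs ./mod-0.js through ./mod-49.js are hypothetical placeholders, and performance.memory is non-standard (Chromium only):

```js
// Inject a single <script type="module"> tag and resolve when it loads.
const loadModuleTag = (url) =>
  new Promise((resolve, reject) => {
    const script = document.createElement("script");
    script.type = "module";
    script.src = url;
    script.onload = () => resolve(url);
    script.onerror = () => reject(new Error(`failed to load ${url}`));
    document.head.appendChild(script);
  });

// Load `count` module tags concurrently; log wall-clock time and, if exposed, heap use.
const run = async (count = 50) => {
  const start = performance.now();
  await Promise.all(
    Array.from({ length: count }, (_, i) => loadModuleTag(`./mod-${i}.js`))
  );
  const elapsed = performance.now() - start;
  const heap = performance.memory ? performance.memory.usedJSHeapSize : "n/a";
  console.log({ count, elapsed, heap });
};

run(50).catch(console.error);
```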

Again, have not tried babel or webpack for the purpose of loading modules.
Where possible, normally try to use only code that is shipped with FOSS
browsers when the code is being used in the browser.

On Sat, May 25, 2019 at 7:57 PM Wes Garland <wes at page.ca> wrote:

> If you come up with a benchmark for this, would you mind sharing the code
> and not just the results?  I'm curious how my stuff will fare.
>
> I'm in an environment where it is still not practical to use ES modules,
> and have started work again on BravoJS, which implements the old CommonJS
> Modules/2.0 strawman. (Same primordial development soup as AMD, but with a
> different set of flaws; favouring correctness/compatibility instead of
> brevity)
>
> How I've solved this problem historically is to have some smarts on the
> server side, that feeds (transitive) dependencies of a module at the same
> time as the module.  It will be interesting to play with that type of
> loader -- if possible -- once ES module loaders become
> well-defined/implemented/available.
>
> We also have to remember that loading via SCRIPT tags -- type=module or
> not -- is important to developers, so that we can load modules from CDNs
> and so on without cross-site security pain.
>
> Thanks,
> Wes
>
> On Sat, 25 May 2019 at 02:41, guest271314 <guest271314 at gmail.com> wrote:
>
>> Have not tried babel or webpack. You can write the test to answer your
>> own inquiry and post the result at a gist.
>>
>> On Sat, May 25, 2019 at 6:34 AM kai zhu <kaizhu256 at gmail.com> wrote:
>>
>>> Why would 50 separate ```<script type="module">``` tags be needed?
>>>
>>>
>>> because a reason for es-module's existence in the first place is to
>>> bring "large-scale modular development" to the browser.  i want to test
>>> that claim (hence this thread's subject-title).
>>>
>>> Have you tried the two described approaches and compared the result? How
>>> is "identically" determined?
>>>
>>>
>>> no i haven't, and i suspect nobody does in practice, because they all
>>> use babel/webpack.  identicality would be determined by if the app
>>> functions the same regardless whether you used a webpack-rollup or
>>> individual ```<script type="module">``` tags.
>>>
>>> -kai
>>>
>>>
>>>
>>> On 25 May 2019, at 01:20, guest271314 <guest271314 at gmail.com> wrote:
>>>
>>> > so async-loading 50 ```<script type="module">``` tags has equivalent
>>> side-effect
>>> as sync-loading single webpack-rollup (of same 50 modules)?
>>>
>>> Why would 50 separate ```<script type="module">``` tags be needed?
>>>
>>> > has anyone tried native async-loading large numbers (>10) of ```<script
>>> type="module">``` tags, and verify it resolves identically to using a
>>> single webpack-rollup?
>>>
>>> Have you tried the two described approaches and compared the result?
>>>
>>> How is "identically" determined?
>>>
>>> On Sat, May 25, 2019 at 6:12 AM kai zhu <kaizhu256 at gmail.com> wrote:
>>>
>>>> Asynchronous loading differs only in
>>>> that it takes more code to express the same logic and you have to take
>>>> into account concurrent requests (and you need to cache the request,
>>>> not the result), but it's otherwise the same from 1km away.
>>>>
>>>>
>>>> so async-loading 50 ```<script type="module">``` tags
>>>> has equivalent side-effect
>>>> as sync-loading single webpack-rollup (of same 50 modules)?
>>>>
>>>> i have nagging suspicion of doubts.  has anyone tried native
>>>> async-loading large numbers (>10) of
>>>> ```<script type="module">``` tags, and verify it resolves identically
>>>> to using a single webpack-rollup?
>>>>
>>>> again, i'm not that knowledgeable on es-modules, so above question may
>>>> be trivially true, and i'm just not aware.
>>>>
>>>> -kai
>>>>
>>>> On 24 May 2019, at 23:41, Isiah Meadows <isiahmeadows at gmail.com> wrote:
>>>>
>>>> There's two main reasons why it scales:
>>>>
>>>> 1. Modules are strongly encapsulated while minimizing global pollution.
>>>> 2. The resolution algorithm applies the same logic no matter how many
>>>> modules are loaded.
>>>>
>>>> It's much easier for it to scale when you write the code unaware of
>>>> how many modules you might be loading and unaware of how deep their
>>>> dependency graph is. Fewer assumptions here is key. It's an
>>>> engineering problem, but a relatively simple one.
>>>>
>>>> If you want a short example of how sync module resolution works, you
>>>> can take a look at this little utility I wrote:
>>>> https://github.com/isiahmeadows/simple-require-loader. That doesn't
>>>> asynchronously resolve modules, but it should help explain the process
>>>> from a synchronous standpoint. Asynchronous loading differs only in
>>>> that it takes more code to express the same logic and you have to take
>>>> into account concurrent requests (and you need to cache the request,
>>>> not the result), but it's otherwise the same from 1km away.
>>>>
>>>> -----
>>>>
>>>> Isiah Meadows
>>>> contact at isiahmeadows.com
>>>> www.isiahmeadows.com
>>>>
>>>> On Thu, May 23, 2019 at 10:49 AM kai zhu <kaizhu256 at gmail.com> wrote:
>>>>
>>>>
>>>> actually, i admit i don't know what i'm talking about.  just generally
>>>> confused (through ignorance) on how large-scale es-module dependencies
>>>> resolve when loaded/imported asynchronously.
>>>>
>>>> On Wed, May 22, 2019 at 10:42 PM Logan Smyth <loganfsmyth at gmail.com>
>>>> wrote:
>>>>
>>>>
>>>> Can you elaborate on what loading state you need to keep track of? What
>>>> is the bottleneck that you run into? Also to be sure, when you say
>>>> async-load, do you mean `import()`?
>>>>
>>>> On Wed, May 22, 2019, 20:17 kai zhu <kaizhu256 at gmail.com> wrote:
>>>>
>>>>
>>>> i don't use es-modules.
>>>> but with amd/requirejs, I start having trouble with
>>>> module-initializations in nodejs/browser at ~5 async modules (that may or
>>>> may not have circular-references).  10 would be hard, and 20 would be near
>>>> inhuman for me.
>>>>
>>>> can we say its somewhat impractical for most applications to load more
>>>> than 50 async modules (with some of them having circular-references)?  and
>>>> perhaps better design/spec module-loading mechanisms with this usability
>>>> concern in mind?
>>>>
>>>> p.s. its also impractical for me to async-load 5 or more modules
>>>> without using globalThis to keep track of each module's loading-state.
>
>
> --
> Wesley W. Garland
> Director, Product Development
> PageMail, Inc.
> +1 613 542 2787 x 102
>
guest271314 at gmail.com (2019-05-26T02:47:12.042Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested, compared, and benchmarked? The same code run on different computers, or different code run on the same computer? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset, one primary limiting factor as to "how many" would probably be the available RAM and hard-disk space of the computer used to load the modules (and browser preferences; cache policies; configuration) before the browser/OS freezes. It is similar to asking the maximum content-length of a file which can be fetched and downloaded, and the length of time such a procedure takes in the browser, which depends on the available RAM and hard-disk space of the computer(s) used (e.g., see How to solve Uncaught RangeError when download large size json https://stackoverflow.com/q/39959467 ; https://stackoverflow.com/questions/39959467#comment67315240_40017582), where the result can vary considerably when testing code on different computers, depending on the available RAM and hard-disk space of the computer (for example, comparing results on a minimal OS on a Raspberry Pi with 500 MB of available RAM against the same OS with 16 GB of RAM; or a non-minimal OS with multiple applications and the browser running compared to only the browser application running; including or not including, in the test results, the cost of loading the code which loads the async modules - if that code is not shipped with the browser; etc.). The tentative answer to the original question "how many async-modules can js-app practically load?" would be: until memory and available disk space run out.
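
One way to probe that tentative answer empirically is to keep dynamically importing modules until an error occurs. A rough sketch, not code from this thread; it assumes module URLs of the form ./mod-<n>.js exist and are served with a JavaScript MIME type:

```js
// Sequentially import() modules until an error (network, memory, or otherwise)
// stops the loop, then report how many loaded successfully.
const probeLimit = async (max = 1000) => {
  let loaded = 0;
  try {
    for (let i = 0; i < max; i++) {
      await import(`./mod-${i}.js`);
      loaded++;
    }
  } catch (error) {
    console.error(`stopped after ${loaded} modules`, error);
  }
  return loaded;
};

probeLimit().then((n) => console.log(`${n} modules loaded`));
```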

Tests of such code should also record the hardware and software used for the test, and the various tests should first be performed on the same computer, so each result can be compared locally against the other tests run on that same machine. Those results can then be compared across different architectures, browsers (browsers implement specifications and code differently; consider Why does Firefox produce larger WebM video files compared with Chrome? https://stackoverflow.com/q/54651869 "However, Firefox still ends up creating up to 4 times larger files compared with Chrome."; https://stackoverflow.com/a/54652768 "Because they don't use the same settings..."; https://gist.github.com/guest271314/f942fb9febd9c197bd3824973794ba3e ; https://gist.github.com/guest271314/17d62bf74a97d3f111aa25605a9cd1ca), OSes, and configurations to create a graph of the various combinations for comparative analysis.
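
To make such comparisons meaningful, each result could carry a record of the environment it was produced in. A small sketch, not from this thread; navigator.deviceMemory is non-standard and only exposed by some Chromium-based browsers, and the fields could be swapped for whatever metadata matters for a given test:

```js
// Collect a description of the test environment to attach to each result.
const environmentInfo = () => ({
  userAgent: navigator.userAgent,
  platform: navigator.platform,
  hardwareConcurrency: navigator.hardwareConcurrency,
  // deviceMemory is non-standard (Chromium); undefined elsewhere.
  deviceMemory: navigator.deviceMemory,
  timestamp: new Date().toISOString(),
});

console.log(JSON.stringify({ environment: environmentInfo(), results: [] }, null, 2));
```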

Online tests of code (e.g., jsPerf) can provide inaccurate results; see https://stackoverflow.com/questions/46516234#comment80198567_46623314 ; microbenchmarks fairy tale https://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.html (note also the list-item links at this answer https://stackoverflow.com/a/47173390 to Why is using a loop to iterate from start of array to end faster than iterating both start to end and end to start? https://stackoverflow.com/q/47049004); and Is TIO acceptable for fastest-code questions? https://codegolf.meta.stackexchange.com/q/12707 ; https://codegolf.meta.stackexchange.com/a/12708

> Unfortunately, I don't think TIO is a very reliable way of timing code, even if counting CPU time rather than wall time.
>
> First of all, TIO uses multiple arena servers, where all user-supplied code is run. At this point, there are only two arenas and both have identical specs, but even now, one of them is consistently and considerably slower than the other. I have no idea why this happens, but it does.
>
> Also – and this applies to all methods of timing code – two processes running at the same time can affect each other in subtle ways that will impact CPU time. E.g., speed of memory access may be affected, especially if the data should already be in the CPU's cache, but was pushed out by another process. Computers also tend to be measurably faster after a restart.
>
> Of course, running code on TIO is still better than having everyone time their code on their own machines. I wouldn't even consider the latter an objective winning criterion for a fastest-code challenge.

https://codegolf.meta.stackexchange.com/questions/12707#comment43256_12708

> Sorry, I meant having people time their code on their own machines. That's a fastest-computer challenge, not a fastest-code challenge. There's also no way of verifying the scores. 

Again, have not tried babel or webpack for the purpose of loading modules.
Where possible, try to use and test only code that is shipped with FOSS browsers, when the
code is being used in the FOSS browser.
space of the computer (for example, comparing results at a minimal OS at Raspberry Pi with 500MB of available RAM and the same OS having 16GB RAM; or a non-minimal OS with multiple applications running compared to only the browser running without multiple applications running, etc.).

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:48:16.718Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? The same code being used at different computers; or different code used at the same computer? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset one primary limiting factor as to "how many" would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 ; https://stackoverflow.com/questions/39959467#comment67315240_40017582) where the result can vary considerably when testing code on different computers, depending on available RAM and hard-disk
space of the computer (for example, comparing results at a minimal OS at Raspberry Pi with 500MB of available RAM and same OS having 16GB RAM).

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:42:31.622Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset one primary limiting factor as to "how many" would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 ; https://stackoverflow.com/questions/39959467#comment67315240_40017582) where the result can vary considerably when testing code on different computers, depending on available RAM and hard-disk
space of the computer.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:13:16.541Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 ; https://stackoverflow.com/questions/39959467#comment67315240_40017582) where the result can vary considerably when testing code on different computers, depending on available RAM and hard-disk
space of the computer.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:11:59.319Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467; https://stackoverflow.com/questions/39959467#comment67315240_40017582) where the result can vary
considerably when testing code on different computers, depending on available RAM and hard-disk
space of the computer.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:10:12.132Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result can vary
considerably when testing code on different computers, depending on available RAM and hard-disk
space of the computer.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:09:40.719Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be tested and compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result can vary
considerably when testing code on different devices, depending on available RAM and hard-disk
space of the device.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:08:51.333Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules (before the browser/OS freezes), similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result can vary
considerably when testing code on different devices, depending on available RAM and hard-disk
space of the device.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:07:24.464Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result can vary
considerably when testing code on different devices, depending on available RAM and hard-disk
space of the device.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:06:39.357Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result can vary
considerably when testing code on using devices depending on available RAM and hard-disk
space.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:05:15.047Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result is can be vary
considerably when testing code on using devices having different available RAM and hard-disk
space.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:04:34.082Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used (e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467) where the result is can be vary
vastly when using devices having different available RAM and hard-disk
space.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:03:58.536Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

From the outset the presumptive answer would probably be the available RAM and hard disk space of the computer used (and browser preferences, policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used; e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 where the result is can be vary
vastly when using devices having different available RAM and hard-disk
space.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.
guest271314 at gmail.com (2019-05-25T22:03:21.482Z)
> If you come up with a benchmark for this, would you mind sharing the code and not just the results?  I'm curious how my stuff will fare.

What specifically should be compared? What are you trying to determine?

The current question asks

> how many async-modules can js-app practically load?

which leads to asking how many async-modules are actually needed?

>From the outset the presumptive answer would probably be the available RAM

and hard disk space of the computer used (and browser preferences,
policies, configuration) to load the modules, similar to asking the maximum
content-length of a file which can be fetched and downloaded and the length
of time such a procedure takes in the browser, which depends on the
available RAM and hard-disk space of the computer(s) used; e.g., see How to
solve Uncaught RangeError when download large size json
 https://stackoverflow.com/q/39959467 where the result is can be vary
vastly when using devices having different available RAM and hard-disk
space.

Again, have not tried babel or webpack for the purpose of loading modules.
Normally try to use only code that is shipped with FOSS browsers, when the
code is being used in the browser, where possible.