Performance concern with let/const

# Luke Hoban (12 years ago)

We've begun deeper investigations of implementation practicalities related to let/const, and two significant performance concerns have been raised. I think these both merit re-opening discussion of two aspects of the let/const design.

Temporal dead zones

For reference on previous discussion of temporal dead zone see [1].

I've expressed concerns with the performance overhead required for temporal dead zones in the past, but we did not at the time have any data to point to regarding the scale of the concern.

As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'. In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown. That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.

However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use-before-assignment checks. We implemented these checks in a Chakra prototype, and even with these, we still see a ~5% slowdown.

Our belief is that any further removal of these dynamic checks (inter-procedural checks of accesses to closure captured let references) is a much more difficult proposition, if even possible in any reasonable percentage of cases.
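To illustrate the kind of pattern in question (a sketch; 'register' is a placeholder for arbitrary code that lets the closure escape):

{
    function g() { return x; }  // x is captured by the closure g
    register(g);                // g escapes; whether it runs before x is
                                // initialized is an inter-procedural
                                // question the engine cannot answer locally
    let x = 1;                  // TDZ for x ends here
}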

Unless we can be sure that the above perf hit can indeed be easily overcome, I'd like to re-recommend that temporal dead zones for let and const be removed from the ES6 specification. Both would remain block scoped bindings, but would be dynamically observable in 'undefined' state - including that 'const' would be observable as 'undefined' before single assignment.
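Concretely, the observable difference would be:

{
    console.log(x); // ES6 draft (TDZ): throws ReferenceError
                    // proposed semantics: logs undefined
    let x = 1;
}

{
    console.log(c); // ES6 draft (TDZ): throws ReferenceError
                    // proposed semantics: logs undefined
    const c = 2;
}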

In particular - the case against temporal dead zones is as follows:

  1. The value of temporal dead zones is to catch a class of programmer errors. This value is not overly significant (it's far from the most common error that lint-like tools catch today, or that affects large code bases in practice), and I do not believe the need/demand for runtime-enforced protection against this class of errors has been proven. This feature of let/const is not the primary motivation for either feature (block scoped binding, inlinability and errors on re-assignment to const are the motivating features).

  2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let')

  3. Unless the above performance hit can be overcome, and given #2 above, let will slow down the web by ~5%.

  4. Even if the above performance hit can be (mostly) overcome with net new engine performance work, that is performance work being forced on engine vendors simply to not make the web slower, and comes at the opportunity cost of actually working on making the web faster.

  5. We are fairly confident that it is not possible to fully remove the runtime overhead cost associated with temporal dead zones. That means that, as a rule, 'let' will be slower than 'var'. And possibly significantly slower in certain coding patterns. Even if that's only 1% slower, I don't think we're going to convince the world to use 'let' if its primary impact on their code is to make it slower. (The net value proposition for let simply isn't strong enough to justify this).

  6. The only time-proven implementation of let/const (SpiderMonkey) did not implement temporal dead zones. The impact of this feature on the practical performance of the web is not well enough understood relative to the value proposition of temporal dead zones.

__Early Errors__

Let and const introduce a few new early errors (though this general concern impacts several other areas of ES6 as well). Of particular note, assignment to const and re-declaration of 'let' are spec'd as early errors.

Assignment to const is meaningfully different than previous early errors, because detecting it requires binding references before any code runs. Chakra today parses the whole script input to report syntax errors, but avoids building and storing ASTs until function bodies are executed [2]. Since it is common for significant amounts of script on typical pages to be downloaded but not ever executed, this can save significant load time performance cost.

However, if scope chains and variable reference binding for all scopes in the file need to be established before any code executes, significantly more work is required during this load period. This work cannot be deferred (and potentially avoided entirely if the code is not called), because early errors must be identified before any code executes.

This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.
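For example, under the current draft something like the following must be rejected before any code executes, which forces reference binding for the function body during load:

function f() {   // f is downloaded but never called
    const c = 1;
    c = 2;       // early error: assignment to const; must be reported
                 // before any code runs, even though f never executes
}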

More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors. It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes. But more likely, it just argues for allowing any heavy static analysis to be postponed to late errors (or removed entirely and left to lint tools, if the raw overhead is particularly significant).

Luke

[1] esdiscuss/2011-August/016188
[2] blogs.msdn.com/b/ie/archive/2012/06/13/advances-in-javascript-performance-in-ie10-and-windows-8.aspx

# Andreas Rossberg (12 years ago)

On 17 September 2012 03:35, Luke Hoban <lukeh at microsoft.com> wrote:

Temporal dead zones

For reference on previous discussion of temporal dead zone see [1].

I've expressed concerns with the performance overhead required for temporal dead zones in the past, but we did not at the time have any data to point to regarding the scale of the concern.

As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'. In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown. That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.

Just to be clear, the V8 implementation of block scoping and 'let' has not seen any serious performance optimisation yet. So I'd take these numbers (which I haven't verified) with a large grain of salt.

Also, I conjecture that the main cost for 'let' isn't the temporal dead-zone, but block allocation. In particular, a 'let' in a loop incurs significant extra cost if no static analysis is performed that would allow hoisting the allocation out of the loop.
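For example (a sketch, where n, compute and use stand for arbitrary code), each iteration conceptually creates a fresh binding for x, which costs real allocation unless the engine can prove hoisting safe:

for (var i = 0; i < n; i++) {
    let x = compute(i);  // a new block-scoped binding on every
    use(x);              // iteration, unless static analysis hoists
                         // the allocation out of the loop
}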

However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use-before-assignment checks. We implemented these checks in a Chakra prototype, and even with these, we still see a ~5% slowdown.

I would like to understand this better. AFAICT, you don't necessarily need much static analysis. In most cases (accesses from within the same closure) the check can be trivially omitted. In addition, once you optimise based on type feedback it is also trivial to discover that the check is no longer needed in other situations. I would expect those scenarios to cover most performance-relevant programming patterns quite well.

__Early Errors__

Let and const introduce a few new early errors (though this general concern impacts several other areas of ES6 as well). Of particular note, assignment to const and re-declaration of 'let' are spec'd as early errors.

Assignment to const is meaningfully different than previous early errors, because detecting it requires binding references before any code runs. Chakra today parses the whole script input to report syntax errors, but avoids building and storing ASTs until function bodies are executed [2]. Since it is common for significant amounts of script on typical pages to be downloaded but not ever executed, this can save significant load time performance cost.

However, if scope chains and variable reference binding for all scopes in the file need to be established before any code executes, significantly more work is required during this load period. This work cannot be deferred (and potentially avoided entirely if the code is not called), because early errors must be identified before any code executes.

This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.

This is indeed a concern. However, I don't think 'const' is the only problem, other ES6 features (such as modules) will probably introduce similar classes of early errors.

More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors. It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes.

I agree that this is a worthwhile possibility to consider. I mentioned this idea to Dave once and he didn't like it much, but maybe we should have a discussion.

# Allen Wirfs-Brock (12 years ago)

some comments below

On Sep 16, 2012, at 9:35 PM, Luke Hoban wrote:

We've begun deeper investigations of implementation practicalities related to let/const, and two significant performance concerns have been raised. I think these both merit re-opening discussion of two aspects of the let/const design.

Temporal dead zones

For reference on previous discussion of temporal dead zone see [1].

I've expressed concerns with the performance overhead required for temporal dead zones in the past, but we did not at the time have any data to point to regarding the scale of the concern.

As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'. In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown. That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.

Without evaluating the quality of the Chrome implementation, this isn't a meaningful observation. As the Chrome implementers have stated that they have not done any optimization, this actually becomes a misleading statement. You really should just strike this assertion from your argument and start with your actual experiments as the evidence to support your position.

However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use-before-assignment checks. We implemented these checks in a Chakra prototype, and even with these, we still see a ~5% slowdown.

Our belief is that any further removal of these dynamic checks (inter-procedural checks of accesses to closure captured let references) is a much more difficult proposition, if even possible in any reasonable percentage of cases.

To understand the general applicability of these results we need to know what specific static optimizations you performed and evaluate that against the list of plausible optimizations. I'd also like to understand whether the specific coding patterns within the test program could not be statically optimized.

It's great to use experimental results to support your point, but it needs to be possible to independently validate and analyze those results.

Unless we can be sure that the above perf hit can indeed be easily overcome, I'd like to re-recommend that temporal dead zones for let and const be removed from the ES6 specification. Both would remain block scoped bindings, but would be dynamically observable in 'undefined' state - including that 'const' would be observable as 'undefined' before single assignment.

We really don't have enough evidence to come to that conclusion.

In particular - the case against temporal dead zones is as follows:

  1. The value of temporal dead zones is to catch a class of programmer errors. This value is not overly significant (it's far from the most common error that lint-like tools catch today, or that affects large code bases in practice), and I do not believe the need/demand for runtime-enforced protection against this class of errors has been proven. This feature of let/const is not the primary motivation for either feature (block scoped binding, inlinability and errors on re-assignment to const are the motivating features).

As far as I'm concerned the motivating feature for TDZs is to provide a rational semantics for const. There was significant technical discussion of that topic and TDZs emerged as the best solution. An alternative argument you could make would be to eliminate const. Is there a reason you aren't making that argument?

  2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let').

There is actually some disagreement about that statement of the goal. The goal of let is to provide variables that are scoped to the block level. That is the significant new semantics that is being added. The slogan-ism isn't the goal.

As stated above, let isn't the motivator for TDZ, it's const. Let could easily be redefined to not need a TDZ (if that really proved to be a major area of concern). So, you either need to argue against const or argue against block scoping in general, rather than let.

  3. Unless the above performance hit can be overcome, and given #2 above, let will slow down the web by ~5%.

As covered above, this is a bogus assertion without data to support it.

  4. Even if the above performance hit can be (mostly) overcome with net new engine performance work, that is performance work being forced on engine vendors simply to not make the web slower, and comes at the opportunity cost of actually working on making the web faster.

Again, isn't this really a question about the value of const?

  5. We are fairly confident that it is not possible to fully remove the runtime overhead cost associated with temporal dead zones. That means that, as a rule, 'let' will be slower than 'var'. And possibly significantly slower in certain coding patterns. Even if that's only 1% slower, I don't think we're going to convince the world to use 'let' if its primary impact on their code is to make it slower. (The net value proposition for let simply isn't strong enough to justify this).

I think you can fairly easily prove that there are use cases where the TDZ cannot be statically eliminated. But that does not mean that, on average and for a typical program, "let will be slower than var".

Again, you might be better served to argue that block-level lexical scoping is slower than single-contour function-level scoping. However, that's a forty-year-old argument that really shouldn't have to be reopened.

  6. The only time-proven implementation of let/const (SpiderMonkey) did not implement temporal dead zones. The impact of this feature on the practical performance of the web is not well enough understood relative to the value proposition of temporal dead zones.

And it has a pretty bogus semantics for const. But generally this isn't relevant as FF let/const cannot be interoperably used on the web, so it doesn't tell us much about the practical perf of the web.

__Early Errors__

Let and const introduce a few new early errors (though this general concern impacts several other areas of ES6 as well). Of particular note, assignment to const and re-declaration of 'let' are spec'd as early errors.

Assignment to const is meaningfully different than previous early errors, because detecting it requires binding references before any code runs. Chakra today parses the whole script input to report syntax errors, but avoids building and storing ASTs until function bodies are executed [2]. Since it is common for significant amounts of script on typical pages to be downloaded but not ever executed, this can save significant load time performance cost.

However, if scope chains and variable reference binding for all scopes in the file need to be established before any code executes, significantly more work is required during this load period. This work cannot be deferred (and potentially avoided entirely if the code is not called), because early errors must be identified before any code executes.

This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.

More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors. It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes. But more likely, it just argues for allowing any heavy static analysis to be postponed to late errors (or removed entirely and left to lint tools, if the raw overhead is particularly significant).

I think you may have a stronger point, if this is really the actual basis of your concern. Perhaps there is a call for a 3rd category of errors that can be deferred beyond the initial parse. However, I think const is probably the least of your concerns in this regard. It would be useful to dig deeper into which other semantics under consideration impact deferred AST construction.

# Luke Hoban (12 years ago)

From: Andreas Rossberg [mailto:rossberg at google.com]

On 17 September 2012 03:35, Luke Hoban <lukeh at microsoft.com> wrote:

Temporal dead zones

As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'. In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown. That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.

Just to be clear, the V8 implementation of block scoping and 'let' has not seen any serious performance optimisation yet. So I'd take these numbers (which I haven't verified) with a large grain of salt.

Yes - sorry I didn't make this more clear. This baseline was relevant mostly because it motivated trying to gather data with some of the key optimizations implemented.

Also, I conjecture that the main cost for 'let' isn't the temporal dead-zone, but block allocation. In particular, a 'let' in a loop incurs significant extra cost if no static analysis is performed that would allow hoisting the allocation out of the loop.

That may well be another significant perf concern. For early-boyer in particular, I believe the structure of the code ensures that this particular issue will not come into play - the code largely hoists variable declarations to the top of function scope.

However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use-before-assignment checks. We implemented these checks in a Chakra prototype, and even with these, we still see a ~5% slowdown.

I would like to understand this better. AFAICT, you don't necessarily need much static analysis. In most cases (accesses from within the same closure) the check can be trivially omitted.

Yes - this is what we implemented in the Chakra prototype. Though note that the 'switch' issue raised on the list recently leads to cases even within a closure body where static analysis is insufficient - such as this (though I would guess this case won't be the primary perf culprit):

function(x) {
    do {
        switch (x) {
            case 0:
                return x;   // always a runtime error
            case 1:
                let x;
                x = 'let';  // never a runtime error
            case 2:
                return x;   // sometimes a runtime error
        }
    } while (foo());
}

__Early Errors__

This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.

This is indeed a concern. However, I don't think 'const' is the only problem, other ES6 features (such as modules) will probably introduce similar classes of early errors.

Agreed - this concern is broader than 'let'/'const'.

More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors. It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes.

I agree that this is a worthwhile possibility to consider. I mentioned this idea to Dave once and he didn't like it much, but maybe we should have a discussion.

I understand the potential concern with this sort of thing - it weakens the upfront feedback from early errors. But I'm not sure JavaScript developers really want to pay a runtime performance cost in return for more up-front static analysis during page load.

Luke

# François REMY (12 years ago)

(Just one opinion)

I'm all in favor of function-level parse errors. This reminds me of an article by Ian Hickson where he wondered why, in contrast to CSS, the ECMAScript language didn't define a generic syntax for a well-formed program (tokens, parenthesis/bracket balance, ...) that would replace any block it didn't understand with a { throw ParseError() } block.

function A() {
    ooops {
        return 3;
    }
}

would be the same as

function A() {
    throw new ParseError("...");
}

I'm not saying we need to go that far (in fact, ASI probably makes it impossible), but any progress toward more modularity and lazy compilation is worth taking.

# Brendan Eich (12 years ago)

Agree with your points in reply to Luke, one clarification here:

Allen Wirfs-Brock wrote:

As stated above, let isn't the motivator for TDZ, it's const. Let could easily be redefined to not need a TDZ (if that really proved to be a major area of concern). So, you either need to argue against const or argue against block scoping in general, rather than let.

TC39 has been divided on this but managed to reach TDZ consensus. Waldemar argued explicitly for TDZ for let, as (a) future-proofing for guards; (b) to enable let/const refactoring without surprises.

One could argue that (a) can be deferred to let-with-guards, should we add guards. (b) I find more compelling.
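To illustrate (b): with TDZ on both forms, use before initialization behaves identically for let and const, so switching between them is behavior-preserving:

{
    console.log(x); // with TDZ: throws for let and const alike, so the
                    // refactoring below is safe. If only const had a TDZ,
                    // changing let to const here would silently turn
                    // 'undefined' into an error.
    let x = 1;      // or, after refactoring: const x = 1;
}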

# Luke Hoban (12 years ago)

From: Allen Wirfs-Brock [mailto:allen at wirfs-brock.com]

On Sep 16, 2012, at 9:35 PM, Luke Hoban wrote:

As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'. In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown. That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.

Without evaluating the quality of the Chrome implementation, this isn't a meaningful observation. As the Chrome implements have stated they have not done any optimization, this actually becomes a misleading statement. You really should just strike this assertion from your argument and start with your actual experiments as the evidence to support your position.

Yes - this was definitely not a significant aspect of the argument - it was just the initial datapoint which motivated us to do a deeper performance investigation.

However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use-before-assignment checks. We implemented these checks in a Chakra prototype, and even with these, we still see a ~5% slowdown.

Our belief is that any further removal of these dynamic checks (inter-procedural checks of accesses to closure captured let references) is a much more difficult proposition, if even possible in any reasonable percentage of cases.

To understand the general applicability of these results we need to know what specific static optimizations you performed and evaluate that against the list of plausible optimizations. I'd also like to understand whether the specific coding patterns within the test program could not be statically optimized.

These are good questions. Paul will be attending the TC39 meeting this week, and can likely talk to specific details. High level though, we statically eliminate the TDZ checks for references to 'let' within the same closure body as the declaration.

Unless we can be sure that the above perf hit can indeed be easily overcome, I'd like to re-recommend that temporal dead zones for let and const be removed from the ES6 specification. Both would remain block scoped bindings, but would be dynamically observable in 'undefined' state - including that 'const' would be observable as 'undefined' before single assignment.

We really don't have enough evidence to come to that conclusion.

I'm not as sure. I'm not convinced we have evidence that TDZ is actually demanded by developers. I'm more convinced that we have evidence that TDZ makes let strictly slower than var. The only question seems to be how much, and whether this is significant enough to counterbalance the perceived developer demand for TDZ.

In particular - the case against temporal dead zones is as follows:

  1. The value of temporal dead zones is to catch a class of programmer errors. This value is not overly significant (it's far from the most common error that lint-like tools catch today, or that affects large code bases in practice), and I do not believe the need/demand for runtime-enforced protection against this class of errors has been proven. This feature of let/const is not the primary motivation for either feature (block scoped binding, inlinability and errors on re-assignment to const are the motivating features).

As far as I'm concerned the motivating feature for TDZs is to provide a rational semantics for const. There was significant technical discussion of that topic and TDZs emerged as the best solution. An alternative argument you could make would be to eliminate const. Is there a reason you aren't making that argument?

I'm not as convinced that a const which is undefined until singly assigned is "irrational". When combined with a 'let' which can be observed as 'undefined', I believe developers would understand this semantic. The optimization opportunity for 'const' remains the same - it can be inlined whenever the same static analysis needed to avoid TDZ checks would apply.
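As a sketch of the semantics I have in mind:

{
    console.log(c); // proposed: logs undefined rather than throwing
    const c = 1;    // single assignment; re-assignment remains an error
    console.log(c); // logs 1; c can still be inlined wherever static
                    // analysis shows it is initialized first
}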

I am not arguing for eliminating const because 'const' at least has a potential performance upside and thus can motivate developer usage. I am honestly more inclined to argue for eliminating 'let' if it ends up having an appreciable performance cost over 'var', as its upside value proposition is not strong.

  2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let').

There is actually some disagreement about that statement of the goal. The goal of let is to provide variables that are scoped to the block level. That is the significant new semantics that is being added. The slogan-ism isn't the goal.

This strikes at a critical piece of the discussion around 'let'. Adding a new fundamental block scoped binding form ('let') has a very significant conceptual cost to the language. If it is not the expectation of the committee that new code will nearly universally adopt 'let' instead of 'var', and that books will be able to state 'use let instead of var', then I think that brings into question whether 'let' is still passing the cost/value tradeoff. This tradeoff gets increasingly weaker as additional performance overheads are entered into the cost bucket.

  3. Unless the above performance hit can be overcome, and given #2 above, let will slow down the web by ~5%.

As covered above, this is a bogus assertion without data to support it.

I have to push back on this a bit. We of course don't have shipping, fully-optimizing, implementations of let/const yet. But we are doing early prototyping, and contributing input based on the performance investigations we can do so far. The best data we have so far, even after a pass of significant optimizations targeted at eliminating TDZ overhead, shows a significant remaining cost. That said, it is reasonable to expect we can find further optimization opportunities, so you are right that it's too early to stick a precise number on this.

  4. Even if the above performance hit can be (mostly) overcome with net new engine performance work, that is performance work being forced on engine vendors simply to not make the web slower, and comes at the opportunity cost of actually working on making the web faster.

Again, isn't this really a question about the value of const?

I would have seen it as the opposite - 'const' actually enables some new optimizations relative to existing web content.

  5. We are fairly confident that it is not possible to fully remove the runtime overhead cost associated with temporal dead zones. That means that, as a rule, 'let' will be slower than 'var'. And possibly significantly slower in certain coding patterns. Even if that's only 1% slower, I don't think we're going to convince the world to use 'let' if its primary impact on their code is to make it slower. (The net value proposition for let simply isn't strong enough to justify this).

I think you can fairly easily prove that there are use cases where the TDZ cannot be statically eliminated. But that does not mean that, on average and for a typical program, "let will be slower than var".

I don't understand this argument. I believe it exactly means that 'let will be slower than var' in the aggregate for a typical program. 'let' is certainly not going to be faster than 'var' in any case, so if there are any cases at all where TDZ checks cannot be removed, 'let' is in aggregate slower than 'var'. I don't think we can (or should!) expect developers to have to think about what patterns cause TDZ checks to be unavoidable, and use var in those cases instead. So I'm concerned about the ultimate message to developers being 'let makes your code slower'. I agree that the important question is about the magnitude of this aggregate overhead cost, which is what we have been trying to gather concrete data on.

Again, you might be better served to argue that block-level lexical scoping is slower than single-contour function-level scoping. However, that's a forty-year-old argument that really shouldn't have to be reopened.

I don't think there is a need to claim this.

  6. The only time-proven implementation of let/const (SpiderMonkey) did not implement temporal dead zones. The impact of this feature on the practical performance of the web is not well enough understood relative to the value proposition of temporal dead zones.

And it has a pretty bogus semantics for const. But generally this isn't relevant as FF let/const cannot be interoperably used on the web, so it doesn't tell us much about the practical perf of the web.

A primary motivation for let/const has been the 'defacto standard' line of reasoning based on existing experience and usage. Can we really then turn around and say that we can't rely on any of the experience we have with the existing features because they have bogus semantics?

__Early Errors__

Let and const introduce a few new early errors (though this general concern impacts several other areas of ES6 as well). Of particular note, assignment to const and re-declaration of 'let' are spec'd as early errors.

Assignment to const is meaningfully different than previous early errors, because detecting it requires binding references before any code runs. Chakra today parses the whole script input to report syntax errors, but avoids building and storing ASTs until function bodies are executed [2]. Since it is common for significant amounts of script on typical pages to be downloaded but not ever executed, this can save significant load time performance cost.

However, if scope chains and variable reference binding for all scopes in the file need to be established before any code executes, significantly more work is required during this load period. This work cannot be deferred (and potentially avoided entirely if the code is not called), because early errors must be identified before any code executes.

This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.

More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors. It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes. But more likely, it just argues for allowing any heavy static analysis to be postponed to late errors (or removed entirely and left to lint tools, if the raw overhead is particularly significant).

I think you may have a stronger point, if this is really the actual basis of your concern. Perhaps there is a call for a 3rd category of errors that can be deferred beyond the initial parse. However, I think const is probably the least of your concerns in this regard. It would be useful to dig deeper into which other semantics under consideration impact deferred AST construction.

This is indeed a separate concern from the TDZ issue, and both are things that I believe are concerning from an overall performance impact perspective. But you are right - this issue is much broader than const/let, and relates to a whole class of new static checks being added in ES6 and the potential load time cost they incur.

I'll pull together a list of the concerning new checks in the current ES6 drafts for discussion at this week's meeting. As Andreas noted, I expect modules will bring another significant batch of these which is not yet in the spec.

Luke

# Luke Hoban (12 years ago)

From: Brendan Eich [mailto:brendan at mozilla.org]

Allen Wirfs-Brock wrote:

As stated above, let isn't the motivator for TDZ, it's const. Let could easily be redefined to not need a TDZ (if that really proved to be a major area of concern). So, you either need to argue against const or argue against block scoping in general, rather than let.

TC39 has been divided on this but managed to reach TDZ consensus. Waldemar argued explicitly for TDZ for let, as (a) future-proofing for guards; (b) to enable let/const refactoring without surprises.

One could argue that (a) can be deferred to let-with-guards, should we add guards. (b) I find more compelling.

That's right - I referenced the original mail with the detailed writeup of the discussion leading to those decisions and the consensus you note. At the time, I raised concerns about performance overhead of TDZ, and I continue to believe it's important to weigh those performance concerns significantly in the discussion about TDZ.

My actual proposal is to remove TDZ for both 'let' and 'const', which addresses the refactoring concern. But it leads to 'const' being observable as undefined, which I expect is the more controversial aspect (though I'm not personally convinced this is a particularly significant practical concern).

Luke

# Allen Wirfs-Brock (12 years ago)

On Sep 17, 2012, at 12:37 PM, Luke Hoban wrote:

These are good questions. Paul will be attending the TC39 meeting this week, and can likely talk to specific details. High level though, we statically eliminate the TDZ checks for references to 'let' within the same closure body as the declaration.

The other major check that I would expect to be significant is whether an inner function that references an outer TDZ binding is (potentially) called before initialization of the binding. E.g.:

{
    function f() { return x }
    f();        // TDZ check of x in f can not be eliminated
    let x = 1;
}

{
    function f() { return x }
    let x = 1;  // TDZ check of x in f should be eliminated
    f();
}

# Domenic Denicola (12 years ago)
  2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let').

There is actually some disagreement about that statement of the goal. The goal of let is to provide variables that are scoped to the block level. That is the significant new semantics that is being added. The slogan-ism isn't the goal.

This strikes at a critical piece of the discussion around 'let'. Adding a new fundamental block scoped binding form ('let') has a very significant conceptual cost to the language. If it is not the expectation of the committee that new code will nearly universally adopt 'let' instead of 'var', and that books will be able to state 'use let instead of var', then I think that brings into question whether 'let' is still passing the cost/value tradeoff. This tradeoff gets increasingly weaker as additional performance overheads are entered into the cost bucket.

To provide an (admittedly single) developer perspective: let/const are attractive because they bring us closer to eliminating the confusion inherent in hoisting and achieving the same semantics as C-family languages. Although it seems that disallowing use before declaration is not possible, hoisting to block level plus TDZ checks for the intermediate code gives a reasonable approximation, at least assuming I've understood the proposals and email threads correctly.
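For example, the classic var-hoisting surprise that block scoping plus TDZ avoids:

function f() {
    console.log(v); // undefined: the var declaration is hoisted, which
                    // regularly confuses people
    var v = 1;

    console.log(w); // with TDZ: ReferenceError, much closer to the
                    // C-family 'use before declaration' behavior
    let w = 2;
}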

There are also a number of auxiliary benefits like the fresh per-loop binding and of course const optimizations/safeguards (which eliminate the need for a dummy object with non-writable properties to store one's constants).
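That is, instead of the ES5 workaround we use today, const expresses the intent directly (MAX_RETRIES is just an example name):

// ES5 workaround: a dummy object with non-writable properties
var constants = Object.freeze({ MAX_RETRIES: 3 });

// ES6: a real constant, no wrapper object needed
const MAX_RETRIES = 3;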

Personally, in the grand scheme of things, even a 5% loss of speed is unimportant to our code when weighed against the value of the saner semantics proposed. We would immediately replace all vars with let/const if we were able to program toward this Chakra prototype (e.g. for our Windows 8 app).

I am almost hesitant to bring up such an obvious argument, but worrying about this level of optimization seems foolhardy in the face of expensive DOM manipulation or async operations. Nobody worries that their raw JS code will run 5% slower because people are using Chrome N-1 instead of Chrome N. Such small performance fluctuations are a fact of life even with ES5 coding patterns (e.g. arguments access, getters/setters, try/catch, creating a closure without manually hoisting it to the outermost applicable level, using array extras instead of for loops, …). If developers actually need to optimize at a 5% level solely on their JS they should probably consider LLJS or similar.

That said I do understand that a slowdown could make the marketing story harder as not everyone subscribes to my views on the speed/clarity tradeoff.

# Andreas Rossberg (12 years ago)

On 17 September 2012 18:37, Luke Hoban <lukeh at microsoft.com> wrote:

'let' is certainly not going to be faster than 'var' in any case

There is at least one very important counterexample to that claim: the global scope. Assuming lexical global scope (as we tentatively agreed upon at the last meeting), using 'let' in global scope will be significantly faster than 'var', without even requiring any cleverness from the VM.
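Roughly, the distinction is:

var a = 1; // becomes a property of the global object; reads and writes
           // may have to go through generic property lookup
let b = 2; // lexical global binding; the VM can compile accesses to a
           // fixed slot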

# Andreas Rossberg (12 years ago)

On 17 September 2012 19:51, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Sep 17, 2012, at 12:37 PM, Luke Hoban wrote:

These are good questions. Paul will be attending the TC39 meeting this week, and can likely talk to specific details. High level though, we statically eliminate the TDZ checks for references to 'let' within the same closure body as the declaration.

The other major check that I would expect to be significant is whether an inner function that references an outer TDZ binding is (potentially) called before initialization of the binding. E.g.:

{
    function f() { return x }
    f();        // TDZ check of x in f can not be eliminated
    let x = 1;
}

{
    function f() { return x }
    let x = 1;  // TDZ check of x in f should be eliminated
    f();
}

Unfortunately, detecting this case in general requires significant static analysis, since f might be called indirectly through other functions (even ignoring the case where f is used in a first-class manner).

# Allen Wirfs-Brock (12 years ago)

On Sep 18, 2012, at 7:27 AM, Andreas Rossberg wrote:

On 17 September 2012 19:51, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

On Sep 17, 2012, at 12:37 PM, Luke Hoban wrote:

These are good questions. Paul will be attending the TC39 meeting this week, and can likely talk to specific details. High level though, we statically eliminate the TDZ checks for references to 'let' within the same closure body as the declaration.

The other major check that I would expect to be significant is whether an inner function that references an outer TDZ binding is (potentially) called before initialization of the binding. E.g.:

{
    function f() { return x }
    f();        // TDZ check of x in f can not be eliminated
    let x = 1;
}

{
    function f() { return x }
    let x = 1;  // TDZ check of x in f should be eliminated
    f();
}

Unfortunately, detecting this case in general requires significant static analysis, since f might be called indirectly through other functions (even ignoring the case where f is used in a first-class manner).

Yes, but there are fairly simple heuristics that approximate that result. For example: if no function calls dominate the initialization of x, then TDZ checks will never need to be made for x.
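E.g., in the following, no call of any kind precedes the initialization of x, so no closure can observe x uninitialized and every TDZ check for x can be dropped:

{
    let x = 1;                  // no function call precedes (let alone
    let y = x + 1;              // dominates) this initialization...
    function g() { return x; }
    g();                        // ...so g can only ever observe x
                                // initialized
}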

# Andreas Rossberg (12 years ago)

On 18 September 2012 13:41, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:

Yes, but there are fairly simple heuristics that approximate that result. For example: if no function calls dominate the initialization of x, then TDZ checks will never need to be made for x.

Yes, except that in JS, a function call can hide behind so many seemingly innocent operations...
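E.g. even an innocent-looking addition can invoke a user-defined valueOf or toString (register, a and b below are placeholders for arbitrary code):

{
    function f() { return x; } // f closes over x
    register(f);               // f escapes
    let y = a + b;             // if a or b is an object, '+' may call
                               // valueOf/toString, which could call f
                               // while x is still uninitialized
    let x = 1;
}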