const vs. feature detection
On Thu, Dec 19, 2013 at 3:03 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
It seems that I need to create N amount of garbage by design.
This does not work, the const has already been defined:
try { new Proxy({},{}); const ES6_PROXY = true; } catch(o_O) { const ES6_PROXY = false; }
That doesn't work anyway, not because the const has already been defined, but because ES6_PROXY is defined within block bodies. Same as:
{
const IS_BOUND_TO_THE_BLOCK = true;
}
This does not work either:
try { new Proxy({},{}); var ES6_PROXY = true; } catch(o_O) { var ES6_PROXY = false; } const ES6_PROXY = false; // var 'ES6_PROXY' has already been declared
Because the var was hoisted up to the const's scope and const can't be used to redeclare an existing binding of the same name. Is ES6_PROXY meant to be bound in the global scope?
neither does the following
try { new Proxy({},{}); let ES6_PROXY = true; } catch(o_O) { let ES6_PROXY = false; } // Illegal let declaration outside extended mode
That's a Canary-specific error, but the code wouldn't do what you want anyway, for the same reason as the first example.
In summary, there is no way to feature-detect and define a const in the same scope; a closure is mandatory, and the constant cannot be defined inside that closure either.
const ES6_PROXY = function(){ try { new Proxy({},{}); return true; } catch(o_O) { return false; } }();
This works fine:
var result = true;
try {
new Proxy({},{});
} catch(o_O) {
result = false;
}
const ES6_PROXY = result;
Also, if you want to experiment with the closest-to-spec-so-far let/const behavior, use IE11.
Rick thanks but I wasn't strictly asking for solutions because I have one, I was rather pointing at the fact that there is no solution and by design we need to create garbage.
Your last example speaks for itself ... why do we need to define another variable in that scope? That is annoying, imho ... I don't want to define a tmp-like variable for each const I'd like to address down the road, you know what I mean?
I cannot even drop that var, so that's a potential leak in the global scope/context if not loaded through modules, while I might want to define that constant globally (and maybe name-spaced, but that's not the issue here).
IE11 ... I don't have it with me now, would this work nicely ? I think no :-(
let ES6_PROXY = true;
try {
new Proxy({},{});
} catch(o_O) {
ES6_PROXY = false;
}
const ES6_PROXY = ES6_PROXY;
So consts are not as simple and straightforward to define as they are in C or other languages, because they have been defined on top of var hoisting behavior.
#ifdef WHATEVER
static int const NAME = 1;
#else
static int const NAME = 0;
#endif
Thoughts?
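The hoisting asymmetry Andrea is describing can be seen directly in a few lines; this is a minimal sketch (the names A and B are illustrative):

```javascript
'use strict';
// `var` hoists to function/global scope, so a var declared inside a
// try block is visible after it; a `const` stays confined to its block.
try { var A = 1; } catch (_) {}
console.log(A); // 1 — the var escaped the try block

{ const B = 2; }
try {
  console.log(B); // throws: B is block-scoped
} catch (e) {
  console.log(e instanceof ReferenceError); // true — B did not escape
}
```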
On Thu, Dec 19, 2013 at 6:18 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
Rick thanks but I wasn't strictly asking for solutions because I have one, I was rather pointing at the fact that there is no solution and by design we need to create garbage.
Your last example speaks for itself ... why do we need to define another variable in that scope? That is annoying, imho ... I don't want to define a tmp-like variable for each const I'd like to address down the road, you know what I mean? I cannot even drop that var, so that's a potential leak in the global scope/context if not loaded through modules, while I might want to define that constant globally (and maybe name-spaced, but that's not the issue here).
IE11 ... I don't have it with me now, would this work nicely ? I think no :-(
let ES6_PROXY = true; try { new Proxy({},{}); } catch(o_O) { ES6_PROXY = false; } const ES6_PROXY = ES6_PROXY;
It doesn't matter if I ran this in IE11 today or Firefox/Chrome/whatever when those implementations are updated: let and const bindings don't allow redeclaration of bindings that already exist in that scope.
So consts are not as simple and straightforward to define as they are in C or other languages, because they have been defined on top of var hoisting behavior.

#ifdef WHATEVER
static int const NAME = 1;
#else
static int const NAME = 0;
#endif
Thoughts?
It's an invalid comparison, unless you're saying you want ifdefs in JS. This is an apples-to-apples comparison:
C:
if (1) {
static int const A_VAL = 1;
}
printf("%d", A_VAL);
// error: use of undeclared identifier 'A_VAL'
JS:
if (1) {
const A_VAL = 1;
}
console.log(A_VAL);
// 'A_VAL' is undefined
It's not invalid, it's what I am talking about.
There is no way to conditionally define constants except through an inline ternary or a value returned from a closure; otherwise, by design, a second variable (garbage, pointless hoisting pollution) is mandatory.
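The two workarounds mentioned here can be sketched as follows (the constant names are illustrative, not from the thread):

```javascript
'use strict';
// Workaround 1: an inline ternary/expression — only works when the
// detection is itself an expression, so it cannot catch a throwing call.
const HAS_PROXY_LOOSE = typeof Proxy !== 'undefined';

// Workaround 2: a value returned from a closure — can wrap a try/catch,
// and no temporary variable leaks into the enclosing scope.
const HAS_PROXY_STRICT = (function () {
  try { new Proxy({}, {}); return true; } catch (_) { return false; }
}());

console.log(HAS_PROXY_LOOSE, HAS_PROXY_STRICT);
```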
The only way to avoid this is eval ... it looks dirty, but it works "as I would like to" ... as any other C-like language I know would do with constants, without the old var hoisting problem.
try {
new Proxy({},{});
eval('const ES6_PROXY=true');
} catch(nope) {
eval('const ES6_PROXY=false');
}
console.log(ES6_PROXY);
Does this make sense?
quick recap:
why is this not possible, given the ability to check through typeof whether there is a value or not?
// defined as const
// reserved in this scope
// but not assigned yet
const WHATEVER;
if (condition) {
// first come, first serves
WHATEVER = 123;
// that's it! const defined for the whole scope
// immutable from now on
} else {
WHATEVER = 456;
}
console.log(WHATEVER);
// should throw as it is now
var WHATEVER;
let WHATEVER;
function WHATEVER(){}
It seems to me this might be a desired behavior. Any chance this will happen? NO is a valid answer, thanks.
Andrea Giammarchi wrote:
why is this not possible, given the ability to check through typeof whether there is a value or not?
// defined as const
// reserved in this scope
// but not assigned yet
const WHATEVER;
if (condition) {
// first come, first serves
WHATEVER = 123;
// that's it! const defined for the whole scope
// immutable from now on
} else {
WHATEVER = 456;
}
Past JS2/ES4 designs have allowed this, but it requires definite assignment analysis and use-before-defining-assignment error checking.
In general, such checks can't be static in JS, so the language and VM complexity blow up a bit with runtime checking for an "uninitialized" (not same as undefined) sentinel value that must be guarded against by a read barrier where it can't be proven unnecessary.
This is pretty obnoxious for implementors, not great for users either (did I declare const IMPORTANT; and forget to assign IMPORTANT= in some branch of control flow that my tests miss?).
It's not in Harmony. We require an initialiser as part of the const declaration syntax. What you are doing here, by many measures, is varying a variable from its default (undefined) value to a new value.
If you want that variable to stop varying after, and you need it as a global (window) object property anyway, use Object.defineProperty to make it non-writable.
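A minimal sketch of that suggestion — `globalThis` stands in for `window` so it also runs outside a browser; the name `ES6_PROXY` is from the thread:

```javascript
'use strict';
// Detect the feature first, then freeze the result onto the global
// object as a non-writable, non-configurable property.
const supported = (function () {
  try { new Proxy({}, {}); return true; } catch (_) { return false; }
}());

Object.defineProperty(globalThis, 'ES6_PROXY', {
  value: supported,
  writable: false,
  configurable: false
});

console.log(ES6_PROXY); // true wherever standard Proxy exists

// In strict mode, any later write is rejected with a TypeError.
try {
  globalThis.ES6_PROXY = false;
} catch (e) {
  console.log(e instanceof TypeError); // true
}
console.log(ES6_PROXY); // unchanged
```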
BTW, the last version your head post gave,
const ES6_PROXY = function(){
try {
new Proxy({},{});
return true;
} catch(o_O) {
return false;
}
}();
isn't bad at all, but are you really concerned about false-positive (typeof Proxy != "undefined") test results? Some other Proxy could easily be a function that does not throw when called with two objects as arguments. The standard object detection pattern:
if (typeof Proxy == "undefined")
this.Proxy = (/* polyfill Proxy here somehow... */);
often has a leading
var Proxy;
if (typeof Proxy == "undefined")
this.Proxy = (/* polyfill Proxy here somehow... */);
precisely to avoid errors from static analyzers looking for bare Proxy uses without a declared var in scope.
In any event, these patterns want variables, not constants, because they must work when there's already a binding. And you cannot redeclare with const (or let or class), as Rick points out.
thanks for the exhaustive answer, useful and more than appreciated.
The example was addressing one problem, the need for an extra variable, and was not meant to represent the best way to detect if the Proxy was the meant one.
About this, since you pointed it out, I'll come back to vendor prefixes: to know whether Proxy is the old one (MDN even has a specific page for it) or "the real/standard one", a developer has to go down the [native] check on the constructor and the inevitable try/catch, since both the old and the new Proxy functions expose that create method as a public static.
Proxy is a very good example of those hard-to-detect features, since the vendor/engine decided that prefixes were not a good option ... well, if (typeof Proxy === 'undefined') gets you nowhere in current node.js, as an example, nor in any Chrome with experiments enabled ... and on that note, there is no exposed flag anyone can feature-detect to understand whether the current constructor comes from an experimental feature or is the real one, spec'd and supported.
Best
as a side note: in node.js using the --harmony flag ... what should a developer do there to understand that a partially non-standard version of Proxy is present instead of the real one?
Let's imagine I am a client/server library author ... just for a second, I'd like to guarantee one behaviour across platforms ... I'd love V8 to flag experimental features as v8Proxy instead; at least I'd know what I am dealing with!!! I don't care about multiple checks, as long as I can guarantee consistency.
This is a concern of mine that keeps coming up ... off topic here
Surely not the answer you want, but as developer, I would consider the following actions:
- Putting a prominent warning in my library doc: Do not use outdated builds with experimental features enabled. It would make babies cry.
- Opening a bug against implementations, asking that builds with experimental features enabled must have an expiration date. It may annoy users, but at least it will prevent kittens from being killed.
On 20 December 2013 04:05, Brendan Eich <brendan at mozilla.com> wrote:
BTW, the last version your head post gave,
const ES6_PROXY = function(){ try { new Proxy({},{}); return true; } catch(o_O) { return false; } }();
Of course, the problem here is hardly specific to feature detection, or const, but simply an instance of the general annoyance induced by the old-school statement/expression separation. What you'd really want to write is something like
const ES6_PROXY = try new Proxy({}, {}), true catch (_) false;
For ES7 I would like to revive the do-expression proposal (hopefully at the next meeting), so that one can at least approximate the above with
const ES6_PROXY = do { try { new Proxy({}, {}); true } catch (_) { false } };
Of course, semantically the function is equivalent, and a fine solution, if a bit verbose.
Your C comparison was apples-to-oranges, #ifdef is evaluated at compile time.
No, that was to underline it is possible to define this twice
static int const NAME =
early send ... again:
That was to underline that it is possible to define the constant twice, in two blocks, and use it later on as defined in one of those blocks.
In current specs this is not possible.
As Brendan mentioned runtime checking for an "uninitialized" (not the same as undefined) sentinel, I would argue that undefined, or a declaration without assignment (which is === undefined), could be considered as the const name being reserved for the scope, first come first served; just as developers might declare variables and forget to assign values, it should not be a specification concern how badly the developer can code.
Linters are used for this purpose, highlighting uninitialized values.
Last but not least, the first example Andreas wrote is a very handy piece of code: talking about the "inline try/catch expression" .. I wish it was already possible like that!
That would surely simplify const definition, when try/catch is needed.
many launch node with --harmony by default, many others surf the web on the edge. I don't want to tell anyone what to do in order to use a library; they know experimental is experimental, and as a developer I would like to be able to feature-detect experiments, or at least know that I am in an experimental environment.
Once again, this is off-topic here, but Proxy is a very good example of this problem, so are generators in older SpiderMonkey versions, so are ... you name it; avoiding vendor prefixes for not-yet-finalized stuff is a hell of a foot-gun for both specifications and developers ... maybe we don't see this as a problem today, even if there are already concrete examples like this one, but I am sure it will come back soon.
As far as the compiler is concerned it is only defined once. The preprocessor strips the second const out before the compilation phase.
Let me correct my earlier statement: "Your C comparison was apples-to-oranges, #ifdef is evaluated before compilation."
This is not helping ... yeah, apples-to-oranges, as you wish ... now try to imagine you have a flexible understanding of the issue and the example I was proposing, so that:
if (stuff) {
const WHATEVER = 1;
} else {
const WHATEVER = 2;
}
two blocks, one const assigned with possibly only one value
Now tell me again how this works in C ...
As written above this couldn't possibly work in C -- const is block level, right? Originally you wrote this with #ifdefs, which aren't blocks. This isn't even close to apples-to-apples.
So are you suggesting that JS grow a preprocessor? That block scoping shouldn't really mean block scoping? Or that const shouldn't really mean const? Best I can tell it could only be one of those three -- and they all sound bad to me.
I am suggesting that const should:
- reserve the const name for the whole scope (similar to var)
- if assigned, keep that value and throw if re-assigned
- if not assigned, having the very first assignment "seal the deal" and throw to any other re-assignment attempt
In JS code, so we can finally get rid of that silly C example I put on the plate: a const in the global scope should behave like the following, but with the logic applied per scope and not per context.
Object.defineProperty(window, 'NAME', {
  configurable: true,
  get: function () {
    return void 0;
  },
  set: function (value) {
    // first assignment wins: redefine as a plain data property,
    // non-writable and non-configurable by default
    Object.defineProperty(this, 'NAME', {value: value});
    // (or eventually with a getter instead, and a setter
    // that instantly throws Errors)
  }
});
However I would rather improve try/catch so that consts are easier to assign than they are now, without forgetting, as Brendan said, about the value; many other handy situations might be solved without needing to create garbage around the try/catch.

I've realized indeed, thanks to Andreas' hint, that the problem of creating garbage around a constant assignment is rather about the current try/catch implementation and the fact that it does not work inline as an expression.

In summary: forget const, please improve try/catch ... this will make life easier in many situations.
On Fri, Dec 20, 2013 at 2:25 PM, Dean Landolt <dean at deanlandolt.com> wrote:
As written above this couldn't possibly work in C -- const is block level, right? Originally you wrote this with #ifdefs, which aren't blocks. This isn't even close to apples-to-apples.
So are you suggesting that JS grow a preprocessor? That block scoping shouldn't really mean block scoping? Or that const shouldn't really mean const? Best I can tell it could only be one of those three -- and they all sound bad to me.
Sorry, just getting caught up here... These points are the basis of my "invalid comparison" claim.
Andrea Giammarchi wrote:
I am suggesting that const should:
- reserve the const name for the whole scope (similar to var)
- if assigned, keep that value and throw if re-assigned
- if not assigned, having the very first assignment "seal the deal" and throw to any other re-assignment attempt
SpiderMonkey's primordial const (from 1999? I forget) was like this, except (pre-strict-mode) no throw on reassignment attempt, just silent failure. However it had quirks, e.g.:
// K in scope here, hoisted with value undefined
for (var i = 0; i < N; i++) {
const K = i*i;
...
}
// K still in scope here, like hoisted var
So you could see more than one value for a constant (due to hoisting, and if the assigning initialiser was in a loop).
TC39 voted "no" long ago. We are not going to do anything like your 1-3 list. Sorry.
Also, I think you are still barking up the wrong tree -- your issue is not const but a var that you can make non-writable after some fiddling -- which wants Object.defineProperty.
You are also mixing independent issues such as experimental feature detection, which is not a burning issue (as Claude said, stop supporting downrev browsers and browser vendors will stop putting experimental features in product release channels).
Please stick to one topic per thread if you can. I understand when they get tangled, but once untangled, don't rehash or go in circles.
I know SpiderMonkey was doing that and yes, too many topics here, apologies.
I just wanted to understand the rationale for not having that behavior, since I would not define a const inside a for loop, but maybe somebody would do that.
Anyway, got it, nothing will change, it would be very cool to think about improving try/catch logic in any case, but that's another topic.
On Dec 20, 2013, at 5:29 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
Anyway, got it, nothing will change, it would be very cool to think about improving try/catch logic in any case, but that's another topic.
It is a good use-case for do expressions, for sure.
One of the last (the last?) use cases for IIFEs. Shame it’s too late for ES6, they will be great to have in ES7.
On Dec 20, 2013, at 5:32 AM, Andreas Rossberg <rossberg at google.com> wrote:
For ES7 I would like to revive the do-expression proposal (hopefully at the next meeting)
Glad to hear you're in favor! I'll be happy to co-champion. The const-initializer use case is a good one, but it's also extremely valuable for code generators (it's got a much stronger equivalence property, aka TCP, than function(){}
or ()=>{}
). And more generally, it lets you more clearly localize temporary variables in a way that comma-expressions (which are ugly anyway) don't.
David Herman wrote:
Glad to hear you're in favor! I'll be happy to co-champion.
I will support your prospective championship ;-).
To further constrain design (since design is mostly about leaving things out), I will address the ES4-era let (x = y, z = z /* outer z */) ... let blocks and let expressions, which came up recently. We should not revive these, given do expressions. do-exprs compose better with let and const (and other binding-form) declarations.
Sorry if this is obvious; wanted to settle it, since it came up here in the other thread.
Fully agreed.
And as my example shows, this means there's no way of rebinding an inner z whose initializer depends on an outer z. When you need that, you'll need an arrow IIFE (IIAFE? AIIFE? yikes).
The major new complication of do-expressions is that they allow for the occurrence of break/continue/return abrupt completions in contexts such as for loop heads where they could not previously occur. However, do-expressions were still on the table when I did the spec. work for "completion reform" so the ES6 draft already deals with these abrupt completions in those contexts, even though there is currently no way to produce them.

I had been considering purging that handling from the ES6 spec. but maybe I'll leave it in.

The do-expression proposal should address what happens with break/continue/return completions in such contexts. It will probably match what is already in the ES6 spec. but if necessary the existing spec. language can change in the future since it isn't actually in play.
Allen Wirfs-Brock wrote:
I had been considering purging that handling from the ES6 spec. but maybe I'll leave it in.
Please do! This dates from block-lambda future-proofing days? I dimly recall ES1 drafts having full completion-type abstraction (over all forms, not just statements but also expressions).
The do-expression proposal should address what happens with break/continue/return completions in such contexts. It will probably match what is already in the ES6 spec. but if necessary the existing spec. language can change in the future since it isn't actually in play.
It should be straightforward.
On 6 January 2014 17:59, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
The major new complication of do-expressions is that they allow for the occurrence of break/continue/return abrupt completions in contexts such as for loop heads where they could not previously occur. However, do-expressions were still on the table when I did the spec. work for "completion reform" so the ES6 draft already deals with these abrupt completions in those contexts, even though there is currently no way to produce them.
I agree that's a complication, which is why I would propose to disallow them, at least for the time being. Motivation:
- YAGNI -- I have a hard time coming up with a use case that isn't obfuscated code (even considering generated code).
- They complicate the semantics and implementation -- for example, you would have to roll back non-empty expression stacks (in a stack machine implementation).
- They destroy nice equivalences -- in particular, I'd like do {...} to be equivalent to (() => {...})(), e.g. to minimise refactoring hazards.
- We can always allow them later, if the need should ever arise.
Dave, I remember you were in favour of allowing these. Do you have specific use cases in mind?
Andreas Rossberg wrote:
- YAGNI -- I have a hard time coming up with a use case that isn't obfuscated code (even considering generated code).
Always a good reason in the abstract, but concrete use cases have arisen, even in this thread. As you noted just last month (!),
For ES7 I would like to revive the do-expression proposal (hopefully at the next meeting), so that one can at least approximate the above with
const ES6_PROXY = do { try { new Proxy({}, {}); true } catch (_) { false } };
Of course, semantically the function is equivalent, and a fine solution, if a bit verbose.
- They complicate the semantics and implementation -- for example, you would have to roll back non-empty expression stacks (in a stack machine implementation).
This is minor in both actual effect (not many naive recursive expression parse-tree walkers) and implementation hardship (return completion types all over, respect abrupt ones in expression handlers).
- They destroy nice equivalences -- in particular, I'd like do {...} to be equivalent to (() => {...})(), e.g. to minimise refactoring hazards.
What changed your mind from 20-December?
Anyway, JS has statements and expressions, but functions create new activations with their own scopes. Those create hazards when refactoring between statements and expressions.
Wanting the equivalence you state here tries to deny the facts of JS and its full (ahem, perhaps disputed legitimacy) heritage.
- We can always allow them later, if the need should ever arise.
ES7 is later.
Sorry, my wording may have been ambiguous. What I meant was disallowing break/continue/return inside a do, not giving up do.
;)
And just to be extra-clear: by that I'm only referring to "free" occurrences of those, that would refer to the enclosing statement. Nested ones are fine, of course.
Unless we can identify real implementation issues, the semantics of do { } should simply be those of a block. JS programmers shouldn't have to learn which subset of statements is invalid in a do expression block. In particular, I see no reason why a JS programmer shouldn't be able to refactor any valid BlockStatement into an equivalent ExpressionStatement simply by putting a do in front of the leading {.
The meaning of things like:
function (x) {
for (let i of x.state!==special? x : do {return bar(x)})
foo(i)
}
is clear and also easy enough to specify. Unless there are some non-obvious implementation issues, I don't see why we would want to disallow such things.
The only place where the possible semantics isn't totally obvious is things like:
for (x of z ? q : do {break}) ...
or
for (x of do { if (z) q; else continue}) ...
The semantics in the ES6 draft for an unlabeled break or continue completion in the head of a for statement treats both of these as terminating the for statement and continuing with the statement following the for.
On 7 January 2014 20:44, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
Unless we can identify real implementation issues, the semantics of do { } should simply be those of a block.

I don't think this flies anyway. It has to be more like a function body, otherwise var and function declarations would hoist out of it, which would be insane IMO.
What I'm arguing for, then, simply is to make it as much like a function body as possible. (That also matches the current IIFE practice best.)
Also, I really would want to avoid examples like return do { break; } and similar craze.
Is there a convincing example where cross-expression jumps would actually be useful?
Since "do-as-IIFE" carries with it a subset of the semantics carried by "do-as-block", I think it makes sense to proceed with the subset first, and expand if "do-as-IIFE" turns out to be surprising or lacking.
IIUC, the goal here is to allow a sequence of statements to produce a value, not (necessarily) to allow arbitrary block semantics.
On Wed, Jan 8, 2014 at 2:33 AM, Andreas Rossberg <rossberg at google.com>wrote:
I don't think this flies anyway. It has to be more like a function body, otherwise var and function declarations would hoist out of it, which would be insane IMO.

Strict function declarations don't hoist out of blocks, so the hoisting issue is var only. I would find it surprising if var declarations did not hoist out of do expressions.
What I'm arguing for, then, simply is to make it as much like a function body as possible. (That also matches the current IIFE practice best.)
Also, I really would want to avoid examples like return do { break; } and similar craze.
Is there a convincing example where cross-expression jumps would actually be useful?
If all we want is sugar for IIFEs, I wouldn't bother. With arrow functions, IIFEs are already a lot shorter. The extra brevity of do expressions is not worth it.

What would make do expressions worthy of consideration is if they repaired the TCP violations of strict arrow IIFEs, including var, arguments, break, continue, return, and especially yield.
If all you want is a non verbose IIFE, use an arrow function. We should consider do expressions only if they avoid the TCP violations of strict arrow IIFEs.
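One of the TCP violations Mark lists can be demonstrated with what runs today; a sketch (the function name is illustrative):

```javascript
'use strict';
// A `return` inside an arrow IIFE completes the arrow, not the
// enclosing function — whereas under "do-as-block" semantics the same
// `return` inside a do-expression would exit the enclosing function.
function viaIIFE() {
  const x = (() => { return 1; })(); // returns from the arrow only
  return x + 1;                      // still reached
}
console.log(viaIIFE()); // 2
```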
arrow functions work "by accident" better than plain functions thanks to their trapped context. Still, it bugs me that by design we need to create garbage, including one-shot functions, in order to inline a try/catch to assign to a single "pointer":
const ES6_PROXY = (() => {
  try {
    new Proxy({},{});
    return true;
  } catch(o_O) {
    return false;
  }
})();
I find the do{} solution more elegant, and I believe this ()=>{}() pattern will be abused pretty soon and JS will start looking like Brainfuck, but that's another story I guess.

Probably no rush needed, considering the amount of problems the do{} syntax might introduce.
Thanks for all thoughts and examples.
On Jan 8, 2014, at 8:32 AM, Mark S. Miller wrote:
If all we want is sugar for IIFEs, I wouldn't bother. With arrow functions, IIFEs are already a lot shorter. The extra brevity of do expressions is not worth it.

What would make do expressions worthy of consideration is if they repaired the TCP violations of strict arrow IIFEs, including var, arguments, break, continue, return, and especially yield.
+1
You should be able to put a do in front of any BlockStatement and turn it into an ExpressionStatement.
I don't think we should have a new expression level scoping construct that doesn't have the exact semantics of a Block.
Still, it bugs me that by design we need to create garbage, including one-shot functions, in order to inline a try/catch to assign to a single "pointer"
Please note that you do not really create a one-shot function and garbage in this case, at least if the compiler does its job well. The F# compiler, and probably many functional-language compilers, would correctly inline the lambda function here.
There’s probably no reason a JavaScript compiler couldn’t do the same here (and if this becomes a very used pattern, there will be traction to make sure this works well).
I still need to think in terms of creating garbage ... being "unaware" of optimizations behind the scenes.

I like to believe compilers should help optimize for me, instead of me developing to simplify the compiler's job; so, however complicated it would be behind the scenes, I meant a do { try/catch }, while instead I need to write a one-shot inline-invoked function, needing to think about the context (arrow simplifies this part), inner scope, strict behavior ... etc etc ... yet I meant a do { try/catch } and/or an expression, not an invoke.

Once again, not a big deal: the arrow solves it, but it feels to me like a necessary little hack.
If all you want is a non verbose IIFE, use an arrow function. We should consider do expressions only if they avoid the TCP violations of strict arrow IIFEs.
One could say that they are verbose:
var x = (_=> { /* some statements, with a return statement somewhere */ })();
vs.
var x = do { /* some statements */ };
I thought this was the main issue that do-expressions address.

However, I like that you brought up yield. It seems like we could come up with some plausible examples which use yield in a straightforward way within a do-expression.
On 8 January 2014 17:32, Mark S. Miller <erights at google.com> wrote:
strict function declarations don't hoist out of blocks, so the hoisting issue is var only.
Good point.
I would find it surprising if var declarations did not hoist out of do expressions.
Interesting. I have the exact opposite expectation. And I don't see what good it would do usability-wise.
If all we want is sugar for IIFEs, I wouldn't bother. With arrow functions, IIFEs are already a lot shorter. The extra brevity of do expressions is not worth it.
It's not only the brevity as such, but having a natural, targeted language feature. IIFEs are merely an encoding, and as such a distraction. Like A + -B is brief enough, but I'm sure you prefer saying A - B.
What would make do expressions worthy of consideration is if they repaired the TCP violations of strict arrow IIFEs, including var, arguments, break, continue, return, and especially yield.
Can you clarify what you mean by "repair"? I hope you don't suggest that while (true) { (() => do { break })() } should magically work.
I may warm up to the extra complexity more easily if somebody could present at least some compelling use cases. :)
On 8 January 2014 18:04, Allen Wirfs-Brock <allen at wirfs-brock.com> wrote:
You should be able to put a do in front of any BlockStatement and turn it into an ExpressionStatement. I don't think we should have a new expression-level scoping construct that doesn't have the exact semantics of a Block.
Except for blocks where the cute function declaration legacy rules from the Appendix apply, I suppose.
I may warm up to the extra complexity more easily if somebody could present at least some compelling use cases. :)
Mark's mention of yield got me thinking about await expressions. Hopefully I'm using this correctly:

// stat is null or a Stat object
const stat = do { try { await FS.stat(path) } catch (x) { null } }
Does that work as a use case for block semantics?
On Thu, Jan 9, 2014 at 6:18 AM, Andreas Rossberg <rossberg at google.com>wrote:
Interesting. I have the exact opposite expectation. And I don't see what good it would do usability-wise.
Now that we have const and let, vars serve no usability purpose whatsoever. However, given their existence in the language, the best we can do with them usability-wise is to follow the principle of least surprise. Since we have opposite surprise reactions, this doesn't tell us what to do, but at least we should be able to agree on this criterion.
It's not only the brevity as such, but having a natural, targeted language feature. IIFEs are merely an encoding, and as such a distraction. Like A + -B is brief enough, but I'm sure you prefer saying A - B.
In a language with infix + and unary -, and the absence of prior expectations of an infix -, the pressure to add an infix minus is small for exactly this reason. A repeated pattern often comes to be perceived as a phrase. In the absence of infix minus or prior expectations, A + -B would rapidly become seen to do what it does.
I do have one usability concern with arrow IIFEs. I hate when I see them written as ()=>{...whatever...}() because you don't know that it's an IIFE until the end. Function expressions have the same issue. We should adapt Crock's recommended paren style to arrow IIFEs, to wit (()=>{...whatever...}()), even though this loses a bit more brevity.
Can you clarify what you mean by "repair"? I hope you don't suggest that
while (true) { (() => do { break })() }
should magically work.
No, I am not suggesting that code within the do block is TCP wrt the context outside the function containing the do expression. This would be a TCP violation wrt the context of the do expression. Rather, I suggest that the following must work:
while (true) { do { break; } }
I do have one usability concern with arrow IIFEs. I hate when I see them written as ()=>{...whatever...}() because you don't know that it's an IIFE until the end. Function expressions have the same issue. We should adapt Crock's recommended paren style to arrow IIFEs, to wit (()=>{...whatever...}()), even though this loses a bit more brevity.
I believe this is required by the grammar anyway.
On Jan 9, 2014, at 6:21 AM, Andreas Rossberg wrote:
Except for blocks where the cute function declaration legacy rules from the Appendix apply, I suppose.
Right. No legacy issues with do {}.
And those ugly legacy rules only apply in limited circumstances...
Kevin Smith wrote:
I believe this is required by the grammar anyway.
No, what is required is
(() => {...whatever...})()
Arrow functions are AssignmentExpressions.
Right, I misread Mark's code sample.
You read my sample right. The mistake was mine, and the parens should be placed where Brendan shows.
I don't think this flies anyway. It has to be more like a function body, otherwise var and function declarations would hoist out of it, which would be insane IMO.
Agreed.
Also, I really would want to avoid examples like [..]
Agreed.
IIUC, the goal here is to allow a sequence of statements to produce a value, not (necessarily) to allow arbitrary block semantics.
Agreed.
strict function declarations don't hoist out of blocks, so the hoisting issue is var only. I would find it surprising if var declarations did not hoist out of do expressions.
If the intention is to have do-as-IIFE, then it would be surprising to see var hoist outside of a do expression.
If all we want is sugar for IIFEs, I wouldn't bother. With arrow functions, IIFEs are already a lot shorter. The extra brevity of do expressions is not worth it.
I disagree. It is nice to be able to define a one-off IIFE. Furthermore, wouldn't this allow a simpler GC implementation?
I find the do{} solution more elegant, and I believe the ()=>{}() pattern will be abused pretty soon and JS will start looking like Brainfuck, but that's another story I guess.
Agreed.
I do have one usability concern with arrow IIFEs. I hate when I see them written as ()=>{...whatever...}() because you don't know that it's an IIFE until the end. Function expressions have the same issue.
Good point.
Rather, I suggest that the following must work: while (true) { do { break; } }
I am surprised by this requirement. I don't think that a do expression should allow control flow statements at all (of the block in which it is contained).
I would argue that do expressions would be mostly useful for promoting the use of const, e.g.,
_.map(groupedEvents, (locationEvents) => {
let locationName;
const locationEvent = locationEvents[0];
if (locationEvent.locationDisplayName) {
locationName = locationEvent.locationDisplayName;
} else if (locationEvent.cinemaIsPlatform) {
locationName = locationEvent.locationName;
} else if (isCinemaNamePartOfLocationName(locationEvent.locationName, locationEvent.cinemaName)) {
locationName = locationEvent.locationName;
} else {
locationName = locationEvent.cinemaName + ' ' + locationEvent.locationName;
}
// ...
I would like to avoid using let in this case and a do expression is great for this:
_.map(groupedEvents, (locationEvents) => {
const locationEvent = locationEvents[0];
const locationName = do {
if (locationEvent.locationDisplayName) {
locationEvent.locationDisplayName;
} else if (locationEvent.cinemaIsPlatform) {
locationEvent.locationName;
} else if (isCinemaNamePartOfLocationName(locationEvent.locationName, locationEvent.cinemaName)) {
locationEvent.locationName;
} else {
locationEvent.cinemaName + ' ' + locationEvent.locationName;
}
};
// ...
However, I do not like at all that a do expression returns the value of the last statement. I would much rather prefer to have return to control the value produced by a do expression, e.g.
_.map(groupedEvents, (locationEvents) => {
const locationEvent = locationEvents[0];
const locationName = do {
if (locationEvent.locationDisplayName) {
return locationEvent.locationDisplayName;
}
if (locationEvent.cinemaIsPlatform) {
return locationEvent.locationName;
}
if (isCinemaNamePartOfLocationName(locationEvent.locationName, locationEvent.cinemaName)) {
return locationEvent.locationName;
}
return locationEvent.cinemaName + ' ' + locationEvent.locationName;
};
// ...
Has this discussion been moved to some other medium?
No messages have been exchanged since 2014. In the meantime, transpilers such as Babel have implemented the proposal (babeljs.io/docs/plugins/transform-do-expressions) and are "promoting" its use in the form of the original proposal.
It seems that I need to create N amount of garbage by design.
This does not work, the const has already been defined:
try { new Proxy({},{}); const ES6_PROXY = true; } catch(o_O) { const ES6_PROXY = false; }
This does not work either:
try { new Proxy({},{}); var ES6_PROXY = true; } catch(o_O) { var ES6_PROXY = false; } const ES6_PROXY = false; // var 'ES6_PROXY' has already been declared
Neither does the following:
try { new Proxy({},{}); let ES6_PROXY = true; } catch(o_O) { let ES6_PROXY = false; } // Illegal let declaration outside extended mode
In summary, there is no way to feature-detect and define a const in the same scope; a closure is mandatory, and the constant cannot be defined inside it either.
const ES6_PROXY = function(){ try { new Proxy({},{}); return true; } catch(o_O) { return false; } }();
This is not such a huge deal, but it does not feel/look right with a larger amount of feature detection and a growing adoption of constants.
Thoughts?