Jan 19 meeting notes
On Jan 19, 2011, at 5:23 PM, Waldemar Horwat wrote:
MarkM: If we're making a harmonizer, let's get rid of semicolon insertion as well.
For the record, I argued that we have only a handful of fingers to count breaking changes in Harmony. Trying for more risks not agreeing in TC39, or should we manage that feat, imposing too great a total migration tax (net of wins from migrating, which are real).
The global object removal from the scope chain is one such change. That's one finger. dherman pointed out that removing the global object from the scope chain should result in early errors, which helps migrate without running any optional Harmonizer tool.
Not so the typeof change, which requires runtime testing or a Harmonizer-style static analysis tool, not just a Harmony-JS compiler: two fingers.
Three fingers left. We probably have other not-quite-compatible changes to make, and the fewer the better.
I argued at today's meeting that ASI is not one to spend a scarce finger on.
For one thing, many JS developers depend on and want ASI. It's not universally condemned by a long shot, from all I hear at conferences, on various forums, on twitter.
Second, tons of JS on the web depends on ASI. Trying to deploy ASI-free Harmony engines in a new browser and get developers to migrate their script content into Harmony's opt-in type would be like trying to launch a strict parser of XML embedded as application/xhtml+xml, where any error causes a yellow screen of death; compared to text/html, which error-corrects.
Big stone to ask devs to roll up a hill, with ASI-dependent backsliding due to downrev-browser-only testing an ongoing cost.
Some on TC39 want ASI gone, but I think we are better off leaving it alone and working on clear added-value (and mostly non-breaking) Harmony changes, as sketched in
On Wed, Jan 19, 2011 at 10:06 PM, Brendan Eich <brendan at mozilla.com> wrote:
On Jan 19, 2011, at 5:23 PM, Waldemar Horwat wrote:
MarkM: If we're making a harmonizer, let's get rid of semicolon insertion as well.
For the record, I argued that we have only a handful of fingers to count breaking changes in Harmony.
Actually, I was asking about either getting rid of ASI or finding some helpful reform of it. I did not stress this and did not correct anyone when they assumed I was talking only about getting rid of it.
So, for that same record (;)), here's my thinking on reforming ASI. The following "rule" is not intended to be how parsers would actually think about the rule, but rather how users could think about it:
Parse the program once to AST1 using standard ASI rules. Parse it again to AST2 using the aggressive-ASI rule defined below. If both parses succeed and these ASTs are different, statically reject the program.
Aggressive ASI rule: Every time the parser reaches a newline, if what precedes the newline is a well-formed expression that would form a well-formed expression statement if we inserted a semicolon here, then insert the semicolon.
For example, under ASI

a()
(function(){...})()
is equivalent to
a()(function(){...})();
while under aggressive ASI it is equivalent to
a();
(function(){...})();
Thus under reformed ASI we would reject the program with an early error. The rationale is that the programmer might plausibly have meant either, so silently doing the other is dangerous. However, an early error indicating position is easy to correct, even by programmers with no deep understanding of the issue. Add in semicolons or remove newlines until the program is accepted but still says what you mean.
For harmonizing old programs which were assumed correct under standard ASI, the fix would always be to add semicolons until accepted.
My untested claim: all the cases of omitted semicolons that are alleged to enhance readability and clarity could still be omitted under reformed ASI and thus would not be inserted by such a harmonizer.
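The hazard the reform targets is easy to demonstrate in running code today. This is an illustrative sketch (the function names are made up): under standard ASI, no semicolon is inserted after the call to a(), because the next line can legally continue the expression, so the parenthesized function becomes an argument list.

```javascript
// Under standard ASI, NO semicolon is inserted after a() below: the
// parenthesized function on the next line is parsed as an argument list,
// yielding one statement a()(function(){...})() rather than two.
function a() {
  return function (callback) {
    return function () { return "one statement, not two"; };
  };
}

var result = a()
(function () { return "this never runs as its own statement"; })();

console.log(result); // "one statement, not two"
```

Under the aggressive rule, a semicolon would be inserted after `a()`; the two parses would then differ, so reformed ASI would reject this program with an early error instead of silently picking one reading.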
Trying for more risks not agreeing in TC39, or should we manage that feat, imposing too great a total migration tax (net of wins from migrating, which are real).
The global object removal from the scope chain is one such change. That's one finger. dherman pointed out that removing the global object from the scope chain should result in early errors, which helps migrate without running any optional Harmonizer tool.
Not so the typeof change, which requires runtime testing or a Harmonizer-style static analysis tool, not just a Harmony-JS compiler: two fingers.
Three fingers left. We probably have other not-quite-compatible changes to make, and the fewer the better.
I argued at today's meeting that ASI is not one to spend a scarce finger on.
For one thing, many JS developers depend on and want ASI. It's not universally condemned by a long shot, from all I hear at conferences, on various forums, on twitter.
I do not disagree about the general finger point. And of all the things I want to argue about re Harmony, ASI is well down on my list. But reformed ASI may still be a much easier sell than killing ASI. Won't know without testing the es-discuss waters.
Second, tons of JS on the web depends on ASI. Trying to deploy ASI-free Harmony engines in a new browser and get developers to migrate their script content into Harmony's opt-in type would be like trying to launch a strict parser of XML embedded as application/xhtml+xml, where any error causes a yellow screen of death; compared to text/html, which error-corrects.
Big stone to ask devs to roll up a hill, with ASI-dependent backsliding due to downrev-browser-only testing an ongoing cost.
Some on TC39 want ASI gone, but I think we are better off leaving it alone and working on clear added-value (and mostly non-breaking) Harmony changes, as sketched in
I just finished it. I think this post is awesome. Really. I recommend everyone read it and that we discuss this. I'm more enthusiastic about this than about several of the proposals I've spent reams of email advocating -- especially the various lightweight identity-free frozen values introduced by the # syntax. Well done!
In the context of these proposals, I feel more strongly that we should fix === rather than introducing a separate fixed egal operator. I noticed your deep value equality examples were written using === rather than egal. No matter how broken it is, I don't think you'll be alone in this. So, when generalizing === to include its current behavior along with deep structural equality, which equality does it recur with? If ===, then
#{x: 10, y: NaN} !== #{x: 10, y: NaN}
which I find bizarre. If === recurs with egal, that's a better semantics but is harder to explain. (Thanks to Cormac for having raised approximately this question earlier.)
I used to think that these kinds of breaking changes to === and typeof were off the table. I'm very glad we might fix typeof. At the risk of more fingers, it does make me wonder whether we could fix === as well.
In either case, I think we should leave the numeric comparison behavior of == alone.
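For reference, the distinction Mark is probing is observable with today's operators. The egal operation discussed here was later standardized as Object.is (a historical note, not part of this thread); it disagrees with === on exactly two values, NaN and -0:

```javascript
// === and egal (later standardized as Object.is) differ only on NaN and -0.
console.log(NaN === NaN);          // false -- a deep === that recurs with
                                   // itself makes a record containing NaN
                                   // unequal to an identical record
console.log(Object.is(NaN, NaN));  // true
console.log(-0 === 0);             // true
console.log(Object.is(-0, 0));     // false
```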
On Wed, Jan 19, 2011 at 10:58 PM, Mark S. Miller <erights at google.com> wrote:
On Wed, Jan 19, 2011 at 10:06 PM, Brendan Eich <brendan at mozilla.com> wrote:
On Jan 19, 2011, at 5:23 PM, Waldemar Horwat wrote:
MarkM: If we're making a harmonizer, let's get rid of semicolon insertion as well.
For the record, I argued that we have only a handful of fingers to count breaking changes in Harmony.
Actually, I was asking about either getting rid of ASI or finding some helpful reform of it. I did not stress this and did not correct anyone when they assumed I was talking only about getting rid of it.
So, for that same record (;)), here's my thinking on reforming ASI. The following "rule" is not intended to be how parsers would actually think about the rule, but rather how users could think about it:
Parse the program once to AST1 using standard ASI rules. Parse it again to AST2 using the aggressive-ASI rule defined below. If both parses succeed and these ASTs are different, statically reject the program.
Aggressive ASI rule: Every time the parser reaches a newline, if what precedes the newline is a well-formed expression that would form a well-formed expression statement if we inserted a semicolon here, then insert the semicolon.
For example, under ASI

a()
(function(){...})()
is equivalent to
a()(function(){...})();
while under aggressive ASI it is equivalent to
a();
(function(){...})();
Thus under reformed ASI we would reject the program with an early error. The rationale is that the programmer might plausibly have meant either, do silently doing the other is dangerous.
Should be "... so silently doing the other is dangerous."
Dave: This is abstract. Also, would rather do a general user-extensible syntax for everything, which is perfectly doable as part of the modules proposal.
For the record, this isn't really what I was saying. For one, I'm not saying we should do user-extensible syntax instead of adding new features to the language.
My point was simply this:
- The topic at hand (type/guard systems for JS) is a broad space that deserves research and exploration.
- We should encourage people (committee members or not!) to engage in such research independently, and shouldn't spend much committee time doing this exploration.
- The module loaders spec, in fact, makes it easier to do such exploration, since it involves the ability to do source-to-source transformation.
But the overall point doesn't hang deeply on the module loaders spec. All I was saying is that type systems are hard, there's more exploration to be done, and that exploration doesn't belong in the committee.
On Jan 19, 2011, at 10:58 PM, Mark S. Miller wrote:
On Wed, Jan 19, 2011 at 10:06 PM, Brendan Eich <brendan at mozilla.com> wrote:
On Jan 19, 2011, at 5:23 PM, Waldemar Horwat wrote:
MarkM: If we're making a harmonizer, let's get rid of semicolon insertion as well.
For the record, I argued that we have only a handful of fingers to count breaking changes in Harmony.
Actually, I was asking about either getting rid of ASI or finding some helpful reform of it. I did not stress this and did not correct anyone when they assumed I was talking only about getting rid of it.
Thanks, this helps.
So, for that same record (;)), here's my thinking on reforming ASI. The following "rule" is not intended to be how parsers would actually think about the rule, but rather how users could think about it:
Parse the program once to AST1 using standard ASI rules. Parse it again to AST2 using the aggressive-ASI rule defined below. If both parses succeed and these ASTs are different, statically reject the program.
Aggressive ASI rule: Every time the parser reaches a newline, if what precedes the newline is a well-formed expression that would form a well-formed expression statement if we inserted a semicolon here, then insert the semicolon.
For example, under ASI

a()
(function(){...})()
Recapping from today's TC39 meeting: this is an interesting idea, but does it require potentially costly forked parsing? Backtracking? What if there's a syntax error later?
Main points for me are: the exact case shown here is a problem in practice, not due to ASI kicking in but due to it not kicking in. So there's a bug to fix, somehow. But: fixing it as proposed (assuming the proposal holds together) would break fabjs.
Don't break fab! I said.
Dave suggested a "use semicolons" pragma (strawman:pragmas), and this seems much easier to swallow (speaking as a parser implementor). It would not break fab if used, either.
On Jan 20, 2011, at 11:50 AM, Brendan Eich wrote:
On Jan 19, 2011, at 10:58 PM, Mark S. Miller wrote:
On Wed, Jan 19, 2011 at 10:06 PM, Brendan Eich <brendan at mozilla.com> wrote:
On Jan 19, 2011, at 5:23 PM, Waldemar Horwat wrote:
MarkM: If we're making a harmonizer, let's get rid of semicolon insertion as well.
For the record, I argued that we have only a handful of fingers to count breaking changes in Harmony.
Actually, I was asking about either getting rid of ASI or finding some helpful reform of it. I did not stress this and did not correct anyone when they assumed I was talking only about getting rid of it.
Thanks, this helps.
So, for that same record (;)), here's my thinking on reforming ASI. The following "rule" is not intended to be how parsers would actually think about the rule, but rather how users could think about it:
Parse the program once to AST1 using standard ASI rules. Parse it again to AST2 using the aggressive-ASI rule defined below. If both parses succeed and these ASTs are different, statically reject the program.
Aggressive ASI rule: Every time the parser reaches a newline, if what precedes the newline is a well-formed expression that would form a well-formed expression statement if we inserted a semicolon here, then insert the semicolon.
For example, under ASI

a()
(function(){...})()
Recapping from today's TC39 meeting: this is an interesting idea, but does it require potentially costly forked parsing? Backtracking? What if there's a syntax error later?
Main points for me are: the exact case shown here is a problem in practice, not due to ASI kicking in but due to it not kicking in. So there's a bug to fix, somehow. But: fixing it as proposed (assuming the proposal holds together) would break fabjs.
I'd always considered the safest (at least to my mind) solution for removing ASI would be to produce a syntax error at any point ASI would be necessary in the existing spec.
This wouldn't require backtracking, and wouldn't produce any weird differences in behaviour between non-ASI mode vs ASI (if your code triggers ASI it would fail to parse in non-ASI mode).
On Jan 20, 2011, at 12:29 PM, Oliver Hunt wrote:
I'd always considered the safest (at least to my mind) solution for removing ASI would be to produce a syntax error at any point ASI would be necessary in the existing spec.
This wouldn't require backtracking, and wouldn't produce any weird differences in behaviour between non-ASI mode vs ASI (if your code triggers ASI it would fail to parse in non-ASI mode).
Sure. This is the "use noasi" or "use semicolons" idea. It's not going to be enabled just by opting into Harmony, though -- no consensus for that.
Sure. This is the "use noasi" or "use semicolons" idea.
Or just "no asi". </bikeshed>
On Jan 19, 2011, at 10:58 PM, Mark S. Miller wrote:
Some on TC39 want ASI gone, but I think we are better off leaving it alone and working on clear added-value (and mostly non-breaking) Harmony changes, as sketched in
brendaneich.com/2011/01/harmony-of-my-dreams
I just finished it. I think this post is awesome. Really. I recommend everyone read it and that we discuss this. I'm more enthusiastic about this than about several of the proposals I've spent reams of email advocating -- especially the various lightweight identity-free frozen values introduced by the # syntax. Well done!
Thanks!
In the context of these proposals, I feel more strongly that we should fix === rather than introducing a separate fixed egal operator. I noticed your deep value equality examples were written using === rather than egal. No matter how broken it is, I don't think you'll be alone in this. So, when generalizing === to include its current behavior along with deep structural equality, which equality does it recur with? If ===, then
#{x: 10, y: NaN} !== #{x: 10, y: NaN}
which I find bizarre. If === recurs with egal, that's a better semantics but is harder to explain. (Thanks to Cormac for having raised approximately this question earlier.)
As discussed, NaN is a hard case for hash-consing -- if there's a NaN you hash-miss and potentially bloat. It's survivable in my experience.
The idea of changing === to egal in Harmony is tempting but quite incompatible. A lot of devs have wised up to isNaN's broken-by-design ToNumber conversion of its argument, and use x !== x to detect NaN. We'd break them.
I also agree with Waldemar that -0 === 0, which does not violate the equivalence relation ideal, is Out There. -0 leaks, people compare with === and (Waldemar's point) switch on it. I have no way to assess how deeply extant web JS depends on this, but it smells like trouble and there's not a huge win.
Separately, I do like the idea of a new pair of keyword-operators (contextual, we worked this out in the November meeting), say "is" and "isnt" (hat tip: CoffeeScript) or "eq" and "neq" (yuck), as sugar for egal (which could just be Object.eq as you've shown: harmony:egal).
Once again, here are my raw meeting notes.
Waldemar
Discussion of isNaN and isFinite.
Can/should we fix these in place rather than creating more functions?
Allen: Existing usage is consistent with normal numeric coercions done by other operators such as -.
Doug: Would it be useful to have something that doesn't do the coercions? Allen: It would be a convenience. Erik: Not sure if it's worth the extra API.
Decided to accept the proposal.
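The coercion under discussion is easy to see in running code. The non-coercing versions later shipped as Number.isNaN and Number.isFinite (a historical note; the minutes only record that the proposal was accepted):

```javascript
// Global isNaN/isFinite apply ToNumber to their argument first:
console.log(isNaN("foo"));           // true  -- "foo" coerces to NaN
console.log(isNaN("123"));           // false -- "123" coerces to 123
console.log(isFinite("123"));        // true  -- same coercion

// The non-coercing versions test the value itself, no conversion:
console.log(Number.isNaN("foo"));    // false -- a string is never the value NaN
console.log(Number.isFinite("123")); // false -- a string is never a finite number
console.log(Number.isNaN(NaN));      // true
```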
String duplication: Accepted proposal, but rejected proposed arbitrary limits of 255 or 4294967296 repetitions. Per Allen's comment on wiki, behave as though using ToInteger.
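The accepted behavior matches what later shipped as String.prototype.repeat: the count goes through a ToInteger-style conversion, with no arbitrary cap (negative or infinite counts are range errors):

```javascript
console.log("ab".repeat(3));   // "ababab"
console.log("ab".repeat(2.9)); // "abab" -- 2.9 truncates to 2, as with ToInteger
console.log("ab".repeat(0));   // ""
try {
  "ab".repeat(-1);             // negative counts throw
} catch (e) {
  console.log(e instanceof RangeError); // true
}
```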
Proxy default handler: Some trivial bugs in the code: Calling getOwnPropertyDescriptor etc. with only one argument. desc in "desc.configurable = true" can be undefined.
set/put/canPut problem discussion. Allen: Clean up the list of primitive methods and handlers. MarkM: All existing uses of put can be written in terms of set. Waldemar: Would want a more generic way of invoking [[set]] rather than having to instantiate a new default proxy. Brendan: Issue remains with prototype chain.
Agreed to move this to proposal stage, with some open issues.
Discussion about typeof result strings: Dave: We should set expectation that typeof can return new things in the future. Doug: IE can already return a typeof of "unknown". Waldemar: If we don't add new return strings, in practice it doesn't matter what we say in the committee. If we do add new ones on occasion, then folks will pay more attention.
Brendan, MarkM: should have a "second hand of fingers as backups to break".
MarkM: Trouble with === generalization recurring into records that contain zeroes or NaNs.
Waldemar: To clarify, I'd very much want === to be an equivalence relation but don't think that we can. A couple of examples:
- switch (-0) {... case 0: ...}
- x !== x as an idiom for NaN testing
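Both of Waldemar's examples are runnable today, which is exactly why changing === is so risky:

```javascript
// 1. switch compares with ===-style strict equality, and -0 === 0,
//    so case 0 matches a -0 discriminant.
function classify(v) {
  switch (v) {
    case 0: return "matched case 0";
    default: return "no match";
  }
}
console.log(classify(-0)); // "matched case 0"

// 2. Self-inequality as a NaN test -- the idiom an egal-style === would break.
function isReallyNaN(x) { return x !== x; }
console.log(isReallyNaN(NaN)); // true
console.log(isReallyNaN(0));   // false
```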
Discussion of MarkM's semicolon insertion proposal in response to yesterday's meeting notes. MarkM: Parse with and without virtual semicolon and if both succeed then fail. Waldemar: How far do you backtrack second parse to see if it fails? One token? Rest of program? One token would be tenable; rest of program, not so much.
Example pro:

a()
(function() {...})();
Examples con:

x = a + b
    + c + d;
x = a[longindexexpression]
    [anotherlongindexexpression];
Brendan: Perhaps it's parentheses that are unique? Brendan: Make a pragma to turn off semicolon insertion as a strawman.
Dave: Recognize real pragmas, not just string literals.
Do we automatically insert a semicolon after the "no ASI" pragma?
Array create:
Allen would like to eliminate the [[class]] property -- it has too much undesirable conflation of separate concepts:
- magic length property?
- exploded by Array.concat?
- JSON serialized using [] or {}?
- recognized by Array.isArray?
- postMessage special treatment?
- transmitted via proxies?
Brendan: Make a quasiquote special interest group for brainstorming?
Allen: Semi-annual language innovation workshops, independent of day-to-day standards process?
Proposals on people's radar for Harmony:
Allen:
private names, enhanced object literals, [[Class]], Object.hash, math enhancements, array protocols, spec MOP vs. proxy MOP, numerics, operator overloading
MarkM:
classes and traits, soft fields, generative module expressibility (deferred compile time of modules parametrized by modules), quasis, better random, concurrency, Function.prototype.toString, simple maps and sets
Dave and Brendan:
records and tuples, expression forms, catch guards, generators, array comprehensions, generator expressions, yield*, modules (also includes global object reform), module loaders, pattern matching, conditional expressions, enumeration, binary data, pragmas, paren-free, versioning
Waldemar:
guards/types, zero-inheritance classes
Erik:
#-functions
Doug:
modulo operator
Luke:
binary data, Function.create
Tom:
extended Object methods
Alex:
promises/deferred/futures, events
Discussion of Brendan's blog post:
case A: function f(z) { return #{x: 42, m: #(a) {a*a}}; }
case B: function f(z) { return #{x: 42, m: #(a) {a*a}, m2: #() {z}}; }
r1 = f(1)
r2 = f(2)

r1 === r2 in case A but maybe not in case B
r1 egal r2?
MarkM: Function comparison should not reveal captured values.
NaN/zero equivalence debate again.
Tuples are immutable and don't contain holes. Waldemar: What's a convenient way of replacing the nth element of a tuple (to make a new tuple)? Brendan:

t = #[0, 1, 2]
u = #[t[0], 5, t[2]] === #[0, 5, 2]
v = #[...t[0:2], 6] === #[0, 1, 6]
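The #[] tuple syntax sketched here was never standardized in this form, but the operation Waldemar asks about can be approximated with today's arrays. This is an editorial sketch only: replaceAt is a made-up helper, and Object.freeze stands in for real tuple immutability (it is shallow, unlike a true tuple).

```javascript
// Sketch: "replace the nth element, producing a new tuple", using frozen
// arrays as a stand-in for the #[] tuples discussed in the notes.
function replaceAt(tuple, index, value) {
  const copy = tuple.slice();   // shallow copy of the original
  copy[index] = value;
  return Object.freeze(copy);   // new "tuple"; the original is untouched
}

const t = Object.freeze([0, 1, 2]);
const u = replaceAt(t, 1, 5);

console.log(u.join(",")); // "0,5,2"
console.log(t.join(",")); // "0,1,2" -- unchanged
```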
How shallow is #? Do we need the #'s for the getter and setter below?

#{get #p() {...}, set #p(q) {...}, oldSchool: function() ..., #newSchool() {"Yay!"}}
Declaration syntax:

const #foo(x) {x}  // Redundant const?
let #foo(x) {x}
var #bar(y) {y.z = #() {bar}}  // Illegal?
#() {}   // expression
#f() {}  // expression or declaration

MarkM and Waldemar would prefer #f() to default to const-bind rather than let-bind. General feeling in that direction. Implicit block-hoisting for mutual recursion?
Lexical «this»: If #-functions are «this»-transparent, then can't use them as methods. Any way to override?

Dave: #(this, x, y) {...}
Allen: #(this: x, y) {...}

Consensus semantics, regardless of syntax choice for #-functions' «this» parameter:
- If you don't mention «this» as a parameter then you always get «this» from the outer lexical scope.
- If you do mention it then you always get «this» from the call site, or undefined if it's not called using the . notation or an equivalent.
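In hindsight, the first bullet's lexical-«this» default is what ES6 arrow functions adopted (arrows never got the "mention this as a parameter" escape hatch from the second bullet, so this sketch illustrates only the default):

```javascript
var counter = {
  count: 0,
  increment: function () {
    // Arrow function: «this» comes from the enclosing lexical scope, so
    // bump sees the counter object even though it isn't called via «.».
    var bump = () => { this.count += 1; };
    bump();
    return this.count;
  }
};

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
```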
Where do #-functions inherit from? Do they have call and apply methods?
Discussion of how to refer to obsolete features in the new spec. We don't want to have to point to ES5.1 to describe things like «with». Also there can be shared heaps where you can call a legacy function that uses «with» and is stored in a variable in a module or other new feature.
Can we put «with» into Annex B?
MarkM: There is a cost to maintaining spec compatibility with «with» and global objects: Global environment records. Brendan: We have to do this anyway for DOM event handlers.
Upcoming TC39 meetings:

- Mar 22-24 (3 days for Harmony), at Google San Francisco
- May 24-26, at UCSC
- July 26-28 or 27-28 at Microsoft
- September 28-29 at Mozilla
- November at Apple
Thank you Doug and Yahoo for hosting!
On Jan 20, 2011, at 4:03 PM, Waldemar Horwat wrote:
Once again, here are my raw meeting notes.
Thanks for these -- invaluable as always.
Waldemar: To clarify, I'd very much want === to be an equivalence relation but don't think that we can. A couple of examples:
- switch (-0) {... case 0: ...}
Still an e.r. -- {0,-0} is just one equivalence class.
NaN is the hard case but it can be coped with.
- x !== x as an idiom for NaN testing
This is the killer for me. Do not want to change === and require all-paths runtime test coverage to migrate code into Harmony.
Where do #-functions inherit from? Do they have call and apply methods?
The idea we seemed to agree on was this analogy:
primitive string : String object :: #-function : Function object
On Jan 20, 2011, at 5:25 PM, Brendan Eich wrote:
Where do #-functions inherit from? Do they have call and apply methods?
The idea we seemed to agree on was this analogy:
primitive string : String object :: #-function : Function object
which implies #-functions delegate to Function.prototype, so apply and call work.
This seems to work. ES5 specs primitive wrapping as unobservable, and competitive engines don't implicitly wrap, they simply delegate from a primitive value to the relevant prototype (for the relevant global object!).
Primitives are here to stay; we have tried to get rid of them several times and we can't. #-functions, records, and tuples go the other way and complete the menagerie:
record : Object :: string : String (or number : Number, or boolean : Boolean)
tuple : Array :: "
#-function : Function :: "
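The delegation Brendan describes for primitives is observable today: a primitive string is not a String object, yet its method calls reach String.prototype with no observable wrapper.

```javascript
var s = "hello";
console.log(typeof s);            // "string" -- a primitive, not an object
console.log(s instanceof String); // false
console.log(s.toUpperCase());     // "HELLO" -- delegates to String.prototype
// Explicit wrapping shows where the delegation lands:
console.log(Object.getPrototypeOf(Object(s)) === String.prototype); // true
```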
On Thu, Jan 20, 2011 at 8:25 PM, Brendan Eich <brendan at mozilla.com> wrote:
This is the killer for me. Do not want to change === and require all-paths runtime test coverage to migrate code into Harmony.
You're bang-on about the end user impact of this change - I would not be able to migrate any ES5 code to Harmony without full-on testing (not just automated regression tests), meaning Harmony uptake would be slowed, particularly in environments like mine where it is most cost effective to pick a language version and use it across all platforms (we are heavily invested in ES, and not just on the browser).
Changing the semantics of an existing language feature smarts: I got bit pretty hard as a relatively new JS developer with JavaScript 1.2 and am still wincing. Of course, this is a much smaller change, but we have much more code nowadays... :)
On Fri, Jan 21, 2011 at 6:09 AM, Wes Garland <wes at page.ca> wrote:
On Thu, Jan 20, 2011 at 8:25 PM, Brendan Eich <brendan at mozilla.com> wrote:
This is the killer for me. Do not want to change === and require all-paths runtime test coverage to migrate code into Harmony.
You're bang-on about the end user impact of this change - I would not be able to migrate any ES5 code to Harmony without full-on testing (not just automated regression tests), meaning Harmony uptake would be slowed, particularly in environments like mine where it is most cost effective to pick a language version and use it across all platforms (we are heavily invested in ES, and not just on the browser).
No argument. In further discussion at the meeting, we also jointly concluded not to change === and to stick with a separate new egal operation, spelling and syntax to be decided.
My rough notes from today's meeting.
Waldemar
DaveH: One JavaScript (Versioning debate) It's inevitable (and necessary) that ES6 will have some breaking changes around the edges. How to opt-in? DaveH's versioning proposal: A module block or file include is the only in-language ES6 opt-in. Modules can appear only at the top level or inside another module. This avoids the problem of a "use strict" nested in a function inside a with.
Brendan:

var obj = get_random_obj();
var x, prop = 42;
with (obj) {
  x = function() { "use strict"; return prop; }();
}
Differences between the de facto (traditional) semantics and ES6 (i.e. semantic changes instead of mere syntax additions):
- ES5 strict changes
- static scoping (really means static checking of variable existence; see below)
- block-local functions
- block-local const declarations
- tail calls (yikes - it's a breaking change due to Function.caller)
- typeof null
- completion reform
- let

DaveH: Thinks we may be able to get away with enabling completion reform and let for all code.
Allen: Would a class be allowed outside a module? DaveH: Yes, but it would not support static scoping, block-local functions, etc. MarkM: Classes should not be allowed in traditional semantics. If you want a class, you need a "use strict" or be inside a module.
Waldemar: Given that you can't split a module into multiple script blocks, making modules be the only in-language opt-in is untenable. Programmers shouldn't be forced to use the broken scope/local-function/etc. semantics just to split a script into multiple script blocks. DaveH: Use out-of-language opt-ins.
MarkM: Wants a two-way fork (non-strict vs. strict) instead of a three-way fork (non-strict vs. strict vs. ES6-in-module). MarkM: Does a failed assignment inside a non-strict module throw?
DaveH: Most of the differences between strict and non-strict are code bugs. Luke, MarkM: No. Their developer colleague experience shows that there are plenty of changes to non-buggy code that need to be made to make it work under strict mode.
Allen, Waldemar: It's important to support the use case of someone writing global code using the clean new semantics and not having to learn about the obsolete traditional compatibility semantics.
Can "use strict" be the ES6 opt-in?
What DaveH meant by static scoping (i.e. static checking): What happens when there's a free variable in a function?
Nonstrict ES5 code:
- Reference error if variable doesn't exist at the time it is read; creates a global if doesn't exist at the time it is written.
Strict ES5 code:
- Reference error if variable doesn't exist at the time it is read or written.
Static checking goal for ES6 modules:
- Compilation error if variable doesn't exist at the time module is compiled.
- Reference error if variable doesn't exist at the time it is read or written. (It's possible to get the latter error and yet have the module compile successfully if someone deletes a global variable outside the module between when the module is compiled and when the variable is read or written at run time.)
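The ES5 behaviors being contrasted are runnable today. A minimal sketch (missingGlobal and missingGlobal2 are deliberately undeclared names chosen for illustration):

```javascript
// Reading an undeclared variable: ReferenceError in both modes.
function readMissing() {
  "use strict";
  try { return void missingGlobal; }          // throws: unresolvable read
  catch (e) { return e instanceof ReferenceError; }
}

// Writing an undeclared variable: ReferenceError under strict mode
// (non-strict code would instead silently create a global).
function writeMissingStrict() {
  "use strict";
  try { missingGlobal2 = 42; return false; }  // throws under "use strict"
  catch (e) { return e instanceof ReferenceError; }
}

console.log(readMissing());        // true
console.log(writeMissingStrict()); // true
```

The ES6 module goal moves the first kind of failure earlier still, from runtime to compile time.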
Discussion of whether it is important to support non-statically-checked binding in modules.
MarkM: typeof is used to test for the existence of globals. If the test succeeds, they then proceed to use the global directly. This would then be rejected by static checks.
DaveH: Doesn't see a way to do static checking with strict code (due to, for example, the "with" case illustrated by Brendan earlier).
MarkM: The cost of having three modes is higher than the cost of not supporting static checking early errors.
DaveH's new proposal: Other than static checking, attach the incompatible ES6 semantics to the strict mode opt-in. These semantics are upwards-compatible with ES5 strict mode (but not ES5 non-strict mode). The semantics inside a module would be the strict semantics plus static checking.
Do we want other new ES6 syntax normatively backported to work in non-strict mode? Waldemar, MarkM: Not really. This requires everyone to be a language lawyer because it's introducing a very subtle new mode: ES6 with nonstrict scoping/const/local semantics. If an implementation wants to backport, the existing Chapter 16 exemption already allows it. DaveH, Brendan: Yes. People won't write "use strict". Don't want to punish people for not opting in. Alex: Split the middle. Backport new ES6 features to non-strict mode where it makes sense.
Waldemar, DaveH: Want to make it as easy as possible to make a strict opt-in for an entire page instead of littering opt-ins inside each script.
Allen: Backporting increases spec complexity and users' mental tax. The main costs are in making lots of divergent scoping legacy issues possible.
Doug: Modules are sufficient as an opt-in, without the need for a "use strict" opt-in. Waldemar: No. Having multiple scripts on a page would require each one to create its own module, and then issues arise when they want to talk to each other -- you'd need to explicitly export const bindings, etc. MarkM: No. The typeof test won't work. Also, this would make it impossible to write code that's backwards compatible with older browsers that don't implement modules.
Which ES6 features can be backported into non-strict mode? (blank: no significant issues; ?: possible issues; x: semantics conflict with de facto web)
? let (syntax issues)
x const (divergent semantics)
x function in block (divergent semantics)
? destructuring (syntactic conflict with array lookup)
parameter default values
rest parameters
spread
x tail calls (because of Function.caller)
direct proxies
simple maps and sets
weak maps
is / isnt (egal)
iterators
? generators (interaction with scoping issues and yield keyword)
generator expressions
comprehensions
private names
quasi-literals
pragmas (controversial)
? completion reform (Brendan: might be able to get away with it without breaking the web, but we don't know yet)
x typeof null (Erik: It breaks the web)
class
super
n/a modules
methods in object literals
<|
({[computed_name]: value}) (MarkM: what should happen with duplicate names in nonstrict mode?)
Brendan: Kill typeof null. Replace it with Object.isObject?
How are the names introduced by generators and classes scoped in nonstrict mode?
MarkM: Example of code that might accidentally work one way in all current browsers but not in ES6 (another foo is also predefined in an outer scope):

if (...) {
  function foo() {...}
} else {
  function foo() {...}
}
foo();

foo will call one of the two foo's defined above in ES5 non-strict, although it's implementation-dependent which: on some browsers the first definition wins; on some the last definition wins; on some the one corresponding to the true branch of the if wins. In ES6 strict it will call the outer-scope foo.
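MarkM's example can be reduced to a runnable strict-mode sketch. This is a hedged illustration: the condition is made concrete as `true`, and the function names are chosen here for clarity.

```javascript
"use strict";
// Simplified version of MarkM's example under ES6 strict semantics.
// In ES5 non-strict, which foo() gets called was implementation-dependent.
function which() {
  function foo() { return "outer"; }   // function-scope foo
  if (true) {
    function foo() { return "inner"; } // block-scoped in strict ES6
  }
  return foo(); // after the block, resolves to the outer foo
}
console.log(which()); // "outer" in an ES6 strict engine
```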
Discussion about whether we can move the web to the new local function and const semantics even in nonstrict ES5 mode. Also discussed an alternative of whether we can require nonstrict mode to support limited usage scenarios such as having the following work: if (...) { function foo() {...} ... foo(); } Waldemar: This doesn't work in current ES5 non-strict if the if is inside a with statement because an existing implementation might hoist foo across the with and then foo() could refer to a field of the with'd object. This also might not work in the presence of other definitions of foo inside the same function.
Not clear if specifying such limited cases in the normative spec is useful.
Current tentative decision is to support let, const, and local functions in nonstrict ES5 in the same way as in strict ES6. Fallback to either specifying limited cases or doing the ES5 nonstrict status quo (i.e. syntax error + Clause 16) if experiments show this to not be viable. We won't resolve this discussion without running some experiments.
Tail calls: Luke: Remove them altogether. Waldemar: If we support them only in strict mode, the failure mode is someone copying-and-pasting code from a module to the global level and silently losing tail recursion, leading to very confusing behavior. Debated. Waldemar: We can require tail calls in non-strict mode by taking advantage of the fact that Function.caller only remembers the last invocation of each function. Thus we can do an amortized algorithm analysis that allocates the cost of storage of one stack frame link at the time we create the function. This makes it possible to implement tail calls in non-strict mode while supporting Function.caller. Tentative decision is to support tail calls in strict mode only.
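As a concrete, hedged illustration of what is at stake: a call in tail position like the one below is eligible for the proposed strict-mode-only guarantee; pasted into non-strict code it would still run, but without constant-stack behavior.

```javascript
"use strict";
// A self-recursive loop written with a tail call. Under the tentative
// decision, the constant-stack guarantee applies only in strict mode.
function countDown(n) {
  if (n === 0) return "done";
  return countDown(n - 1); // tail position: the call's result is returned directly
}
countDown(10); // works anywhere; with proper tail calls, even countDown(1e6) would not overflow
```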
Resolved: Named generators behave in non-strict mode the same as in strict mode. "yield" is a contextual keyword in non-strict generators.
How to resolve let[x] = 5 in nonstrict mode? Will need to do experiments. Also, do we require no-line-terminator between "let" and the identifier? Probably not, because if we did, we'd get this annoying hazard in non-strict mode:

{
  let x = 7;
  if (...) {
    let
    x = 5;
  }
  // Now x is 5: the second "let" is just a useless identifier expression with an
  // inserted semicolon, followed by an assignment to the existing x!
}
Gavin: Module syntax hazard if we have no-line-terminator between "module" and the identifier: module { ... } gets interpreted as a useless expression (the identifier "module") followed by a block.
Completion value reform: Let's experiment.
Octal constants: Useful as arguments to chmod. Proposal for 0o123 (as well as 0b01110). MarkM: Concerned about 0O123. Waldemar: Nothing unusual here. We've lived with 36l (meaning 36 long instead of 361) in Java and C++ for a long time. Alternative for octal: 8r123 (but then we'd also want 16r123, 2r0101, and maybe more). Decided to allow 0o and 0b. Unresolved whether to allow 0O and 0B. Persistent weak feelings on both sides on the upper case forms.
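For reference, a sketch of the agreed literal forms and their values (these later shipped in ES6):

```javascript
// Octal and binary literals as decided (0o / 0b prefixes):
var oct = 0o123;   // 1*64 + 2*8 + 3 = 83
var bin = 0b01110; // 14
var hex = 0x123;   // hex, already in the language: 291
```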
Use __proto__ in object literals to do a put (assuming that a __proto__ getter/setter was created in Object.prototype) instead of a defineProperty? All modes or only nonstrict mode? Allen: Make such use of __proto__ a synonym for <|. If a <| is already present, it's an error. DaveH: __proto__ is ugly. Don't want it in the language forever. Waldemar: What about indirect [] expressions that evaluate to "__proto__"? In Firefox they evaluate to accesses that climb the prototype chain and usually reach a magic getter/setter-that-isn't-a-getter-setter named __proto__ that sits on Object.prototype. MarkM: Likes the ability to delete the __proto__ setter and thereby prevent anything in the frame from mutating prototypes. Waldemar: How do you guard against cross-frame prototype mutations? DaveH: __proto__ is in the "omg, what were we thinking" category. Waldemar: Opposed to making __proto__ mutate prototypes other than at object construction. This is getting insanely complex. Unresolved.
Waldemar Horwat <mailto:waldemar at google.com> January 18, 2012 5:27 PM My rough notes from today's meeting.
Thanks yet again for these.
Use __proto__ in object literals to do a put (assuming that a __proto__ getter/setter was created in Object.prototype) instead of a defineProperty? All modes or only nonstrict mode? Allen: Make such use of __proto__ a synonym for <|. If a <| is already present, it's an error. DaveH: __proto__ is ugly. Don't want it in the language forever. Waldemar: What about indirect [] expressions that evaluate to "__proto__"? In Firefox they evaluate to accesses that climb the prototype chain and usually reach a magic getter/setter-that-isn't-a-getter-setter named __proto__ that sits on Object.prototype. MarkM: Likes the ability to delete the __proto__ setter and thereby prevent anything in the frame from mutating prototypes. Waldemar: How do you guard against cross-frame prototype mutations? DaveH: __proto__ is in the "omg, what were we thinking" category. Waldemar: Opposed to making __proto__ mutate prototypes other than at object construction. This is getting insanely complex. Unresolved.
One point not recorded here: given MarkM's argument for Object.prototype.__proto__ as the one property to delete to remove this old beast, what kind of property does that appear to be to ES5's Object.getOwnPropertyDescriptor? Arguments pro and con for data property (as it appears to be in SpiderMonkey) vs. accessor (JSC intended to move to that from its hardcoded magic-id handling in Get and Put code).
Argument for data property facade: an accessor allows extracting the setter from the property descriptor, call it stolen__proto__setter. Then if one makes an object with a bespoke proto-object but not delegating to Object.prototype:
var o = { __proto__: Object.create(null) };
an attacker could mutate o's [[Prototype]] via stolen__proto__setter.call(o, evil_proto). This is not possible if Object.prototype.__proto__ reflects as a data property, because o's two-level prototype chain is cut off from Object.prototype, so no further means of updating [[Prototype]] is available.
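The attack can be run directly in any engine where `__proto__` reflects as a configurable accessor (as engines eventually standardized); the variable names below follow the text:

```javascript
// Extract the __proto__ setter from its property descriptor...
var desc = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__');
var stolen__proto__setter = desc.set; // only possible if it reflects as an accessor

// ...build an object whose prototype chain is cut off from Object.prototype...
var o = { __proto__: Object.create(null) };

// ...and mutate its [[Prototype]] anyway, via the stolen setter:
stolen__proto__setter.call(o, { evil: true });
console.log(o.evil); // true; a data-property facade would prevent this
```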
On 19/01/2012 02:27, Waldemar Horwat wrote:
Brendan: Kill typeof null. Replace it with Object.isObject?
What would be the semantics of this?
Object.isObject(null);         // false
Object.isObject({});           // true, so far so good :-)
Object.isObject(function(){}); // ?
I'd like to advocate "true" for the last case. For now, the best way to test if something is of type Object (as defined in ES5.1 - 8.6, so including function) is to do "o === Object(o)" (an alternative being "o !== null && (typeof o === 'object' || typeof o === 'function')", which is rather long and I have not seen much) which is a bit hacky and not straightforward to read for those who are not familiar with this trick. If an Object.isObject is introduced, I'd be interested in seeing it covering the 8.6 definition. Or maybe introduce another function for this?
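The `o === Object(o)` trick reads more clearly wrapped in a helper (`isObjectLike` is a hypothetical name for illustration, not a proposed API):

```javascript
// true for anything of spec type Object (ES5.1 section 8.6, including functions),
// false for primitives, null, and undefined.
function isObjectLike(o) {
  return o === Object(o); // Object(primitive) boxes, so the identity check fails
}
isObjectLike(null);           // false
isObjectLike({});             // true
isObjectLike(function () {}); // true
isObjectLike("str");          // false
```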
Use __proto__ in object literals to do a put (assuming that a __proto__ getter/setter was created in Object.prototype) instead of a defineProperty? All modes or only nonstrict mode? Allen: Make such use of __proto__ a synonym for <|. If a <| is already present, it's an error. DaveH: __proto__ is ugly. Don't want it in the language forever. Waldemar: What about indirect [] expressions that evaluate to "__proto__"? In Firefox they evaluate to accesses that climb the prototype chain and usually reach a magic getter/setter-that-isn't-a-getter-setter named __proto__ that sits on Object.prototype. MarkM: Likes the ability to delete the __proto__ setter and thereby prevent anything in the frame from mutating prototypes. Waldemar: How do you guard against cross-frame prototype mutations?
With a bit of luck, this is not in use on the web now. One idea would be that no __proto__ is defined on otherFrame.Object.prototype, and the frame would need to negotiate with its parent to get the __proto__-setting capability. This may break the web if there is currently a website that opens iframes and relies on __proto__.
DaveH: __proto__ is in the "omg, what were we thinking" category.
Seriously! :-)
On 19/01/2012 06:44, Brendan Eich wrote:
Use __proto__ in object literals to do a put (assuming that a __proto__ getter/setter was created in Object.prototype) instead of a defineProperty? All modes or only nonstrict mode? Allen: Make such use of __proto__ a synonym for <|. If a <| is already present, it's an error. DaveH: __proto__ is ugly. Don't want it in the language forever. Waldemar: What about indirect [] expressions that evaluate to "__proto__"? In Firefox they evaluate to accesses that climb the prototype chain and usually reach a magic getter/setter-that-isn't-a-getter-setter named __proto__ that sits on Object.prototype. MarkM: Likes the ability to delete the __proto__ setter and thereby prevent anything in the frame from mutating prototypes. Waldemar: How do you guard against cross-frame prototype mutations? DaveH: __proto__ is in the "omg, what were we thinking" category. Waldemar: Opposed to making __proto__ mutate prototypes other than at object construction. This is getting insanely complex. Unresolved.
One point not recorded here: given MarkM's argument for Object.prototype.__proto__ as the one property to delete to remove this old beast, what kind of property does that appear to be to ES5's Object.getOwnPropertyDescriptor? Arguments pro and con for data property (as it appears to be in SpiderMonkey) vs. accessor (JSC intended to move to that from its hardcoded magic-id handling in Get and Put code).
Argument for data property facade: an accessor allows extracting the setter from the property descriptor, call it stolen__proto__setter. Then if one makes an object with a bespoke proto-object but not delegating to Object.prototype:
var o = { __proto__: Object.create(null) };
an attacker could mutate o's [[Prototype]] via stolen__proto__setter.call(o, evil_proto). This is not possible if Object.prototype.__proto__ reflects as a data property, because o's two-level prototype chain is cut off from Object.prototype, so no further means of updating [[Prototype]] is available.
Every time I've been thinking of an issue like this, the solution I've found was "whoever runs first wins". Assuming __proto__ is an accessor on Object.prototype: if trusted code runs first, it can protect itself by removing the setter and making the property non-configurable. If an attacker runs first... you're screwed, as you demonstrated.
Even in the data property case, if an attacker runs first, she can probably change quite a lot of built-in prototypes, change built-in properties (of any object she has access to) to non-configurable accessors, and add loggers all over the place, returning evil values from function calls. I have a script [1] which replaces every function with a function that is semantically equivalent but logs "this", the arguments, and the return value. If an attacker runs this before any other script (before initSES.js, for instance :-°) but adds something more harmful than loggers, she can really do nasty stuff.
It seems that the threat may be a bit smaller if __proto__ is a data property, but I'm not sure it's significantly smaller than all the things an attacker who runs first can already do.
If you run first, __proto__ being an accessor or a data property makes no difference: you can protect yourself in either case. The accessor has the advantage of fine-grained control over who can change what. Specifically, you can bind the __proto__ setter and share it with someone so that that party can change the prototype of a given object (or set of objects) you've chosen. A data property is more all-or-nothing: either everyone can change the prototype of every object inheriting from Object.prototype, or no one can change the prototype of any object.
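The fine-grained control described above can be sketched as follows, assuming `__proto__` reflects as an accessor (variable names are illustrative):

```javascript
// Capture the setter once, then hand out a capability bound to one object.
var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, '__proto__').set;
var target = {};
var setTargetProto = protoSetter.bind(target); // can only re-prototype `target`

// The holder of setTargetProto can mutate target's [[Prototype]]...
setTargetProto({ granted: true });
console.log(target.granted); // true
// ...but has no authority over any other object's prototype.
```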
The question that remains is "how can you make sure your trusted code runs first?", which, I think, goes beyond the scope of ECMAScript and should be considered in each context (browser, node.js, etc.)
For the browser, I can't think of a good solution that would be backward compatible and efficient. Suggestions welcome.
David
[1] DavidBruant/JSTraversers (not really production ready; it makes some browsers crash or hang, because they don't seem to appreciate their DOM builtins being traversed)
On 19/01/2012 02:27, Waldemar Horwat wrote:
Waldemar: Opposed to making __proto__ mutate prototypes other than at object construction. This is getting insanely complex.
Just found [1] a minute ago. At line 50, __proto__ is used. Here, the notion of "object construction" is subtle (which is probably one of the cases considered in saying "This is getting insanely complex"). It has to be noted that this is node.js code, which runs in an ES5-capable environment.

In this particular case, since no Runner is created in the file, a standard equivalent to "Runner.prototype.__proto__ = EventEmitter.prototype;" could be:

"Runner.prototype = Object.create(EventEmitter.prototype);"
David
Waldemar Horwat wrote:
Which ES6 features can be backported into non-strict mode? (blank: no significant issues; ?: possible issues; x: semantics conflict with de facto web)
? let (syntax issues)
x const (divergent semantics)
x function in block (divergent semantics)
? destructuring (syntactic conflict with array lookup)
parameter default values
rest parameters
spread
x tail calls (because of Function.caller)
direct proxies
simple maps and sets
weak maps
is / isnt (egal)
iterators
? generators (interaction with scoping issues and yield keyword)
generator expressions
comprehensions
private names
quasi-literals
pragmas (controversial)
? completion reform (Brendan: might be able to get away with it without breaking the web, but we don't know yet)
x typeof null (Erik: It breaks the web)
class
super
n/a modules
methods in object literals
<|
({[computed_name]: value}) (MarkM: what should happen with duplicate names in nonstrict mode?)
What about obj.{ ... } literal extension? It is not mentioned, and afaict is unproblematic, too.
Herby Vojčík <mailto:herby at mailbox.sk> January 19, 2012 6:32 AM
What about obj.{ ... } literal extension? It is not mentioned, and afaict is unproblematic, too.
Thanks, we did miss that one -- it was among the object literal extensions not at the top level of harmony:proposals.
David Bruant <mailto:bruant.d at gmail.com> January 19, 2012 1:15 AM On 19/01/2012 02:27, Waldemar Horwat wrote:
Brendan: Kill typeof null. Replace it with Object.isObject? What would be the semantics of this?
It was not obvious but the precedent stems from the strawman that led to my proposal to change typeof null:
doku.php?id=strawman:object_isobject&rev=1295471005
This week we considered the draft spec:
Object.isObject = function isObject(value) {
  return typeof value === 'object' && value !== null;
};
to be deficient because a function is also an object, so one might rather have
Object.isObject = function isObject(value) {
  return (typeof value === 'object' && value !== null) || typeof value === 'function';
};
Object.isObject(null);         // false
Object.isObject({});           // true, so far so good :-)
Object.isObject(function(){}); // ?
I'd like to advocate "true" for the last case. For now, the best way to test if something is of type Object (as defined in ES5.1 - 8.6, so including function) is to do "o === Object(o)" (an alternative being "o !== null && (typeof o === 'object' || typeof o === 'function')", which is rather long and I have not seen much) which is a bit hacky and not straightforward to read for those who are not familiar with this trick. If an Object.isObject is introduced, I'd be interested in seeing it covering the 8.6 definition. Or maybe introduce another function for this?
That came up too: Object.type(x) would be the new typeof. But it will take a while to get adoption, it's not built-in so monkey-patchable etc.
Use __proto__ in object literals to do a put (assuming that a __proto__ getter/setter was created in Object.prototype) instead of a defineProperty? All modes or only nonstrict mode? Allen: Make such use of __proto__ a synonym for <|. If a <| is already present, it's an error. DaveH: __proto__ is ugly. Don't want it in the language forever. Waldemar: What about indirect [] expressions that evaluate to "__proto__"? In Firefox they evaluate to accesses that climb the prototype chain and usually reach a magic getter/setter-that-isn't-a-getter-setter named __proto__ that sits on Object.prototype. MarkM: Likes the ability to delete the __proto__ setter and thereby prevent anything in the frame from mutating prototypes. Waldemar: How do you guard against cross-frame prototype mutations? With a bit of luck, this is not in use on the web now. One idea would be that no __proto__ is defined on otherFrame.Object.prototype, and the frame would need to negotiate with its parent to get the __proto__-setting capability. This may break the web if there is currently a website that opens iframes and relies on __proto__.
The threat model does not necessarily involve only cross-frame/window attacks.
David Bruant <mailto:bruant.d at gmail.com> January 19, 2012 1:43 AM
Every time I've been thinking of an issue like this, the solution I've found was "whoever runs first wins".
This is not relevant to the example I showed. We have a de facto standard with SpiderMonkey that protects an object from having its [[Prototype]] changed once it has cut itself off from Object.prototype. Reflecting __proto__ as an accessor breaks this guarantee.
Does this matter? Clearly, it does cross-frame in the web embedding, and perhaps there is a narrower solution for that case. But we should consider carefully the full impact.
for all WeakMap shims and examples I have seen this to guard the key as object is basically:
Object.isObject = function isObject(value) {
  return Object(value) === value;
};
why such difference with indeed function ambiguity with your first example?
Agreed on Object.type since it's easy to monkey patch while typeof is already causing my code to look like this
typeof obj != "null" && typeof obj == "object" && !!obj
which looks a bit pleonastic (since you like the word) for code that would like to run cross-milestone and understand whether the object is an object and not a function ... however, now that I wrote it, I understand the concerns about Object.type and the function case ...
br
Andrea Giammarchi <mailto:andrea.giammarchi at gmail.com> January 19, 2012 4:10 PM for all WeakMap shims and examples I have seen this to guard the key as object is basically:
Object.isObject = function isObject(value) {
  return Object(value) === value;
};
why such difference with indeed function ambiguity with your first example?
We agreed not to treat functions as non-"isObject" objects.
Agreed on Object.type since it's easy to monkey patch while typeof is already causing my code to look like this
typeof obj != "null" && typeof obj == "object" && !!obj
V8-proofing shows this, yeah. It's a good warning sign in addition to the sheer runtime incompatibility of changing typeof null to be "null".
So Object.type is the thing to draft. Probably it should return typeof-like strings with the "null" fix and only that. I do not believe it should start returning "array" for Array instances, or take other experimental risks. But we should discuss.
give typeof its role and leave Array.isArray, Object.isObject, Function.isFunction ... Whatever.isWhatever; it's easy to implement the logic :-)
For classes it's fine to me to keep using the {}.toString.call(something).slice(8, -1) trick ( Rhino engine as exception but ... they can align with "[object " + [[Class]].name + "]" here )
from what I have seen, typeof has always mainly been used to check for an object-like reference, with the boring null exception ... I don't think developers will complain about a missing "array", also because I would expect typeof new Number to be "number" at that point, but this is misleading as long as primitives exist
"function" is a must-have because developers want to know about [[Call]], so here comes the ambiguity problem between typeof and Object.isObject, but once these global public static methods are there I would keep typeof simple/backward compatible, as it has always been
br
OT: also with direct proxy Function.prototype could have a get(key) able to /^is([A-Z][a-zA-Z$_]*)$/.test(key) return the right check for the generic global/userDefined constructor through {}.toString.call(value).slice(8, -1) === RegExp.$1 check ... never mind, just random thoughts :-)
So Object.type is the thing to draft. Probably it should return typeof-like strings with the "null" fix and only that. I do not believe it should start returning "array" for Array instances, or take other experimental risks. But we should discuss.
FWIW: I’ve written down all typeof use cases that I could come up with: www.2ality.com/2012/01/typeof-use-cases.html
A way of returning [[class]] would be great. Any reason why such a function couldn’t be merged with Object.type()? My attempt at implementing such a merged function (along with a detailed rationale – I prefer returning "Array" to returning "array") is documented here: www.2ality.com/2011/11/improving-typeof.html
+1 for the returned class ... also if we distinguish between "array" and "Array" then the new Boolean/Number/String case can be covered via "Number", if object, rather than "number", which is cool.
The only weird thing would be "object" rather than "Object", as if its [[class]] is unknown
br
On 20/01/2012 00:54, Brendan Eich wrote:
David Bruant <mailto:bruant.d at gmail.com> January 19, 2012 1:15 AM On 19/01/2012 02:27, Waldemar Horwat wrote:
Brendan: Kill typeof null. Replace it with Object.isObject? What would be the semantics of this?
It was not obvious but the precedent stems from the strawman that led to my proposal to change typeof null:
doku.php?id=strawman:object_isobject&rev=1295471005
This week we considered the draft spec:
Object.isObject = function isObject(value) {
  return typeof value === 'object' && value !== null;
};
to be deficient because a function is also an object, so one might rather have
Object.isObject = function isObject(value) {
  return (typeof value === 'object' && value !== null) || typeof value === 'function';
};
That would be perfect (for me at least).
Object.isObject(null);         // false
Object.isObject({});           // true, so far so good :-)
Object.isObject(function(){}); // ?
I'd like to advocate "true" for the last case. For now, the best way to test if something is of type Object (as defined in ES5.1 - 8.6, so including function) is to do "o === Object(o)" (an alternative being "o !== null && (typeof o === 'object' || typeof o === 'function')", which is rather long and I have not seen much) which is a bit hacky and not straightforward to read for those who are not familiar with this trick. If an Object.isObject is introduced, I'd be interested in seeing it covering the 8.6 definition. Or maybe introduce another function for this?
That came up too: Object.type(x) would be the new typeof. But it will take a while to get adoption, it's not built-in so monkey-patchable etc.
If Object.isObject has the second definition you showed, I don't think an Object.type will be necessary, because every type will be testable in one "instruction". Strings, numbers, booleans have typeof, undefined and null are unique values (testable with ===) and Object.isObject will test for ES5.1 - 8.6 definition of objects. It won't be consistent as an Object.type method would be, but as far as I'm concerned, I don't care.
On 20/01/2012 00:57, Brendan Eich wrote:
David Bruant <mailto:bruant.d at gmail.com> January 19, 2012 1:43 AM
Every time I've been thinking of an issue like this, the solution I've found was "whoever runs first wins". This is not relevant to the example I showed.
All in all, regardless of data or accessor, or the guarantee you put on which objects can or cannot have their prototype changed, it seems that at the very least security relies (at least partly) on __proto__ being deletable (configurable). And deleting it is something the first code that runs gets to decide. If it's trusted code, it can delete it; if it's an attacker, it can make it non-configurable.
We have a de-facto standard with SpiderMonkey
I'm not sure I agree with the idea of a "de-facto standard with SpiderMonkey". In Chrome (V8):
var o = Object.create(null);
console.log(o.a); // undefined
o.__proto__ = {a:1};
console.log(o.a); // 1
If something is de facto, it's SpiderMonkey's implementation, not a standard. And as I showed in an earlier message, there is code out there running in node.js (V8) using __proto__, hence making V8's version also worth not breaking.
that protects an object from having its [[Prototype]] changed once it has cut itself off from Object.prototype. Reflecting __proto__ as an accessor breaks this guarantee.
In the strawman [1], the security of an object is not bound to what is or is not in the prototype chain, but rather to whether or not the object is extensible. Why wouldn't that be enough?
Does this matter? Clearly, it does cross-frame in the web embedding
I'm not sure I understand. Can you provide an example please? Specifically, an example that would show a threat for the accessor property case, but not the data property case.
Regarding cross-frame issues, I can't think of how one case is worse than the other. Data or accessor, a new frame defines a new Object, a new Object.prototype, and a new Object.prototype.__proto__, which has to be deleted by the parent. If trusted code runs first in the parent, it can make sure it is the only entity allowed to dynamically create frames and the first to access them (to delete Object.prototype.__proto__). If an attacker runs first in the parent, it can make sure to be the first to access frame.Object.prototype and do whatever it wants with its __proto__ property.
David
On Wed, 2012-01-18 at 17:27 -0800, Waldemar Horwat wrote:
My rough notes from today's meeting.
Thanks very much for these notes.
DaveH's new proposal: Other than static checking, attach the incompatible ES6 semantics to the strict mode opt-in. These semantics are upwards-compatible with ES5 strict mode (but not ES5 non-strict mode). The semantics inside a module would be the strict semantics plus static checking.
I had a hard time following the conclusions here. There does seem to be some uncertainty regarding ES6 features in non-strict code, but in strict mode and in modules, is this (DaveH's new proposal) the consensus?
Thanks,
Andy
On Thu, Jan 19, 2012 at 7:53 PM, Andrea Giammarchi < andrea.giammarchi at gmail.com> wrote:
give typeof its role and leave Array.isArray, Object.isObject, Function.isFunction ... Whaveter.isWhatever it's easy to implement logic :-)
When I first read the meeting notes and saw the reference to Object.isObject(), I also immediately thought of these as complementary to the existing Array.isArray(), and it matches what developers want and are using:
jQuery [1]:
jQuery.isArray() jQuery.isFunction() jQuery.isNumeric() jQuery.isPlainObject() (i.e. an object created with {} or new Object())
underscore.js [2]:
_.isArray() _.isBoolean() _.isDate() _.isFunction() _.isNaN() _.isNull() _.isNumber() _.isRegExp() _.isString() _.isUndefined()
Mootools [3]:
typeOf()
.. which is similar to Object.type()
YUI [4]:
Y.Lang.isArray() Y.Lang.isBoolean() Y.Lang.isDate() Y.Lang.isFunction() Y.Lang.isNull() Y.Lang.isNumber() Y.Lang.isObject() Y.Lang.isString() Y.Lang.isUndefined()
And a "type" function: Y.Lang.type()
Ext/Sencha:
Ext.isArray() Ext.isBoolean() Ext.isDate() Ext.isDefined() Ext.isFunction() Ext.isNumber() Ext.isObject() Ext.isString()
[1] trends.builtwith.com/javascript/jQuery [2] trends.builtwith.com/javascript/Underscore.js [3] trends.builtwith.com/javascript/MooTools [4] trends.builtwith.com/javascript/YUI3
Andy Wingo <mailto:wingo at igalia.com> January 20, 2012 7:43 AM I had a hard time following the conclusions here. There does seem to be some uncertainty regarding ES6 features in non-strict code, but in strict mode and in modules, is this (DaveH's new proposal) the consensus? Thanks, Andy
I listed every harmony:proposals entry (eventually -- some eluded me) on the whiteboard and we worked through them, evaluating how they could work in non-strict code. That was the list, with ? or x next to any proposal with questions or overt (we thought) compatibility problems.
We eventually solved all the cases that survived (typeof null == "null" died). This means new ES6 features work in non-strict code. Great news!
We need some experiments to prove that those early x marks can be removed. For instance, const and function-in-block require experimentation by SpiderMonkey and JavaScriptCore in nightly builds to switch to the ES6 semantics in non-strict code and see what breaks.
David Bruant <mailto:bruant.d at gmail.com> January 20, 2012 12:51 AM On 20/01/2012 00:54, Brendan Eich wrote:
That came up too: Object.type(x) would be the new typeof. But it will take a while to get adoption, it's not built-in so monkey-patchable etc. If Object.isObject has the second definition you showed, I don't think an Object.type will be necessary, because every type will be testable in one "instruction". Strings, numbers, booleans have typeof, undefined and null are unique values (testable with ===) and Object.isObject will test for ES5.1 - 8.6 definition of objects. It won't be consistent as an Object.type method would be, but as far as I'm concerned, I don't care.
We want "consistency along the major dimensions" for new features, if we can. Having a bunch of isFoo predicates does not help switch on an enumerated type value. On the other hand, we can't make a categorical sum for all "types" or "classes" without a catch-all category within which further subtype tests will be required anyway.
I think given existing practice we might justify
Object.typeOf(x) - typeof with Object.typeOf(null) == "null". Object.classOf(x) - Object.prototype.toString.call(x).slice(8, -1) using the original value of O.p.toString.
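A polyfill-style sketch of the two helpers as described (names per the message above; neither is a shipped API):

```javascript
// Capture the original toString before user code can replace it.
var originalToString = Object.prototype.toString;

Object.typeOf = function (x) {
  // typeof, with the "null" fix and only that
  return x === null ? "null" : typeof x;
};

Object.classOf = function (x) {
  // "[object Foo]" -> "Foo"
  return originalToString.call(x).slice(8, -1);
};

Object.typeOf(null); // "null"
Object.classOf([]);  // "Array"
Object.classOf(/x/); // "RegExp"
```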
Comments?
Object.classOf(x) - Object.prototype.toString.call (x).slice(8, -1) using the original value of O.p.toString.
Comments?
Including the name of the module that the class comes from might be nice.
We do that for classes implemented in C in GPSEE modules today by being bad -- we modify JSClass::name after JS_InitClass() returns -- and it's helpful during debugging.
On Wed, Jan 18, 2012 at 17:27, Waldemar Horwat <waldemar at google.com> wrote:
Octal constants: Useful as arguments to chmod. Proposal for 0o123 (as well as 0b01110). MarkM: Concerned about 0O123. Waldemar: Nothing unusual here. We've lived with 36l (meaning 36 long instead of 361) in Java and C++ for a long time. Alternative for octal: 8r123 (but then we'd also want 16r123, 2r0101, and maybe more). Decided to allow 0o and 0b. Unresolved whether to allow 0O and 0B. Persistent weak feelings on both sides on the upper case forms.
On behalf of Node.js, thank you. This is the right call.
FWIW (ie, not much), I'm personally 100% ambivalent about 0O and 0B. 0O is less visually distinctive than 0o, and caps support isn't really necessary.
Nr### is not really necessary. Programming happens primarily in bases 2, 8, 10, and 16. (And 64, but that's mostly just for serializing.) If we have 0b, 0o, 0x, and the default base 10, then that's plenty.
Isaac Schlueter <mailto:i at izs.me> January 20, 2012 1:00 PM
On behalf of Node.js, thank you. This is the right call.
On behalf of old Unix people, thank us -- and you! :-)
FWIW (ie, not much), I'm personally 100% ambivalent about 0O and 0B. 0O is less visually distinctive than 0o, and caps support isn't really necessary.
See second day notes:
Revisited octal/binary constants. Waldemar: Note that we currently allow both upper and lower cases for a, b, c, d, e, f in hex literals, as well as e in exponents and x in hex literals. Breaking symmetry by forbidding upper case for b and o to avoid lookalikes would be "nanny language design". We can't forbid lookalikes anyway: { let y = 3; { let у = 7; return y; } } This is a valid ES6 strict function body that returns 3, of course. The second let defines a local variable named with a Cyrillic letter.
Decision: Allow upper case B and O to start binary or octal literals.
Nr### is not really necessary. Programming happens primarily in bases 2, 8, 10, and 16. (And 64, but that's mostly just for serializing.) If we have 0b, 0o, 0x, and the default base 10, then that's plenty.
Everyone agrees -- woohoo!
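With the decision above, all four spellings parse in ES6 engines, for example:

```javascript
// ES6 binary and octal literals; both cases were allowed per the decision.
console.log(0o123 === 83);   // octal 123
console.log(0O123 === 83);   // upper-case O form
console.log(0b01110 === 14); // binary 1110
console.log(0B01110 === 14); // upper-case B form
console.log(0o755);          // 493, the classic chmod permission bits
```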
Here are my rough notes for today's meeting.
Internationalization standard: Part of E262 or separate track? Pros and cons to each one, and either would be workable. There is a substantial area of interaction (ES5 locale methods, normalization, and such) between them that will need to be addressed regardless of which approach we take.
Lunch discussion over whether we want all locale-specific behavior (examples: date formatting) to be implementation-defined or whether we'd want to specify it for at least the major locales. Having everything implementation-defined makes testing a hassle and will result in different behavior on different platforms/browsers (as is happening today). On the other hand, specifying the results for a lot of locales is a lot of work.
Waldemar: Either would work. Personal preference is to make it part of E262 if it's small (in terms of number of pages of standard) or make it separate if it's large.
Agreed not to fast-track internationalization library (if it's a separate standard) for now. It's going to be evolving too quickly.
Allen: Update on ISO fast track and ES5.1. Applause.
TC39 requested a vote on ES5.1 at the upcoming summer GA.
Allen: There's a significant community of users who are used to the classical patterns of encapsulation (rather than using closures for all encapsulation).
Waldemar and Dave: Important to be able to late-bind design decisions and change them easily. This means that a programmer should be able to relatively easily take code that uses a public object property and refactor the code to make it private (or vice versa) without having to restructure the code.