July 24, 2012 - TC39 Meeting Notes
July 25, 2012 Meeting Notes
Present: Mark Miller (MM), Brendan Eich (BE), Yehuda Katz (YK), Luke Hoban (LH), Andreas Rossberg (ARB), Rick Waldron (RW), Alex Russell (AR), Tom Van-Cutsem (TVC), Bill Ticehurst (BT), Rafael Weinstein (RWS), Sam Tobin-Hochstadt (STH), Allen Wirfs-Brock (AWB), Doug Crockford (DC), John Neumann (JN), Erik Arvidsson (EA), Dave Herman (DH), Norbert Lindenberg (NL), Oliver Hunt (OH)
Scoping Rules for Global Lexical Declarations
AWB:
- Global scoping: var vs. let and const declarations; var and function need to go on the global object
- What do we do with the new binding forms (class, module, imports, let, const)? Q. Should these become properties of the global object?
DH: Not sure a restriction is needed, the global scope is the global object in JavaScript. With modules, globals are less of a problem.
YK: (clarification)
AWB, DH, BE: (providing background, e.g. on temporal dead zone for let/const/class)
BE: Agree there needs to be some form of additional info not in property descriptor
ARB: Need additional static scope information e.g. for modules. Need additional dynamic information for temporal deadzone.
DH: What if you drop the idea that let is always let everywhere? Questions whether let should be more like var at global scope.
ARB: Does not work for modules.
AR: Reasonable to say that the global scope is never finished and that properties can continue to be defined
AWB: An example.
A const declaration; it creates a property on the global object; it's not defined yet; Before it's initialized another piece of code sets the value - what happens?
DH: (board notes)
- 2 Contours, Nested "REPL"
- var, function go in global
- let, const, module, class… all get modeled lexically as usual in inner contour
- each script's inner contour is embedded in previous script's inner contour
- 2 Contours, Not Nested "Uniform Let"
- var, function, go in global
- let, const, module, class… all get modeled lexically as usual in inner contour
- each script's inner contour is "private" to that script
- 1 Contour, Global "Traditional"
- var, function, let, const, module, class… everything is a property of the global object.
- Additional scope refs in a side table of global, shared across scripts
- each script updates the side table
- 2 Contours, Not Nested - Merged "Expando"
- var, function, go in global
- let, const, module, class… all lexical
- each script updates lexical contour of previous scripts
AWB: "Expando" was previously agreed upon, where the additional layer of lexical scope is available but shared. (Notes that Andreas did not buy into this)
DH: Agrees. Explains where "Expando" fixes the problems of "Traditional".
+---------+
|  get x  |
|  set x  |
+---------+
 x: let
This would identify that "x" was declared with "let" and so forth.
STH:
A.
<script> let x; </script>
<script> var x; </script>
"Expando" (#4) makes this an error
B.
<script> let x = 1; window.x; </script>
C.
<script> let x = 1; </script>
<script> x; </script>
"Contour"/"Expando" both result in 1
D.
<script> const x = 1; </script>
<script> if (null) { x = 2; } </script>
"Contour"/"Expando" both result in no error
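Within a single script, the temporal-dead-zone behavior under discussion can be sketched (a minimal sketch using the semantics as they eventually settled; at the time of these notes the details were still open):

```javascript
"use strict";
// Accessing a let/const binding before its declaration executes throws,
// rather than reading an undefined global property.
let observed;
try {
  x; // x is still in the temporal dead zone here
} catch (e) {
  observed = e instanceof ReferenceError;
}
const x = 1;
console.log(observed); // true
```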
STH: Final debate that remains is reification on the window object. Allen is not in favor.
ARB: In favor of reification; but would like to get rid of the global object someday.
DH: Points out that non-reification will result in "WAT" from community (I agree).
Discussion about module unloading…
BE: Let's talk about unloading as a separate, secondary conversation.
DH: Keep the global garbage dump as is - maintain consistency
AWB: No objection to a global garbage dump.
DH: If we add complexity to the global mess, there is no win.
DC: Global is a mess and we can't change it. The argument is that consistency wins, but we have an opportunity to clean this up. var can remain the same, but let is a new thing and we can afford it new behaviours.
RW: I agree, but we need to stop claiming "let is the new var" because general population will take that literally. If let has different behaviour, then "let is the new let".
DH: If you consider each script to be a "block", i.e. { block }
YK/DC: Agree
DH: I have a crazy alternative… We could special-case unconditional brace blocks in … scope. If you write a pair of block braces at the global scope with a let inside, it will exist in that scope, but not on the global. function and var declarations hoist out of the brace block.
#2 and #3 are the most coherent options.
DH: function prevents us from having a coherent story about implicit scope.
STH: Might want to do something other than reification
BE: Disagree with imputing curlies as a way of illustrating a <script>'s top-level scope.
DH: Need some way to explain where that scope lives.
OH: If you explain that let identifiers only exist in one script tag, developers will understand that.
RW: Agree.
AR: Agree, but they will say it's wrong
YK, BE: Agree with AR
AR: (Explanation of strawman developer concept of lexical ownership)
ARB: Also, want to be able to access e.g. modules from HTML event attributes
YK: The concat reality.
DH, BE: Agree on opposing concat hazards
Summary: #3 is the path. Champions spec this out and present for next in-person. (AWB, ARB, DH, RW)
Object.observe
(Presented by Rafael Weinstein) strawman:observe
obj --[[Notifier]]--> notifier N
N: [[ChangeObservers]], [[Target]] ----> obj
Object.observe(obj, callback)
Object.getNotifier(obj).notify(changeRecord);
callback (Function): [[PendingChangeRecords]]
When a data property is mutated on an object, change records are delivered.
[[ObserverCallbacks]] Used to order delivery
Object.deliverChangeRecords(callback); …mitigates side-channel communication by preventing change records from escaping.
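The machinery above can be modeled in userland to show the intended shapes (a sketch only: the names getNotifier/observe/deliverChangeRecords mirror the strawman, but this is not the engine-level implementation; change records here are queued by an explicit notify call rather than generated by mutation, and pending records are kept per object rather than per observer function):

```javascript
// Userland model of the notifier / change-record plumbing (sketch).
const notifiers = new WeakMap();

function getNotifier(obj) {
  let n = notifiers.get(obj);
  if (!n) {
    n = {
      changeObservers: [],                        // [[ChangeObservers]]
      pending: [],                                // [[PendingChangeRecords]] (per object, an assumption)
      notify(record) { this.pending.push(record); }
    };
    notifiers.set(obj, n);
  }
  return n;
}

function observe(obj, callback) {
  getNotifier(obj).changeObservers.push(callback);
}

// Flush pending records to every observer; emptying the queue first
// keeps records from escaping the delivery step.
function deliverChangeRecords(obj) {
  const n = getNotifier(obj);
  const records = n.pending.splice(0);
  if (records.length) {
    for (const cb of n.changeObservers) cb(records);
  }
}

const target = {};
const seen = [];
observe(target, records => seen.push(...records.map(r => r.type)));
target.x = 1;
getNotifier(target).notify({ type: "new", object: target, name: "x" });
deliverChangeRecords(target);
console.log(seen); // [ 'new' ]
```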
Explanation of specification history and roots in newer DOM mutation mechanism.
AWB: Is this sufficient for implementing DOM mutation event mechanisms?
RWS: Yes, those could be built on top of Object.observe
AWB/AR: Good, that should be a goal as well.
TVC: [If you're in the finalization phase and another observation is triggered, what happens]?
MM: FIFO event queue, deliveries run to completion
DH: Consider a two level, nested event queue
RWS: Very close to internal event queue, but is not. A single observer is being delivered changes, but not necessarily in the order that they occurred.
YK/RWS: Agree on delivery of script data mutation first, in any context.
RWS: Explanation of how mutation is handled and data binding as a whole.
DH: Concerned that it's too complicated and may conflict with expectation of run-to-completion.
RWS: Agree, but feel as though it is unavoidably complex, but this is for library authors to build better data binding abstractions.
YK: Can confirm that this proposal addresses web reality pain points.
DH: Not sure there is a good policy for knowing when to process what and when on a queue.
(stepped out, missed too much, need fill in)
TVC and RWS discussion of how Proxy can benefit from Object.observe. Unless/until we have an actual use case for virtualizing the observation system, don't let proxies virtualize observation: proxies have their own internal notifier, like normal objects. Object.observe(proxy, callback) registers the callback on the proxy. The proxy handler needs to actively observe the target and re-notify its own observers for observation to work transparently across a proxy.
AWB: Concerns about whether the overall complexity is something that belongs in a general purpose language spec
LH: The complexities are such that they meet half way between policy that allows for too much, and for not enough.
DH: Agrees, the conversation has been helpful and agree that the complexity is on the right track for the right reason. Need to ensure that the right middle ground is met. Maybe current state is too high level, but closer than original too low level state.
BE: agree with DH, want to avoid premature/overlarge spec, do want implementation and user-testing. Let other impls know when spec is ready for trial impl.
Summary of next steps:
DH: Coordinate with YK's colleague to do real-world work. Update TVC's prototype? Implementation prototypes. (How to leverage developers to work on mini projects with prototype implementations)
RW: Would like to get access to a build that I can bring back to devs at Bocoup, where we can put dev resources towards developing projects with Object.observe; for example converting existing Backbone applications, etc.
RWS: Agree and will arrange.
Weak References
DH: The GC issue.
MM: A security concern, how to determine what is adequately privileged. WeakMap does not have this issue, WeakRef does
YK: WeakMap doesn't meet the use case
DH: WeakMap meets its own use case really well. WeakRef's portability issue: non-determinism. If the web relies on unspecified behaviour, you get a de facto "worst case scenario". Safer: only null between turns, as the web does today? If we go with a traditional WeakRef, it's conceivable that the non-determinism is not an issue. Again, safe if nulling happens between event turns.
Discussion about determinism/non-determinism.
Discussion about finalization, and whether it is a necessary part of the proposal. MM considers it important, AWB, ARB think it's too much of a hazard. Agreement at least that weak refs are useful without.
Only considering post-mortem finalization (finalizer does not have access to the collected object; it's already been collected), so no "zombie revival" issues.
BE: programmers will expect some sort of promptness to finalization, whereas it's not possible to provide any such guarantees; not testable
YK: frameworks will have to periodically eagerly collect empty WeakRefs themselves, which they can live with, but it's definitely less convenient; anyway, setTimeout FTW
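For reference, the WeakRef surface that was eventually standardized (years after this discussion; the 2012 strawman's API differed) behaves like this while the target is strongly reachable:

```javascript
const obj = { data: 42 };
const ref = new WeakRef(obj);

// While obj is strongly reachable, deref() must return it; only after
// the target is collected (and, per the eventual spec, never within the
// turn in which it was observed) may deref() return undefined -- the
// non-determinism debated above.
console.log(ref.deref() === obj); // true
console.log(ref.deref().data);    // 42
```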
Script Concat Issue
DH: remember that the purpose of ES6 modules is to do sync-style loading without runtime blocking on I/O; this means that if you want to do configuration before loading, you have to run one script before compiling another:
<script defer>
System.set("@widget", patch(System.get("@widget")));
</script>
<script defer>
import widget from "@widget";
</script>
not the same as…
<script defer>
System.set("@widget", patch(System.get("@widget")));
import widget from "@widget";
</script>
Not possible for people to concat scripts for deployment and have the configuration happen before the loading
Submitting for discussion: the shebang as "a concat separator" that…
- Fixes the concat ASI hazard
- Allows for artificial parsing boundary
- Note that this will change semantics of var hoisting
EA: concatenation of modules will require non-trivial compilation anyway; there will be ways to do this kind of thing with translation, without needing built-in support
DH: and loaders also make it possible to deploy multi-file formats
Discussion about the reality of concatenation hazards of modules
Defer, but still open for future discussion.
Fix "override mistake"
The [[CanPut]] check:
var p = Object.create(null, {
  x: { writable: false, value: 42 },
  y: { get: function() { return 42; } }
});
var o = Object.create(p);
o.x = 99;
o.y = 100;
Property in a prototype object that is read-only cannot be shadowed.
Just the same as get-only accessor.
Causes SES/Caja grief on implementations that follow the spec. They must replace data properties on to-be-frozen prototype objects with accessors whose set function manually shadows on the receiver.
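The mistake and the accessor workaround just described can be sketched (a sketch; makeShadowable is a hypothetical helper name, not SES/Caja's actual API):

```javascript
"use strict";
// The override mistake itself: assignment cannot shadow an inherited
// read-only data property.
const p = Object.create(null, { x: { writable: false, value: 42 } });
const o = Object.create(p);
let failed = false;
try { o.x = 99; } catch (e) { failed = true; } // TypeError in strict mode
console.log(failed); // true

// Accessor-based workaround: the setter manually shadows on `this`.
function makeShadowable(proto, name, value) {
  Object.defineProperty(proto, name, {
    get() { return value; },
    set(v) {
      Object.defineProperty(this, name, {
        value: v, writable: true, enumerable: true, configurable: true
      });
    }
  });
}

const p2 = {};
makeShadowable(p2, "x", 42);
const o2 = Object.create(p2);
o2.x = 99;               // now shadows on the receiver instead of throwing
console.log(o2.x, p2.x); // 99 42
```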
Summary: There is no change for now, needs to be looked at when subclassing is addressed.
July 26 2012 Meeting Notes
Present: Mark Miller (MM), Brendan Eich (BE), Yehuda Katz (YK), Luke Hoban (LH), Rick Waldron (RW), Alex Russell (AR), Tom Van-Cutsem (TVC), Bill Ticehurst (BT), Sam Tobin-Hochstadt (STH), Allen Wirfs-Brock (AWB), Doug Crockford (DC), John Neumann (JN), Erik Arvidsson (EA), Dave Herman (DH), Norbert Lindenberg (NL), Oliver Hunt (OH)
Maxmin class semantics
YK: namespacing pattern: class that goes inside existing object; like Ember.View
DH: Ember.View = class ...
AWB: or Ember = { View: class ... }
AWB: early error list
- naming class eval/arguments
- duplicate class element names
- extends expression contains a yield
- method name constructor used on get, set, or generator
MM: yield should not be an error!
DH: definitely not! burden of proof is on the rejector; there's no reason to reject here
YK: why can't we do a getter?
DH: there's no way to declaratively figure out what the actual function for the class is, because the getter returns the function
AWB: class declarations create const bindings
AR: can you justify?
AWB: why would you want to overwrite it?
RW: what about builtins needing to be patched?
DH: those are independently specified to be writable; the relevant question is whether user programs will want to patch up local class bindings
AWB: whether this is a good idea probably depends on whether you're a library writer or application writer; if you aren't exporting class definitions
AR: you could still say const x = class
YK: that distinction isn't useful; every app has stuff like libraries
AR: restriction needs justification
DC: my preference is only for the expression form so there's no confusion
RW: surveyed ~200 developers, majority did not want const bindings by default
MM: I like crock's suggestion, just don't do the declarative one
EA: what?
LH: that's just putting cost on everyone else rather than us
MM: no, I'm talking about saving the cognitive cost to user
YK: if we went with const by default, I'd agree we shouldn't do declarative
AR: goal is most value for shortest syntax, without footguns; the analogy with const seems tenuous
AWB: this is subtle, and most people won't even notice
DH: I don't buy that there are significant errors being caught, there's no benefit to engines, there's not enough benefit to users, and it's clear there are costs. so I don't see any reason to do const binding by default
<<general agreement>>
MM: I'm opposed to declarative form. but if it is going to be declarative, should pick a declarative form and say it's the same as that, and let is the only clear candidate
DH: I'm not convinced function is impossible
MM: the expression extends is the killer. makes it impossible
LH: I'm convinced it can't hoist
DH: why not a more restricted syntax for declarative form in order to get hoisting?
{ class Sup extends Object { ... } class Sub extends Sup { ... } }
LH: surprising that you can't compute the parent
DH: there are surprises in each alternative we've talked about here; but I claim it's surprising to lose hoisting
OH: relevant analogy here is the fact that other languages with declarative classes don't care about order
LH: CoffeeScript does; it translates to var x = ...
AR: pulse?
DH: I think we all acknowledge this is tricky; I feel strongest that leaving out the declarative is failing in our duty
MM: if we leave out the declarative, then people will simply learn that the language is let c = class
BE: why are we debating this?
STH: Mark and Doug are arguing it
BE: over-minimizing and failing at usability
YK: let x = class extends Bar { } is just crazy
DH: that's laughable as the common case
AWB: this came from the hoisting debate
BE: I thought we agreed to dead zone. if we get stuck on this we'll never finish classes
LH: agreed; we need a separate proposal for hoisting
DH: happy to revisit later if I can come up with better alternatives
MM: we have adequate consensus that declarative desugars to let
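The consensus just stated (a class declaration binds like let, not like a hoisted function) can be sketched with the semantics as they eventually shipped:

```javascript
"use strict";
// A class declaration behaves like a let binding: mutable, block-scoped,
// and subject to the temporal dead zone rather than function-style hoisting.
let early;
try {
  new C();            // C is in the temporal dead zone here
} catch (e) {
  early = e instanceof ReferenceError;
}
class C {}
console.log(early);   // true

// Roughly equivalent desugaring:
let D = class {};
console.log(typeof C, typeof D); // function function
```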
AWB: classes are strict?
STH: I thought class did not imply strict mode
AR: does anyone want that?
<<no>>
AWB: default constructor has empty body? we'll get back to this
AWB: local class name scoping? similar to named function expression, but const bound?
DH: const bound?
AWB: just like NFE
DH: I actually didn't know NFE's had a const binding!
AWB: is this a bug? should we reconsider?
MM: avoids refactoring hazard
MM: my first choice would be to fix function: within function body its name is const; second choice is for class to be consistent
BE: not sure why we're talking about this, can't be changed
MM: in that case the class expression form should follow NFE
<<general agreement>>
DC: I disagree with the scoping decision about class declarations
DH: confused what we're talking about
STH: in body of class declaration, should there be a fresh scope contour
OH: it's not uncommon to overwrite the class
MM: example:
class Foo { self() { return Foo } }
...
new Foo().self() === Foo // can fail
this is very confusing for this to fail
DH: why would you ever want the extra scope contour?
STH: Rick gave a good example:
class C { m(x) { return x instanceof C } }
var y = new C;
C = 17;
y.m(y);
DH: not compelling; you mutated C! if you need the earlier value, you should save it; the confusion would only arise if you expected C to be a static class like in Java, but that's not how JavaScript bindings work
RW: the common pattern being the defensive-constructor pattern:
function C() {
  if (!(this instanceof C)) { return new C(); }
  ...
}
DH: now I'm that much more confident that there should not be another scope contour; I don't see any compelling argument
AWB: let me throw up another justification: class declarations often appear at global scope, not uncommon for somebody to write class body where there are references to the class; at global scope, anybody could have assigned to that value
DH: I don't want to poison non-global cases just to protect against one hazard of global code, when global code is hazardous anyway
AWB: I would put protecting global code at a higher priority than a subtlety of inner bindings, but I'll go with the flow if I can't convince you
DC: I don't want to hold this up
MM: are you willing to go with the function parallel?
DC: yes; I don't prefer it but I won't hold this up
AWB: missing extends, what's the default? intrinsics
<<agreement>>
AWB: extends null: Foo.prototype's [[Prototype]] is null; Foo.[[Prototype]] is the intrinsic Function.prototype
<<agreement>>
AWB: extends a constructor:
class Foo extends Object { }
Foo.[[Prototype]]: Object
Foo.prototype.[[Prototype]]: Object.prototype
IOW, class-side inheritance
MM: I disagree, the history of JS does not have it
BE: I disagree with that claim, history shows some examples on both sides
EA: people do refer to this in static functions; they have the freedom to use the class name or this, and they do both
LH: CoffeeScript does class-side inheritance, but they don't do it like this -- they copy
BE: but they will avoid the copy once you implement dunder-proto
MM: you can't depend on it
BE: this gives programmers more flexibility to do it however they want
MM: but then people can't use a this-sensitive function!
BE: not true, the contract of a JS function includes its this-sensitivity
Arv, AR: <<nod visibly>>
LH: at end of day, plenty of static functions in JS that are this-sensitive
YK: that's the style of program that I write
EA: some style guides say don't do it
LH: backbone does this
MM: so Foo will inherit Object.create, Object.getOwnPropertyDescriptor, etc?
DH: that does mean we'll be more and more hampered from adding methods to Object
EA: but now we have modules
BE: true, that's the right answer
MM: polluting of statics with everything in Object is fatal; those are just not relevant to most of the class abstractions people write; when I write
class Point { }
I don't want Point.getOwnPropertyDescriptor
AWB: you only opt into that with class Point extends Object; with class Point { } you don't get any of that stuff
DH: <<feels giddy and sees the clouds part and sun shining through, with angels singing from on high>>
YK: also, there are override hazards of pollution: if someone freezes Object, then you wouldn't be able to override sweet class method names like keys(), so the ability to avoid that pollution is important
MM: valid point. thing is, we don't have static form b/c you can supposedly use imperative assignment, but that won't work for frozen classes
BE: that's just an argument for statics in the future
AWB: minimality ftw
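The opt-in agreed above can be sketched with the semantics as they eventually shipped: statics are inherited only through an explicit extends, because the derived constructor's [[Prototype]] is the superclass constructor, while a bare class stays off Object:

```javascript
class Base { static make() { return "made"; } }

// extends wires up class-side inheritance: Derived.[[Prototype]] === Base
class Derived extends Base {}
console.log(Derived.make());                          // "made"
console.log(Object.getPrototypeOf(Derived) === Base); // true

// A bare class does not opt in: no Object statics leak onto it.
class Point {}
console.log(typeof Point.getOwnPropertyDescriptor);   // "undefined"
console.log(Object.getPrototypeOf(Point) === Function.prototype); // true
```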
AWB: class Foo extends Object.prototype?
LH: this surprised me when I saw it in the spec
AWB: older version of class proposal had a "prototype" contextual keyword for this case
DH: what happens if you're defining a meta-class? you can't tell whether it's a prototype or a constructor
BE: that's a smell
AWB: constructor trumps object
BE: YAGNI, cut it
AWB: so what do we do if it's not a constructor?
DH: throw
BE: that's more future-proof
<<general agreement>>
AWB: extends neither an object nor null: type error
DH: actual type errors in JS, yay!
RW: curious: what if Foo extends { ... }
DH: non-constructable, so error; but could use a function literal
AWB: extends value is constructor but its prototype value is neither an object nor null: type error (existing semantics of new: silently uses Object.prototype)
<<agreement>>
AWB: Foo.prototype is an immutable binding? builtin constructors are immutable, user function(){} mutable
<<some surprise>>
MM: make .constructor mutable but .prototype immutable
YK: why? (I want mutable)
MM: nice for classes for instanceof to be a reliable test
YK: why?
AWB: classes are higher integrity; association between constructor and prototype actually means something now
BE: I'm moved by higher-integrity, self-hosting with minimal work
STH: not compelling to make self-hosting easy, just possible; defineProperty is just fine for that
DH: most everyone seems to agree that .prototype is immutable, .constructor is mutable. Arv and AR, thoughts?
EA: that's fine
AR: yup, that's fine
AWB: method attributes: sealed? (writable: true, configurable: false, enumerable: false)
- configurable: false -- you've established a specific shape
YK: you don't want to switch from a data property to an accessor?
AWB: non-configurable but writable is reasonable
MM: this depends crucially on our stance on override mistake; this prevents me from making an accessor
AR: I don't see why we're considering making this anything other than writable: true, configurable: true
BE: Allen feels having the shape be fixed is useful
<<discussion>>
BE: so consensus is writable: true, configurable: true
<<agreement>>
AWB: methods are not constructable?
DH: what?
MM: biggest benefit: this further aligns classes with builtins
MM: three reasons for this:
- precedent in builtins
- using a method as a constructor is generally nonsense
- to freeze a class, I have to freeze the .prototype of the methods on the prototype!!
LH: compelling for me: never seen a class-like abstraction on a prototype of a class-like abstraction
MM: I have, but you still can; just do it in a way that's obvious, don't do it with method syntax
BE: hard cases make bad law! (agreeing with MM -- use a longhand)
YK: so you can say classes really only existed as builtins, now they're expressible
AWB: get/set accessors are constructors? that's just the way they are in ES5
BE: is there precedent in builtins?
AWB: nothing explicit
YK: I'd prefer consistency between these last two cases
AWB: accessor properties on prototype are enumerable
BE: what about DOM/WebIDL? accessors on prototype?
LH: they're enumerable, yes
AWB: suggestion: concise methods should be the same for both classes and object literals
- strictness
- enumerability
- constructability
- attributes
AWB: breaking change from ES5: get/set functions non-constructable
AWB: class accessor properties:
- enumerable: false, configurable: false
AR: no
EA: no
YK: when you use an accessor you're trying to act like a data property
BE: so compelling argument is: accessors are enumerable, configurable, and writable
AWB: Luke suggests that the default constructor should do a super-constructor call with the same arguments: constructor(...args) { super(...args) }
BE: default constructor in CoffeeScript, Ruby
AWB: perhaps needs to test for Object constructor and not call it
DH: no observable difference!
MM: if there's no observable difference, go with simplest spec
AWB: other places where we do implicit super call? I say no
DH: I say no.
LH: I agree, I think there's no clear way for us to do it, but I also think there will be many, many bugs
BE: irreducible complexity here, caveat refactorer
getPrototypeOf trap
TVC: (introduction)
__proto__ being writable destroys the invariant that the [[Prototype]] link is stable
Frozen objects should continue to have stable prototype chain
getPrototypeOf trap result should be consistent with the target object's proto
MM: if the proto can be changed, the proxy should…?
TVC: spec interceptable [[Prototype]]
[[Prototype]] is currently an internal prop
Would need to become internal accessor prop or split into [[GetProto]] / [[SetProto]]
[[GetProto]] / [[SetProto]] would trigger traps for proxies
AWB/BE: This is good
YK: Do we want an analogous setPrototypeOf trap?
TVC: Yes
AWB: If you have capability to set prototype ?
TVC: proxy.__proto__ should just trigger the proxy's get trap
var p = Proxy(target, handler)
p.__proto__ // => handler.get(target, "__proto__", p)
p.__proto__ = x // => handler.set(target, "__proto__", x, p)
…
Trapping instanceof
Function [[HasInstance]]
x instanceof Global answering true if x and Global live in separate frames/windows
var fp = Proxy(targetFunction, handler);
x instanceof fp // handler.hasInstance(targetFunction, x)
MM: Explains concerns originally raised on es-discuss list by David Bruant, but shows the cap-leak is tolerable …
DH: if hasInstance private name on instanceof RHS...
MM: What Object.prototype does private name inherit from?
AWB: Probably null
BE: the E4X any (*) name had null proto in SpiderMonkey, was true singleton in VM
AWB: functions have home context, but no reason for objects to
DH: this is a new idea of value that is not really any object
OH: if it has no properties and no prototype
BE: cannot be forged.
Discussion about unforgeability.
DH: Trapping instanceof use case
Trapping Object.isExtensible
Currently Object.isExtensible doesn't trap; the same for isSealed and isFrozen
var p = Proxy(target, handler)
Object.isExtensible(p) // => Object.isExtensible(target)
Direct Proxies: "internal" properties
Issue raised by Jason Orendorff; auto unwrapping is dangerous if built-in methods return non-primitive values
Case:
var arr = [o1, o2, o3];
var it = arr.iterator();
var membraneP = wrap(it);
it.next.call(membraneP);
Solution (?)
Instead of auto-unwrapping, delegate to a nativeCall trap (which auto-unwraps by default)
[[PrimitiveValue]]
BE: nativeCall trap is back door between built-in this-type-specific method impls and proxies. Not good for standardization. Better to make such built-ins generic via Name object internal property identifiers, a la AWB's subclassing built-ins strawman
Discussion moved to Subclassing…
MM: re: what you want syntax wise
AWB: one way to address this: don't use the instance that is automatically created; create a new array and patch the proto
… BE: (back to nativeCall trap)
AWB: Let's continue the issue of subclassability on es-discuss
TVC: defaultValue slide
See slide?
BE/AWB: defer this to reflect spec handling, non-observable way.
Proxies and private names
TVC: getName(target, name.public) instead of get(target, name.public) -- this way get trap that doesn't expect name objects won't break on unexpected inputs
DH: has, delete, ...? bigger surface area
TVC: you'd still have to branch in the code, so this is cleaner for user
YK: debugging tool will want to be able to see these things
OH: a built-in debugger will have hooks into the VM
YK: many debuggers use reflection
BE: so it's just a matter of having a bunch of XXXName traps. in for a penny, in for a pound
STH: this is simple and straightforward, we know how to do it
BE: when in doubt use brute force (K. Thompson)
STH: when brute force doesn't work, you're not using enough of it
TVC: if getName returns undefined, forwards to target; so default behavior is transparent proxying
TVC: otherwise, getName takes public name and returns [privateName, value] to show that you know the private name and produce the value
STH: what about set?
TVC: returns name and success value
DH: what about unique names?
TVC: same mechanism
DH: so name.public === name?
MM: I like that
MM: are unique names in?
DH: I think so
BE: are they actually distinguishable?
MM: have to be if name.public === name or name.public !== name distinction
DH: (named) boolean flag to Name constructor
DH: do we have some way of reflecting unique names?
TVC: Object.getNames() ?
DH: ugh...
AWB: maybe a flag to Object.getOwnPropertyNames({ unique: true })
BE (editing notes): flags to methods are an API design anti-pattern
TVC: VirtualHandler fundamental traps throw, should they forward instead?
<<agreement>>
TVC: and rename to Handler?
<<agreement>>
MM: next issue: freeze, seal, defineOwnProperties each modify configuration of bunches of separate properties, and can fail partway through; we tried & failed in ES5 to make it atomic
MM: current unspecified order means could break
MM: tom did something in his code that's beautiful: order-independent. just keep going, remember you failed, do as many as you can, and then throw at the end
STH: if target is proxy, weird unpredictably stuff can happen
DH: no worse than anything that does for-in loops, right?
TVC: well, it's getOwnPropertyNames
MM: that's specified to for-in order, right?
DH: but what does for-in order say about non-enumerable properties? <<evil grin>>
MM: <<cracks up>>
AWB: sounds like an ES5 bug!
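The order-independent approach MM describes can be sketched for a defineProperties-style operation: attempt every property, remember any failure, and throw only at the end (a sketch; the helper name defineAllOrThrow is made up):

```javascript
"use strict";
function defineAllOrThrow(obj, descriptors) {
  let failure = null;
  for (const name of Object.keys(descriptors)) {
    try {
      Object.defineProperty(obj, name, descriptors[name]);
    } catch (e) {
      failure = e;    // remember, but keep going: do as many as we can
    }
  }
  if (failure) throw failure; // throw at the end, order-independently
}

const obj = {};
Object.defineProperty(obj, "a", { value: 1 }); // non-configurable, non-writable
let threw = false;
try {
  defineAllOrThrow(obj, {
    a: { value: 2 },   // fails: cannot redefine non-configurable "a"
    b: { value: 3 }    // still applied despite the earlier failure
  });
} catch (e) {
  threw = true;
}
console.log(threw, obj.a, obj.b); // true 1 3
```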
VirtualHandler
Rename VirtualHandler to just Handler?
Tom Van-Cutsem's Proxy presentation slides:
soft.vub.ac.be/~tvcutsem/invokedynamic/presentations/TC39-Proxies-July2012.pdf
Template strings
AWB: first order of business, to ban the term "quasis"
<<applause>>
AWB: proposing "string templates"
DH: a lot of people say "string interpolation" in other languages
AWB: must use ${identifier}, don't allow $identifier
EA: uncomfortable with that
BE: troublesome to identify right end of identifier
EA: withdraw my objection
AWB: untagged quasi is PrimaryExpression, tagged quasi is CallExpression
AWB: at runtime, tag must evaluate to a function
DH: well, you just do a call and that does the check
AWB: lexing treated similarly to regexp; add a new context called "lexical goal" so lexer can tell what a curly means (like a flex(1) mode)
AWB: default escaping should be equivalent to normal strings
BE: we should canonicalize line separators to \n
AWB: for both cooked and raw?
BE: raw should be raw!
AWB: raw tag is a property of the String constructor:
String.raw`In JavaScript '\n' is a line-feed.`
DH: that's pretty badass
BE: too long a name; wanna import a small name from a module
AWB: well, importing takes more characters than renaming with a var declaration
BE: let's put off the bikeshed in the interest of time
AWB: simplify call site object (first arg to prefix-tag function): it's just an array of the cooked elements since that's the common case, with a .raw expando holding array of the raw elements, both arrays frozen
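The simplified call-site object can be sketched with a tag that inspects its first argument (as the design eventually shipped: a frozen array of cooked strings carrying a frozen .raw array):

```javascript
function inspect(strings, ...subs) {
  return {
    cooked: Array.from(strings),   // cooked elements: escapes processed
    raw: Array.from(strings.raw),  // raw elements: escapes left untouched
    frozen: Object.isFrozen(strings) && Object.isFrozen(strings.raw),
    subs
  };
}

const out = inspect`line\n${1 + 1}`;
console.log(out.cooked[0] === "line\n"); // true: a real line feed
console.log(out.raw[0] === "line\\n");   // true: backslash followed by "n"
console.log(out.frozen);                 // true: both arrays are frozen
console.log(out.subs);                   // [ 2 ]
```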
BE: is there a grawlix problem with ` syntax?
DH: I've tried polling and opinions are utterly mutually incompatible
BE: what about mandated prefix but with existing e.g. ' or " quotes
LH: that's just wrong, the most common case will be unprefixed
MM: proposal for object literals inside ${...} context, based on object literal shorthand {foo} meaning not {foo:foo} but rather {get foo() foo, set foo(bar) {foo=bar}} to sync variable foo with property (!)
STH: that is going to be utterly unexpected
MM: ok, not gonna argue for it
Map and Set methods: conclusion
AWB: what's left on the agenda?
RW: Erik is gonna take another whack at the error stack proposal
BE: forEach on maps and sets -- how about a common signature; set passes e as the index:
array: a.forEach((e, i, a) => ~~~)
map:   m.forEach((v, k, m) => ~~~)
set:   s.forEach((e, e, s) => ~~~)
FILED: ecmascript#591 FILED: ecmascript#592
Scoping for C-style loops
NL: the wiki page for `` makes it sound like they solve problems for internationalization/localization, and they don't
DH: I'd love help with a documentation hack day for the wiki
LH: another agenda item we skipped: for (let ; ; ) binding semantics
DH: I thought we came to agreement on that at the Yahoo! meeting?
AWB: we had a long discussion and consensus was to make for (let ; ;) bind on each iteration
AWB: subsequent to that, considerable discussion on es-discuss about that, issues associated with closure capture occurring in the initialization expressions; couple different semantics to work around that, with more complex copying at each iteration; another approach is a new kind of Reference value, got really complex
AWB: working on the specs, I took the easy way out for now; defined it a la C# (per-loop lexical binding); just for now b/c it's simple, understandable, and there's still controversy
AWB: another option is not to have a let form of C-style loops
STH, DH, OH: no!!!
DH: this needs another trip around the block but no time today
MM: my opinion is it doesn't matter what happens with closure capture in the head, b/c it's an esoteric case that will be extremely rare
BE: I think the January semantics is still probably the right answer:
var g; for (let f = () => f; ; ) { g = f; break; } g(); // returns () => f
OH: it logically makes sense
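The per-iteration binding under discussion is observable with closures; this sketch shows the let/var difference as it was eventually standardized:

```javascript
// Per-iteration let binding: each iteration gets a fresh `i`, so
// closures created in the body capture distinct values.
const fns = [];
for (let i = 0; i < 3; i++) {
  fns.push(() => i);
}
console.log(fns.map(f => f())); // [ 0, 1, 2 ]

// With var there is a single shared binding, so every closure
// observes the final value:
const vfns = [];
for (var j = 0; j < 3; j++) {
  vfns.push(() => j);
}
console.log(vfns.map(f => f())); // [ 3, 3, 3 ]
```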
First and foremost, thanks for the notes :-)
On 28/07/2012 01:55, Rick Waldron wrote:
Fix "override mistake"
The can put check
(...)
Property in a prototype object that is read-only cannot be shadowed.
Just the same as get-only accessor.
I'd like to add a use case here. Every once in a while, I write something like:
var a = [];
a.push = function(elem){
  if(condition(elem)){
    // do something like change the elem value then do an actual push
    // or throw an error
    // or just ignore this value to avoid duplicates, for instance
  } else {
    Array.prototype.push.call(this, elem);
  }
};
// use a.push (there is an implicit contract on only using .push to add elements)
There is such a snippet in a Node.js server in production right now, so that's really not hypothetical code. If I ever consider to move to SES, then, before the above snippet is run, Array.prototype gets frozen and the "a.push" assignment will fail (at runtime!).
Several things here:
- I could change a.__proto__, but it's a bit weird since the condition in the custom push is often very specific to this exact array, so changing the [[Prototype]] feels like too much, just for one instance (though that would work fine)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive.
- An implicit contract is not the best idea ever, but that works when the array. In an ES6 world the array would certainly be a proxy and whatever invariant could be preserved even for numerical property value assignments. But we're not there yet, so that's not an option
As far as I'm concerned, the biggest issue with this use case is that I have written code which reads well (I'm open to debate on that if some disagree) and that what is read may not be what will occur. Also, if one day, one Node.js module I use decides it's better to freeze Array.prototype, it will be a very painful bug to track down when I update. It would be much easier to track down if I was monkey-patching Array.prototype.push, but I'm not.
As a final note, I don't know how often people do what I've described. I'll adapt my code if what is decided is to keep the [[CanPut]] error, but I don't know how many people this kind of problem would affect.
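The failure mode described above can be reproduced without freezing the real Array.prototype, by using a small frozen prototype object (proto, obj, and tryAssign are illustrative names):

```javascript
// The [[CanPut]] ("override mistake") behavior: plain assignment
// cannot shadow a non-writable data property inherited from a frozen
// prototype, while Object.defineProperty can.
const proto = Object.freeze({ push() { return "proto push"; } });
const obj = Object.create(proto);

// Strict-mode assignment throws; do it inside a strict function so
// the result is the same regardless of the surrounding code's mode.
function tryAssign() {
  "use strict";
  try {
    obj.push = function () { return "own push"; };
    return "assigned";
  } catch (e) {
    return e instanceof TypeError ? "TypeError" : "other";
  }
}
const assignResult = tryAssign();
console.log(assignResult); // "TypeError"
console.log(obj.push());   // "proto push": assignment did not shadow

// defineProperty bypasses the [[CanPut]] check and shadows fine:
Object.defineProperty(obj, "push", {
  value: () => "own push",
  writable: true,
  configurable: true
});
console.log(obj.push());   // "own push"
```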
On 28/07/2012 01:58, Rick Waldron wrote:
July 26 2012 Meeting Notes
getPrototypeOf trap
TVC: (introduction)
__proto__ writable destroys invariant that [[Prototype]] link is stable
Frozen objects should continue to have stable prototype chain
Frozen objects should continue to have stable [[Prototype]]. You can't guarantee it for the entire chain.
getPrototypeOf trap result should be consistent with target object's __proto__
MM: if the proto can be changed, the proxy should…?
TVC: spec interceptable [[Prototype]]
- [[Prototype]] is currently an internal prop
- Would need to become an internal accessor prop or split into [[GetProto]] / [[SetProto]]
- [[GetProto]] / [[SetProto]] would trigger traps for proxies
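As eventually standardized, the getPrototypeOf trap enforces exactly this consistency invariant for non-extensible targets; a sketch (the handler and variable names are illustrative):

```javascript
// Invariant: for a non-extensible (e.g. frozen) target, the
// getPrototypeOf trap must report the target's real prototype.
const realProto = { kind: "real" };
const fakeProto = { kind: "fake" };

const frozenTarget = Object.freeze(Object.create(realProto));
const lyingProxy = new Proxy(frozenTarget, {
  getPrototypeOf() { return fakeProto; } // lies about the prototype
});

let threw = false;
try {
  Object.getPrototypeOf(lyingProxy); // invariant check fires
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// With an extensible target, the same lie is permitted:
const p2 = new Proxy(Object.create(realProto), {
  getPrototypeOf() { return fakeProto; }
});
console.log(Object.getPrototypeOf(p2) === fakeProto); // true
```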
AWB/BE: This is good
YK: Do we want an analogous setPrototypeOf trap?
TVC: Yes
This is inconsistent with below...
AWB: If you have capability to set prototype ?
TVC: proxy.__proto__ should just trigger the proxy's get trap
var p = Proxy(target, handler)
p.__proto__ // => handler.get(target, "__proto__", p)
p.__proto__ = x // => handler.set(target, "__proto__", x, p)
If there is a setPrototypeOf trap as said above, it should be handler.setPrototypeOf, no?
… Trapping instanceof
Function [[HasInstance]]
x instanceof Global answering true if x and Global live in separate frames/windows
var fp = Proxy(targetFunction, handler);
x instanceof fp // handler.hasInstance(targetFunction, x)
MM: Explains concerns originally raised on es-discuss list by David Bruant, but shows the cap-leak is tolerable
I'm interested in the demonstration :-)
…
DH: if hasInstance private name on instanceof RHS...
MM: What Object.prototype does private name inherit from?
I assume s/Object.prototype/[[Prototype]], here?
AWB: Probably null
BE: the E4X any (*) name had null proto in SpiderMonkey, was true singleton in VM
AWB: functions have home context, but no reason for objects to
DH: this is a new idea of value that is not really any object
OH: if it has no properties and no prototype
BE: cannot be forged.
Discussion about unforgeability.
DH: Trapping instanceof use case
Does this line mean that DH asked for the use case? questioned it? reminded it? How did it relate to this discussion?
Trapping Object.isExtensible
Currently Object.isExtensible doesn't trap; same for isSealed, isFrozen
var p = Proxy(target, handler)
Object.isExtensible( p ) => Object.isExtensible
Are there new traps here? The conclusion of this part is hard to understand.
Direct Proxies: "internal" properties
Issue raised by Jason Orendorff; auto unwrapping is dangerous if built-in methods return non-primitive values
Case:
var arr = [o1, o2, o3];
var it = arr.iterator();
var membraneP = wrap(it);
it.next.call(membraneP)
Solution (?)
Instead of auto-unwrapping, delegate to a nativeCall trap (which auto-unwraps by default)
I don't understand this use case and the problem that comes with it. Is it specific to generators?
Proxies and private names
(...) DH: so name.public === name?
MM: I like that
MM: are unique names in?
DH: I think so
If they are, the .public part of private names could be retired with the following setting:
- Private names don't trigger proxy trap calls and are not reflectable at all. This is practically equivalent to calling a trap with a useless public counterpart, from the caller's perspective. From the proxy's perspective, since the public part is useless, being called or not sounds like it would be more or less equivalent.
- Unique names would be trapped and passed unchanged as an argument to the trap (actually, since name.public === name, passing the unique name or its public counterpart is equivalent). If the proxy wants the unique name not to be accessed, it can remove it from the getOwnPropertyNames trap result. So proxies can emulate their own private names.
BE: are they actually distinguishable?
MM: have to be if name.public === name or name.public !== name distinction
DH: (named) boolean flag to Name constructor
If we have private and unique names, we might as well have two constructors: PrivateName and UniqueName. I find that more readable than "new Name(true)".
DH: do we have some way of reflecting unique names?
TVC: Object.getNames() ?
DH: ugh...
AWB: maybe a flag to Object.getOwnPropertyNames({ unique: true })
BE (editing notes): flags to methods are an API design anti-pattern
What's the conclusion of this part?
David Bruant wrote:
Hi,
First and foremost, thanks for the notes :-)
On 28/07/2012 01:55, Rick Waldron wrote:
Fix "override mistake"
The can put check
(...)
Property in a prototype object that is read-only cannot be shadowed.
Just the same as get-only accessor. I'd like to add a use case here. Every once in a while, I write something like:
var a = []; a.push = function(elem){ if(condition(elem)){ // do something like change the elem value then do an actual
push // or throw an error // or just ignore this value to avoid duplicates, for instance } else{ Array.prototype.push.call(this, elem) } };
// use a.push (there is an implicit contract on only using .push to
add elements)
There is such a snippet in a Node.js server in production right now, so that's really not hypothetical code. If I ever consider to move to SES, then, before the above snippet is run, Array.prototype gets frozen and the "a.push" assignment will fail (at runtime!).
Several things here:
- I could change a.proto, but it's a bit weird since the condition in the custom push is often very specific to this exact array, so changing the [[prototype]] feels like too much, just for one instance (though that would work fine)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive.
Well, yes. But from the philosophical PoV, imho, you should do Object.defineProperty here, because that is what you do (your intent is not "put a value to a's push property").
Though not very constructive, I'd say this is the case where
a.{ push(elem) { ... } };
is definitely missing.
On 28/07/2012 13:43, Herby Vojčík wrote:
David Bruant wrote:
var a = []; a.push = function(elem){ if(condition(elem)){ // do something like change the elem value then do an
actual push // or throw an error // or just ignore this value to avoid duplicates, for instance } else{ Array.prototype.push.call(this, elem) } };
// use a.push (there is an implicit contract on only using .push to
add elements)
(...)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive.
Well, yes. But from the philosophical PoV, imho, you should do Object.defineProperty here, because that is what you do (your intent is not "put a value to a's push property").
My intent is "I want a custom 'push' property for this particular array", because I'm filling the array afterwards using .push calls. I don't know what I "should" be doing from a philosophical point of view, but the code written above describes my intention pretty well. If I saw a call to Object.defineProperty instead, my first reaction would certainly be "but why isn't a regular assignment used here?". A comment could be added to explain the [[CanPut]], but that's what I would call "boilerplate comment".
So far, to the general question "why is Object.defineProperty used instead of a regular assignment used here?", the only answer I find acceptable is "defining custom configurable/writable/enumerable", because these are things local to the code that have no syntax for them. In most cases, getter/setters can be defined in object literals. Adding "the prototype may be frozen, thus preventing shadowing" to the acceptable answers makes local code review harder.
Though not very constructive, I'd say this is the case where
a.{ push(elem) { ... } };
is definitely missing.
I remembered that .{ semantics was a [[Put]] semantic, so it wouldn't solve the problem. Did I remember something wrong?
Arguably, I could use a different name than "push". But it doesn't change the problem: If I add an 'x' property to my array and later in the history of ES, an Array.prototype.x property is added, my code will break by virtue of engines updating... hmm... That's a worse situation than I initially thought.
David Bruant wrote:
On 28/07/2012 13:43, Herby Vojčík wrote:
David Bruant wrote:
var a = []; a.push = function(elem){ if(condition(elem)){ // do something like change the elem value then do an
actual push // or throw an error // or just ignore this value to avoid duplicates, for instance } else{ Array.prototype.push.call(this, elem) } };
// use a.push (there is an implicit contract on only using .push to
add elements)
(...)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive. Well, yes. But from the philosophical PoV, imho, you should do Object.defineProperty here, because that is what you do (your intent is not "put a value to a's push property"). My intent is "I want a custom 'push' property for this particular array", because I'm filling the array afterwards using .push calls. I don't know what I "should" be doing from a philosophical point of view, but the code written above describes my intention pretty well. If I saw
To be precise, [[Put]] and [[DefineProperty]] are different intents. Developers may not like it, because they are used to [[Put]], but it is probably needed to distinguish them.
[[Put]] is high-level contract (a, update your 'push' facet with value), [[DefineProperty]] is low-level contract (a, add/update your slot named 'push' with value).
I am inclined to see [[Put]] used to shadow methods as an abuse of high-level interface to do low-level patching.
But of course, unless there is nice sugar, everyone uses [[Put]] since it's easier to write (and read).
a call to Object.defineProperty instead, my first reaction would certainly be "but why isn't a regular assignment used here?". A comment could be added to explain the [[CanPut]], but that's what I would call "boilerplate comment".
So far, to the general question "why is Object.defineProperty used instead of a regular assignment used here?", the only answer I find acceptable is "defining custom configurable/writable/enumerable", because these are things local to the code that have no syntax for them. In most cases, getter/setters can be defined in object literals. Adding "the prototype may be frozen, thus preventing shadowing" to the acceptable answers makes local code review harder.
:-/ But that is how it is, no?
Though not very constructive, I'd say this is the case where
a.{ push(elem) { ... } };
is definitely missing. I remembered that .{ semantics was a [[Put]] semantic, so it wouldn't solve the problem. Did I remember something wrong?
Of course. Mustache has the same semantics as extended literal, so it was [[DefineProperty]] with appropriate enum/conf/writ (and setting home context for methods, so in fact it did defineMethod).
On 28/07/2012 14:37, Herby Vojčík wrote:
David Bruant wrote:
On 28/07/2012 13:43, Herby Vojčík wrote:
David Bruant wrote:
var a = []; a.push = function(elem){ if(condition(elem)){ // do something like change the elem value then do an
actual push // or throw an error // or just ignore this value to avoid duplicates, for instance } else{ Array.prototype.push.call(this, elem) } };
// use a.push (there is an implicit contract on only using
.push to add elements)
(...)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive. Well, yes. But from the philosophical PoV, imho, you should do Object.defineProperty here, because that is what you do (your intent is not "put a value to a's push property"). My intent is "I want a custom 'push' property for this particular array", because I'm filling the array afterwards using .push calls. I don't know what I "should" be doing from a philosophical point of view, but the code written above describes my intention pretty well. If I saw
To be precise, [[Put]] and [[DefineProperty]] are different intents.
I don't understand what you're getting at. Let's try to agree on some definitions, first:
- There is my intention which I described above
- there is the JS "VM" (set of primitive operations, like [[Put]] and [[DefineProperty]])
- there is syntax which is expected to be in between, allowing intentions (high-level descriptions) to be translated into the language's primitive operations.
My definition of "intention" is a fairly high-level description. As I said, what I need is to fill my array with "push" calls. How this method ended up here is not part of my intent; that's why I could implement my intention with a new a.__proto__. Then, there is the syntax. "a.push = function(elem){...}" expresses my intent very well: it references the only object for which I want a custom "push", it shows the 'push' name, and the assignment with "=" is part of the programming culture.
So, the way I see it, [[Put]] and [[DefineProperty]] are not intentions. They are operations through which I may be able to implement my use case. As it turns out, both map to one syntactic form. That need not have been the case.
Developers may not like it, because they are used to [[Put]], but it is probably needed to distinguish them.
[[Put]] is high-level contract (a, update your 'push' facet with value), [[DefineProperty]] is low-level contract (a, add/update your slot named 'push' with value).
I am inclined to see [[Put]] used to shadow methods as an abuse of high-level interface to do low-level patching.
Currently, [[Put]] does shadow prototype methods and the sky hasn't fallen. The question in debate is whether [[Put]] should shadow when the prototype is frozen.
a call to Object.defineProperty instead, my first reaction would certainly be "but why isn't a regular assignment used here?". A comment could be added to explain the [[CanPut]], but that's what I would call "boilerplate comment".
So far, to the general question "why is Object.defineProperty used instead of a regular assignment used here?", the only answer I find acceptable is "defining custom configurable/writable/enumerable", because these are things local to the code that have no syntax for them. In most cases, getter/setters can be defined in object literals. Adding "the prototype may be frozen, thus preventing shadowing" to the acceptable answers makes local code review harder.
:-/ But that is how it is, no?
That's what the spec says, but V8 has implemented something else (and I haven't seen an intention to change this behavior), so what the spec says doesn't really matter.
David Bruant wrote:
On 28/07/2012 14:37, Herby Vojčík wrote:
David Bruant wrote:
On 28/07/2012 13:43, Herby Vojčík wrote:
David Bruant wrote:
var a = []; a.push = function(elem){ if(condition(elem)){ // do something like change the elem value then do an
actual push // or throw an error // or just ignore this value to avoid duplicates, for instance } else{ Array.prototype.push.call(this, elem) } };
// use a.push (there is an implicit contract on only using
.push to add elements)
(...)
- I could use Object.defineProperty, but the above code is definitely more readable and intuitive. Well, yes. But from the philosophical PoV, imho, you should do Object.defineProperty here, because that is what you do (your intent is not "put a value to a's push property"). My intent is "I want a custom 'push' property for this particular array", because I'm filling the array afterwards using .push calls. I don't know what I "should" be doing from a philosophical point of view, but the code written above describes my intention pretty well. If I saw To be precise, [[Put]] and [[DefineProperty]] are different intents. I don't understand where you're getting at. Let's try to agree on some definitions, first:
- There is my intention which I described above
- there is the JS "VM" (set of primitive operations, like [[Put]] and [[DefineProperty]])
- there is syntax which is expected to be in between, allowing to translate intentions (high-level descriptions) into the language primitive operations.
I am getting at the philosophical difference between "assignment" and "define", and my point is that "[[Put]] is used wrongly to define methods"; IOW, "[[Put]] should be used to change state (preferably only for that)".
IOW, "a.foo = 42;" is asking an object to change its state. I would underline asking here; it's as if ".foo=" is part of the API of the object.
Your case of "a.push=..." is not this kind of API.
(and I know of course there is no state/behaviour distinction nor am I calling for one and I also know [[Put]] expands an object with new slots if they are not present which can be used to attack "sort of an API" argument; but I still hold to it*)
Whatever, if you still don't understand, don't matter. If I wasn't able to get the message through as of yet, I won't be probably able to do it by more tries anyway.
David
Herby
- This brings me to the idea of "weak prevent-extension" which maybe could be useful: disallowing [[Put]] on nonexistent slots but allowing [[DefineProperty]]. This could be especially useful with objects (that is, results of [[Construct]]), so their shape is "weakly fixed" - it is fixed with respect to assignment, but open to low-level tweaking when extending with some external mixin-like behaviour etc. But I can see that this would probably lead to just using [[DefineProperty]] everywhere, just in case. Which is not good.
It seems like you're indicating that changing a property to a value, presumably a primitive, is somehow different from setting it to a function. Regardless of anything else, that's not true even in the way you mean it because a function can have a thunk that contains state and accomplishes the same thing as setting primitive data type. It just can almost be used for other non-data things too like methods. There's no way to differentiate from a naive standpoint though.
Er "also", not "almost"
Brandon Benvie wrote:
It seems like you're indicating that changing a property to a value, presumably a primitive, is somehow different from setting it to a function.
I read Herby as arguing that overriding a prototype property is low-level, so it must use low-level Object.defineProperty. Due to setters, assignment is by contrast high-level: if you assign to try to create an own property, but there's a prototype setter with the same name, the setter will be invoked.
I agree with Herby.
The problem for Mark Miller, based on SES, not necessarily for David (who could change his code), is that extant JS libraries predate ES5 and use assignment. Perhaps they predate setters in the wild (first in SpiderMonkey over 12 years ago), or the authors didn't think of setters.
Mark's SES workaround actually relies on setters not being overridden by assignment: before freezing common prototype objects that might (according to a clever heuristic) pose a problem, SES enumerates properties on them and replaces each with an accessor whose setter intentionally shadows (since setters receive the target |this|).
Anyway, I don't think function-valued vs. non-function-valued is the issue. It's override-of-data-property (vs. non-override-of-accessor).
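The assignment-vs-defineProperty distinction described above is directly observable: assignment walks the prototype chain and runs an inherited setter, while Object.defineProperty shadows unconditionally (proto, obj, and log are illustrative names):

```javascript
// Assignment is high-level: it consults the prototype chain and
// invokes an inherited setter instead of creating an own property.
// Object.defineProperty is low-level: it shadows unconditionally.
const log = [];
const proto = {
  set push(v) { log.push("setter ran"); }
};
const obj = Object.create(proto);

obj.push = 1; // invokes the inherited setter with this === obj
console.log(log);                                               // [ 'setter ran' ]
console.log(Object.prototype.hasOwnProperty.call(obj, "push")); // false

Object.defineProperty(obj, "push", { value: 2, configurable: true });
console.log(obj.push); // 2: an own property now shadows the setter
```

This is also the mechanism the SES workaround relies on: replacing data properties with accessors whose setters deliberately shadow on the receiver.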
The slide deck (PDF) I used at the meeting for reviewing class semantics is at t.co/PwuF12Y0
The deck (PDF) that reviews quasi literal changes is at bit.ly/PJAav0
Brandon Benvie wrote:
It seems like you're indicating that changing a property to a value, presumably a primitive, is somehow different from setting it to a
I never mentioned a primitive; please don't put words in my mouth.
function. Regardless of anything else, that's not true even in the way
It does not depend on what the value is at all. A function is as good as a number or a plain object or an array or whatever.
The distinction is whether the property is used to store (published) state (from the API PoV) (and that state can be anything), or whether it is more part of an object's infrastructure. That is, what is the primary API of the property name:
1. To hold a (settable) state (so it is primarily read by a.foo and used afterwards in various ways)? Then it should be set by assignment.
2. To use otherwise (most often |a.foo(args)|; another such use is maybe |if (a.isAnimal)| defined in a prototype)? Then it should be set by defineProperty; it is not meant to have an "I am something you should be setting by =" API.
Most often 1. is enumerable and 2. is non-enumerable. It is more or less the same philosophical distinction: between "public API" and "private API".
Sorry, I had just completely misread what you were saying. My fault!
Thanks to Rick for his notes. The discussion on proxy-related issues went fast and touched upon a variety of issues. I complemented Rick's notes on the proxy discussion with some notes on the proxies wiki page, see: harmony:direct_proxies#discussed_during_tc39_july_2012_meeting_microsoft_redmond
I'll clarify further in-line below.
2012/7/28 David Bruant <bruant.d at gmail.com>
On 28/07/2012 01:58, Rick Waldron wrote:
July 26 2012 Meeting Notes
getPrototypeOf trap
TVC: (introduction)
__proto__ writable destroys invariant that [[Prototype]] link is stable
Frozen objects should continue to have stable prototype chain
Frozen objects should continue to have stable [[Prototype]]. You can't guarantee it for the entire chain.
Indeed, the stable [[prototype]] invariant is local (per object), not per entire chain.
getPrototypeOf trap result should be consistent with target object's __proto__
MM: if the proto can be changed, the proxy should…?
TVC: spec interceptable [[Prototype]]
- [[Prototype]] is currently an internal prop
- Would need to become an internal accessor prop or split into [[GetProto]] / [[SetProto]]
- [[GetProto]] / [[SetProto]] would trigger traps for proxies
AWB/BE: This is good
YK: Do we want an analogous setPrototypeOf trap?
TVC: Yes
This is inconsistent with below...
To clarify, I think we'd only need setPrototypeOf if __proto__ would end up being specified as an accessor (to trap Object.getOwnPD(Object.prototype, '__proto__').set.call(proxy))
AWB: If you have capability to set prototype ?
TVC: proxy.__proto__ should just trigger the proxy's get trap
var p = Proxy(target, handler)
p.__proto__ // => handler.get(target, "__proto__", p)
p.__proto__ = x // => handler.set(target, "__proto__", x, p)
If there is a setPrototypeOf trap as said above, it should be handler.setPrototypeOf, no?
No, the purpose of these slides was to show that p.__proto__ continues to remain a normal property get/set for proxy handlers, regardless of how we would end up specifying __proto__.
As mentioned above, setPrototypeOf would only be called to trap a call to the __proto__ setter.
… Trapping instanceof
Function [[HasInstance]]
x instanceof Global answering true if x and Global live in separate frames/windows
var fp = Proxy(targetFunction, handler);
x instanceof fp // handler.hasInstance(targetFunction, x)
MM: Explains concerns originally raised on es-discuss list by David Bruant, but shows the cap-leak is tolerable I'm interested in the demonstration :-)
Mark acknowledged your concerns, but pointed out that currently almost no capability-secure JS code is out there that relies on the fact that instanceof doesn't grant access to the LHS. Even so, most of that code will be Caja code, which can be maintained to avoid the leak. Going forward, we can just explain instanceof as an operator that internally sends a message to the RHS, passing the LHS as an argument. In effect, the implicit capability "leak" would become an explicitly stated capability "grant".
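The proxy hasInstance trap discussed here did not ship as such, but the "message to the RHS, passing the LHS" explanation matches the mechanism that eventually did ship (Symbol.hasInstance); a sketch, with Global as an illustrative name:

```javascript
// instanceof as a message sent to the RHS, with the LHS passed as an
// argument: the implicit capability "leak" becomes an explicit grant.
const Global = {
  [Symbol.hasInstance](x) {
    // Receives the LHS; a real cross-frame check could test a brand here.
    return typeof x === "object" && x !== null;
  }
};

console.log({} instanceof Global); // true
console.log(42 instanceof Global); // false
```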
…
DH: if hasInstance private name on instanceof RHS...
MM: What Object.prototype does private name inherit from? I assume s/Object.prototype/[[Prototype]], here?
Yes.
AWB: Probably null
BE: the E4X any (*) name had null proto in SpiderMonkey, was true singleton in VM
AWB: functions have home context, but no reason for objects to
DH: this is a new idea of value that is not really any object
OH: if it has no properties and no prototype
BE: cannot be forged.
Discussion about unforgeability.
DH: Trapping instanceof use case Does this line mean that DH asked for the use case? questioned it? reminded it? How did it relate to this discussion?
I can't remember. DH did not ask for the use case. The use case we discussed for trapping instanceof is the one previously raised on this list (allowing x instanceof Global to return true even if x originates from another frame).
Trapping Object.isExtensible
Currently Object.isExtensible doesn't trap; same for isSealed, isFrozen
var p = Proxy(target, handler)
Object.isExtensible( p ) => Object.isExtensible
Are there new traps here? The conclusion of this part is hard to understand.
Yes: new traps "isExtensible", "isSealed", "isFrozen", to trap the corresponding Object.* methods. I wouldn't even describe them as "new", they were more of an oversight. Membranes need these traps to accurately reflect the internal extensibility/sealed/frozen state of their wrapped object.
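Of these, the isExtensible trap did ship in the eventual standard (Object.isSealed/isFrozen ended up derived from other traps rather than trapped directly); as shipped, an invariant forces the trap's answer to agree with the target:

```javascript
// The isExtensible trap intercepts Object.isExtensible; an invariant
// forces the trap's answer to agree with the target's actual
// extensibility, keeping membranes accurate.
let trapped = 0;
const target = {};
const p = new Proxy(target, {
  isExtensible(t) {
    trapped++;
    return Reflect.isExtensible(t); // must agree with the target
  }
});

console.log(Object.isExtensible(p)); // true, via the trap
Object.preventExtensions(target);
console.log(Object.isExtensible(p)); // false, still consistent
console.log(trapped); // 2
```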
Direct Proxies: "internal" properties
Issue raised by Jason Orendorff; auto unwrapping is dangerous if built-in methods return non-primitive values
Case:
var arr = [o1, o2, o3]; var it = arr.iterator();
var membraneP = wrap(it);
it.next.call(membraneP)
Solution (?)
Instead of auto-unwrapping, delegate to a nativeCall trap (which auto-unwraps by default)
I don't understand this use case and the problem that comes with it. Is it specific to generators?
It's not specific to generators. See the "nativeCall trap" section on the wiki page.
Proxies and private names
(...) DH: so name.public === name?
MM: I like that
MM: are unique names in?
DH: I think so
If they are, the .public part of private names could be retired with the following setting:
- Private names don't trigger proxy trap calls and are not reflectable at all. This is practically equivalent to calling a trap with a useless public counterpart, from the caller's perspective. From the proxy's perspective, since the public part is useless, being called or not sounds like it would be more or less equivalent.
- Unique names would be trapped and passed unchanged as an argument to the trap (actually, since name.public === name, passing the unique name or its public counterpart is equivalent). If the proxy wants the unique name not to be accessed, it can remove it from the getOwnPropertyNames trap result. So proxies can emulate their own private names.
We still want proxies to intercept private names. It may be that the proxy handler knows about the private name, in which case it has the "capability" to usefully intercept access to it.
BE: are they actually distinguishable?
MM: have to be if name.public === name or name.public !== name distinction
DH: (named) boolean flag to Name constructor
If we have private and unique names, we might as well have two constructors: PrivateName and UniqueName. I find that more readable than "new Name(true)".
DH: do we have some way of reflecting unique names?
TVC: Object.getNames() ?
DH: ugh...
AWB: maybe a flag to Object.getOwnPropertyNames({ unique: true })
BE (editing notes): flags to methods are an API design anti-pattern
What's the conclusion of this part?
As I recall it, the discussion was inconclusive. As stated, I would favor a new operation (like Object.getNames) that makes explicit the fact that it returns name objects, rather than overloading existing methods.
Regarding the extra traps needed for private names: I scanned the list of traps and I think we need to duplicate each trap that takes a property name (a String) as argument, so we'll end up with:
get -> getName
set -> setName
has -> hasName
hasOwn -> hasOwnName
defineProperty -> defineName
deleteProperty -> deleteName
getOwnPropertyDescriptor -> getOwnNameDescriptor?
(the last three names are a bit inconsistent since I don't want to be known as the guy that inflicted getOwnPropertyDescriptorName upon the world ;-)
Tom Van Cutsem wrote:
> BE: are they actually distinguishable?
> MM: have to be if name.public === name or name.public !== name distinction
> DH: (named) boolean flag to Name constructor
If we have private and unique names, we might as well have two constructors: PrivateName and UniqueName. I find that more readable than "new Name(true)".
> DH: do we have some way of reflecting unique names?
> TVC: Object.getNames() ?
> DH: ugh...
> AWB: maybe a flag to Object.getOwnPropertyNames({ unique: true })
> BE (editing notes): flags to methods are an API design anti-pattern
What's the conclusion of this part?
As I recall it, the discussion was inconclusive. As stated, I would favor a new operation (like Object.getNames) that makes explicit the fact that it returns name objects, rather than overloading existing methods.
Yes, I thought we agreed (see "when in doubt, use brute force" ... "if it didn't work, use more brute force") to have more traps: for each trap taking a string identifier, add an optional trap taking a Name identifier.
However, we did not agree on avoiding a flag to overload Name to create private vs. unique name objects, or an options argument to Object.getOwnPropertyNames. But I sense some of us believe that "add a flag option" is undesirable based on experience, both JS-specific and language-neutral:
A. ECMA-262 warns implementations not to extend built-ins with optional implementation-specific parameters, for good reason. OTOH, optional parameters can bite later on their own, even if added by a later edition of the standard. See www.wirfs-brock.com/allen/posts/166.
B. See stackoverflow.com/questions/6107221/at-what-point-does-passing-a-flag-into-a-method-become-a-code-smell, stackoverflow.com/questions/7150987/python-better-to-have-multiple-methods-or-lots-of-optional-parameters, and others. Opinions vary but there's a general consensus that a flag should be avoided if possible.
So do we need UniqueName and PrivateName constructors? If yes, this would not imply that we need to split traps further, of course, since both UniqueName and PrivateName would create Name objects. Or we could use an optional flag parameter to one Name constructor and defy the anti-pattern wisdom.
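A minimal sketch of the two-constructor option, using the name.public convention from the notes; the constructor names and object representation are illustrative, not any agreed design:

```javascript
// Sketch: two constructors instead of new Name(flag).
// Convention from the notes: for a unique name, name.public === name;
// for a private name, name.public is a distinct, non-privileged object.
function UniqueName() {
  const name = Object.create(UniqueName.prototype);
  name.public = name;               // the name is its own public form
  return Object.freeze(name);
}

function PrivateName() {
  const name = Object.create(PrivateName.prototype);
  name.public = Object.freeze({});  // a separate public identity
  return Object.freeze(name);
}

const un = UniqueName();   // un.public === un
const pn = PrivateName();  // pn.public !== pn
```

Two named constructors make the call sites self-describing, which is the readability argument against a boolean flag.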
Regarding the extra traps needed for private names: I scanned the list of traps and I think we need to duplicate each trap that takes a property name (a String) as argument, so we'll end up with:
get -> getName
set -> setName
has -> hasName
hasOwn -> hasOwnName
defineProperty -> defineName
deleteProperty -> deleteName
getOwnPropertyDescriptor -> getOwnNameDescriptor?
(the last three names are a bit inconsistent since I don't want to be known as the guy that inflicted getOwnPropertyDescriptorName upon the world ;-)
Alternatives, call your list 0:

1. Verbose but always end in "Name", requiring "By" in some cases: getName, setName, hasName, hasOwnName, definePropertyByName, deletePropertyByName, getOwnPropertyDescriptorByName.

2. Prefix uniformly FTW: nameGet, nameSet, nameHas, nameHasOwn, nameDefineProperty, nameDeleteProperty, nameGetOwnPropertyDescriptor.
My preference: 2 > 1 > 0. Other opinions?
Tom Van Cutsem wrote:
Hi,
Thanks to Rick for his notes. The discussion on proxy-related issues went fast and touched upon a variety of issues. I complemented Rick's notes on the proxy discussion with some notes on the proxies wiki page, see: harmony:direct_proxies#discussed_during_tc39_july_2012_meeting_microsoft_redmond
Thanks for the wiki update, really useful in my opinion. One note on your first bullet there:
"* If __proto__ is specified in Annex B, consider adding a |getPrototypeOf| trap. This would simplify membranes. Writable __proto__ already destroys the invariant that the [[Prototype]] link is stable. Engines already need to accommodate."

We agreed at the May meeting to require __proto__, so put it in the main spec.

My memory and the meeting notes show that we agreed there to make __proto__ a magic data property. Those favoring an accessor were not voluble, but we talked about poisoning the setter reflection if __proto__ ended up spec'ed as an accessor.
> getPrototypeOf trap result should be consistent with target object's proto
>
> MM: if the proto can be changed, the proxy should…?
>
> TVC: spec interceptable [[Prototype]]
> [[Prototype]] is currently an internal prop
> Would need to become internal accessor prop or split into [[GetProto]] / [[SetProto]]
> [[GetProto]] / [[SetProto]] would trigger traps for proxies
>
> AWB/BE: This is good
>
> YK: Do we want an analogous setPrototypeOf trap?
>
> TVC: Yes

This is inconsistent with below...
To clarify, I think we'd only need setPrototypeOf if __proto__ would end up being specified as an accessor (to trap Object.getOwnPD(Object.prototype, '__proto__').set.call(proxy)).

That's another reason not to reflect __proto__ as an accessor with a callable setter. Either magic data, or poisoned set (and get?) accessor. We should strive to agree on this, since SpiderMonkey went against the decision from May, joining JSC in reflecting __proto__ as an accessor.
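The trap pair under discussion can be illustrated with a handler sketch. This assumes an engine whose direct proxies expose getPrototypeOf/setPrototypeOf traps (which is how this proposal eventually landed); the variable names are illustrative:

```javascript
// Sketch: trapping [[GetProto]] / [[SetProto]] on a proxy.
const target = {};
const log = [];
const proxy = new Proxy(target, {
  getPrototypeOf(t) {
    log.push("getPrototypeOf");
    return Reflect.getPrototypeOf(t);  // stay consistent with the target
  },
  setPrototypeOf(t, proto) {
    log.push("setPrototypeOf");
    return Reflect.setPrototypeOf(t, proto);
  }
});

Object.getPrototypeOf(proxy);        // triggers the getPrototypeOf trap
Object.setPrototypeOf(proxy, null);  // triggers the setPrototypeOf trap
```

Forwarding to Reflect keeps the trap results consistent with the target, which matters for membranes that must mirror a mutable [[Prototype]].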
On Jul 28, 2012, at 5:37 AM, Herby Vojčík wrote:
...
To be precise, [[Put]] and [[DefineProperty]] are different intents. Developers may not like it, because they are used to [[Put]], but it is probably needed to distinguish them.
[[Put]] is high-level contract (a, update your 'push' facet with value), [[DefineProperty]] is low-level contract (a, add/update your slot named 'push' with value).
I am inclined to see [[Put]] used to shadow methods as an abuse of high-level interface to do low-level patching.
But of course, unless there is nice sugar, everyone uses [[Put]] since it's easier to write (and read).
I think there is a very important point here that I hope we don't lose in the weeds of this discussion. The distinction between assignment and definition (ie, between [[Put]] and [[DefineOwnProperty]]) was not very important when all ES had was data properties and there was no way for ES code to manipulate property attributes. In those pre-ES5 days, [[DefineOwnProperty]] didn't even exist and the installation of object literal properties was specified using [[Put]] semantics. In those days, it was fine to think of property definition as simply an assignment (ie the = operator or [[Put]]) to an unused property name.

However, as soon as we have things like accessor properties, pragmatically configurable attributes, methods with super bindings, real inheritance hierarchies, classes, etc. the distinction between assignment and definition becomes much more important. Continuing to conflate them is going to lead to increasing confusion. The "override mistake" issue is just the first and simplest of the sort of issues that result. In post-ES5, programmers really need to learn and use the distinction between property assignment and property definition. To ensure this, we need to provide language features that guide them towards this understanding and proper usage.

Herby correctly identifies where we stand right now. ES developers need and want something that approaches the convenience of = for dynamically defining properties. As long as we only have a procedural API (Object.defineProperty) for dynamic property definition most won't learn the distinction, and even those that do will frequently ignore it for the sake of convenience. ES6 needs a concise and friendly way to dynamically define properties. The syntax needs to approach the convenience of = but it needs to bring emphasis to the distinction between assignment and definition. Without it, ES5+ES6 will collectively result in a more confusing and error prone language.
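The "override mistake" referred to above can be shown directly with today's reflection API; a minimal sketch (the property names are arbitrary):

```javascript
// Sketch: assignment ([[Put]]) vs. definition ([[DefineOwnProperty]]).
// Assignment consults the prototype chain; definition acts only on the receiver.
const proto = Object.defineProperty({}, "push", {
  value: function () { return "proto push"; },
  writable: false,      // e.g. a frozen prototype -- the "override mistake" setup
  configurable: true
});
const a = Object.create(proto);

// [[Put]]: rejected because the *inherited* 'push' is non-writable
// (throws in strict code, fails silently otherwise).
let putTookEffect = true;
try { a.push = function () { return "own push"; }; }
catch (e) { putTookEffect = false; }
putTookEffect = putTookEffect &&
  Object.getOwnPropertyDescriptor(a, "push") !== undefined;

// [[DefineOwnProperty]]: succeeds regardless of the inherited attribute.
Object.defineProperty(a, "push", {
  value: function () { return "own push"; },
  writable: true, configurable: true, enumerable: true
});
```

The assignment never takes effect, while the definition shadows the inherited method without consulting it.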
More below...
a call to Object.defineProperty instead, my first reaction would certainly be "but why isn't a regular assignment used here?". A comment could be added to explain the [[CanPut]], but that's what I would call "boilerplate comment".
So far, to the general question "why is Object.defineProperty used here instead of a regular assignment?", the only answer I find acceptable is "defining custom configurable/writable/enumerable", because these are things local to the code that have no syntax for them. In most cases, getter/setters can be defined in object literals. Adding "the prototype may be frozen, thus preventing shadowing" to the acceptable answers makes local code review harder.
:-/ But that is how it is, no?
Though not very constructive, I'd say this is the case where
a.{ push(elem) { ... } };
is definitely missing. I remembered that .{ semantics was a [[Put]] semantic, so it wouldn't solve the problem. Did I remember something wrong?
Of course. Mustache has the same semantics as extended literal, so it was [[DefineProperty]] with appropriate enum/conf/writ (and setting home context for methods, so in fact it did defineMethod).
I still think a dynamic property definition syntax can be based on something like mustache. Two months ago, there was interest at the TC39 meeting in further exploring mustache. Some of that interest was motivated by these definitional issues. However, our attempt to do this crashed and burned badly because we tried to accommodate a desire to make the same syntactic construct also serve as a "cascade". However, cascades require [[Put]]/[[Get]] semantics and this is in direct conflict with the requirements of dynamic property definition. Confusion about this is reflected in the quotes immediately above. We should have recognized before we even tried that trying to combine those two semantics just won't work.
However, here is the sketch of a proposal for something that might work.
We introduce a new operator that looks like :=

This is the "define properties" operator. Both the LHS and RHS must be objects (or ToObject convertible). Its semantics is to [[DefineOwnProperty]] on the LHS obj a property corresponding to each RHS own property. It does this with all reflectable own properties. It includes non-enumerable properties and unique named properties but not non-reflectable private name properties. It rebinds methods with super bindings to the RHS to new methods that are super bound to the LHS.
The above example would then be written as:
a := { push(elem) { ... } };
rather than, perhaps incorrectly as:
a.push = function (elem) { ... };
or, correctly but very inconveniently as:
Object.defineProperty(a, "push", {writable: true, configurable: true, enumberable: true, data:function (elem) { ... } } );
Note that while the above example uses an object literal as the RHS, it could be any object. So, := is essentially an operator-level definition of one plausible semantics for an Object.extend function. Using an operator has usability advantages and it also makes it easier to optimize the very common case where the RHS will be a literal.

:= is used because it is suggestive of both property definition (the use of : in object literals) and of assignment (the = operator). := also has a long history of use as an assignment-like operator in programming languages. The visual similarity of = and := should push ES programmers to think about them as situational alternatives whose semantic differences must be carefully considered. The simple story is that one is used for assigning a value to an existing property and the other is used to define or over-ride the definition of properties.
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
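The reflectable-own-properties part of the := semantics described above is already expressible as a function today. A sketch under the hypothetical name update, omitting super rebinding and private names since neither has any reflection here:

```javascript
// Sketch: approximate := as a function. Copies every reflectable own
// property of rhs onto lhs via [[DefineOwnProperty]], preserving each
// descriptor (including non-enumerable ones), rather than using [[Put]].
function update(lhs, rhs) {
  for (const key of Object.getOwnPropertyNames(rhs)) {
    Object.defineProperty(lhs, key, Object.getOwnPropertyDescriptor(rhs, key));
  }
  return lhs;
}

const base = {};
update(base, { push(elem) { return elem; } });  // 'push' is defined, not assigned
```

Because each property is copied by descriptor, accessors stay accessors and non-enumerable properties come along, which is exactly where a [[Put]]-based Object.extend loop would differ.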
Finally, this discussion caused me to realize that I messed-up on an important detail when I prepared and presented the class semantics deck (t.co/PwuF12Y0) at the TC39 meeting.
In the deck, I incorrectly stated that I was proposing that the attributes associated with a property created via a concise method definition (in a class or object literal definition) should have the attributes {writable: true, configurable: false}. I had a hard time defending that choice at the meeting. There is a good reason for this: that attribute combination was never what I intended, but I ended up trying to defend what I wrote rather than what I really wanted.
Instead, what I meant to say was {writable: false, configurable: true}. Brendan and perhaps others have in the past expressed that this is a strange combination because it disallows assigning a value but changing the value can still be accomplished using Object.defineProperty. The above discussion gives me the concepts necessary to explain the motivation for this design. It's simple: this attribute combination disallows [[Put]] updates while allowing [[DefineOwnProperty]] updates, just as described above. A concise method definition should be conceptualized as defining an invokable method, not an assignable data property. That is exactly what my desired attribute combination supports. writable: false means that [[Put]] expressed via the = operator cannot be used to modify such a method, even if it is inherited. configurable: true says that Object.defineProperty or the := operator proposed above may be used to modify or over-ride the method definition. Consider a class definition such as:
class Stack {
  push(elem) {...}
  pop() {...}
  isEmpty() {...}
  constructor() {...}
}

let stk = new Stack;
...
stk.push = 5;
The last statement is probably not doing what the user intended when they mind-farted that line of code. With my proposed attributes this assignment won't create a probably unintended instance-specific over-ride of the method defined by the class. If this is strict code it will throw. If an ES programmer really wanted to do such an over-ride, their intention is much clearer if they have to say:
stk := {push: 5}; //over-ride class provided method with an assignable data property
or
stk := {push(elem=null) {...}} //over-ride class provided method with an instance specific method
In summary, concise methods are new syntax for defining properties. To me, it makes a lot of sense that the properties they create do not allow =/[[Put]] updates. := is a better mustache that helps distinguish [[Put]]/[[DefineOwnProperty]] intents. We need both.
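The attribute combination argued for here can be tried out today with Object.defineProperty; a minimal sketch (object and method names are arbitrary):

```javascript
// Sketch: a method with {writable: false, configurable: true} --
// [[Put]] updates are rejected, [[DefineOwnProperty]] updates allowed.
const stk = Object.defineProperty({}, "push", {
  value: function (elem) { return "class push"; },
  writable: false,
  configurable: true
});

// = / [[Put]] cannot modify the method (throws in strict code,
// fails silently otherwise):
let assigned = true;
try { stk.push = 5; } catch (e) { assigned = false; }
assigned = assigned && stk.push === 5;

// ...but a deliberate override via definition still works,
// because the property is configurable:
Object.defineProperty(stk, "push", {
  value: function (elem) { return "patched push"; }
});
```

Redefining the value through Object.defineProperty is permitted despite writable: false precisely because configurable is true, which is the point of the combination.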
Allen Wirfs-Brock wrote:
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular. I could see adding it as a winning and better user interface to Object.defineProperty and even Object.extend.
However, Object.extend is usable in old and new browsers, with polyfill. Object.extend is the cowpath trod by many happy cows. Requiring a transpiler is harsh. Should we entertain both := and Object.extend, or perhaps the better name to avoid colliding with PrototypeJS, Object.define or Object.update?
Finally, this discussion caused me to realize that I messed-up on an important detail when I prepared and presented the class semantics deck (t.co/PwuF12Y0) at the TC39 meeting.
In the deck, I incorrectly stated that I was proposing that the attributes associated with a property created via a concise method definition (in a class or object literal definition) should have the attributes {writable: true, configurable: false}. I had a hard time defending that choice at the meeting.
It wasn't so obviously wrong, it's what var binds outside of eval.
However TC39 favored configurable: true as well as writable: true, to match JS expectations, in particular that one could reconfigure a data property (which is what method definition syntax in a class body creates) with a workalike accessor.
That's in the meeting notes, and I still think it is the winner. More below.
There is a good reason for this, that attribute combination was never what I intended but I ended up trying to defend what I wrote rather than what I really wanted.
Without presenting :=, what you wanted had no precedent and is "weak integrity", the kind of half-measure MarkM and I decried.
Instead, what I meant to say was {writable: false, configurable: true}. Brendan and perhaps others have in the past expressed that this is a strange combination because it disallows assigning a value but changing the value can still be accomplished using Object.definePropertry. The above discussion gives me the concepts necessary to explain the motivation for this design. It's simple, this attribute combination disallows using [[Put]] updates while allowing [[DefineOwnProperty]] updates, just as described above.
It's not that simple, alas. Even if we add := and users come to understand and use it well alongside = (requiring transcompilation in the near term), do users increasingly opt into "use strict"? Not clear. Yet "use strict" is the better path to find failed assignment attempts to non-writable properties expressed using =, and replace those with :=.

Without "use strict", assuming good test coverage, one has to notice that = failed to update the LHS. Tests rarely flex their bindings after reassignment to check for non-writable silent non-strict failure, and real code is even less prone to doing so.
On balance, I like := as a complement to =, but I'm leery of new-version-only thinking that leaves out Object.extend or better-named use-cases. And I am skeptical that any of this means non-writable configurable is a sane attribute combo for methods.
On Sat, Jul 28, 2012 at 9:04 PM, Allen Wirfs-Brock <allen at wirfs-brock.com>wrote:
snip
I snipped, but I agree with all of your claims. While evangelizing our intention to try for a .{} that supported [[Put]] and [[DefineOwnProperty]], we presented something like this...
.{ a: "apple", b = "banana" };
The number one resistance to the mixed use of ":" and "=" was that most developers did not realize there was a semantic difference, and actually expected us to assume the burden of specifying the magic that would make this work correctly with just ":".
I submit the following survey results to support the above claim, docs.google.com/spreadsheet/ccc?key=0Ap5RnGLtwI1RdDN3dm92aVJwWEZCMEU3RUN5OTdRTWc
(The live form with the survey question is here: docs.google.com/spreadsheet/viewform?formkey=dDN3dm92aVJwWEZCMEU3RUN5OTdRTWc6MQ)
Pay specific attention to the comments where object literal syntax is frequently suggested as preferential.
...more below
We introduce a new operator that looks like :=
I like this.
This is the "define properties" operator. Both the LHS and RHS must be objects (or ToObject convertible). Its semantics is to [[DefineOwnProperty]] on the LHS obj a property corresponding to each RHS own property. It does this with all reflectable own properties. It includes non-enumerable properties and unique named properties but not non-reflectable private name properties. It rebinds methods with super bindings to the RHS to new methods that are super bound to the LHS.
The above example would then be written as:
a := { push(elem) { ... } };
rather than, perhaps incorrectly as:
a.push = function (elem) { ... };
or, correctly but very inconveniently as:
Object.defineProperty(a, "push", {writable: true, configurable: true, enumberable: true, data:function (elem) { ... } } );
Is there a mechanism for customizing "writable: true, configurable: true, enumberable: true"?
Note that while the above example uses an object literal as the RHS, it could be any object. So, := is essentially an operator-level definition of one plausible semantics for an Object.extend function. Using an operator has usability advantages and it also makes it easier to optimize the very common case where the RHS will be a literal.

:= is used because it is suggestive of both property definition (the use of : in object literals) and of assignment (the = operator). := also has a long history of use as an assignment-like operator in programming languages. The visual similarity of = and := should push ES programmers to think about them as situational alternatives whose semantic differences must be carefully considered. The simple story is that one is used for assigning a value to an existing property and the other is used to define or over-ride the definition of properties.
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
As noted above, there I feel there is sufficient evidence that supports the existing confusion and I agree that a syntactic distinction would help reshape understanding as we move forward.
2012/7/28 Brendan Eich <brendan at mozilla.org>
Tom Van Cutsem wrote:
As I recall it, the discussion was inconclusive. As stated, I would favor a new operation (like Object.getNames) that makes explicit the fact that it returns name objects, rather than overloading existing methods.
Yes, I thought we agreed (see "when in doubt, use brute force" ... "if it didn't work, use more brute force") to have more traps: for each trap taking a string identifier, add an optional trap taking a Name identifier.
As I recall, we agreed that we needed at least extra traps for all traps that took a string identifier as an argument, but we did not talk through whether we also needed extra traps for all traps that return (an array of) string identifiers (that would be "keys", "getOwnPropertyNames" and "enumerate").
My sense is we'll need alternatives for keys and gOPN (listing only unique names?). There should be no need for a Name alternative to "enumerate" since the for-in loop only ever enumerates string-valued property names.
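For comparison, this is essentially the split that symbols (the eventual ES2015 successor of unique names) ended up with: string-keyed and symbol-keyed listings are separate operations, and for-in sees only string keys. A sketch with symbols as stand-ins:

```javascript
// Sketch: separate listing operations for string keys vs. name-like keys,
// modeled with symbols standing in for unique names.
const nameKey = Symbol("unique");
const obj = { ordinary: 1, [nameKey]: 2 };

const stringKeys = Object.getOwnPropertyNames(obj);   // string keys only
const symbolKeys = Object.getOwnPropertySymbols(obj); // name-like keys only

// for-in likewise only ever enumerates string-valued property names:
const enumerated = [];
for (const k in obj) enumerated.push(k);
```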
However, we did not agree on avoiding a flag to overload Name to create private vs. unique name objects, or an options argument to Object.getOwnPropertyNames. But I sense some of us believe that "add a flag option" is undesirable based on experience, both JS-specific and language-neutral:
We could go for Name.createUnique() and Name.createPrivate(). But with modules, maybe it's no longer an issue to just introduce 2 separate Name constructors.
[...]
So do we need UniqueName and PrivateName constructors? If yes, this would not imply that we need to split traps further, of course, since both UniqueName and PrivateName would create Name objects. Or we could use an optional flag parameter to one Name constructor and defy the anti-pattern wisdom.
Just to be clear: the unique vs. private split for Names is orthogonal to the name vs. string split for proxy traps. So indeed: splitting Names shouldn't cause further splits for proxy traps.
Regarding the extra traps needed for private names: I scanned the list of traps and I think we need to duplicate each trap that takes a property name (a String) as argument, so we'll end up with:
get -> getName
set -> setName
has -> hasName
hasOwn -> hasOwnName
defineProperty -> defineName
deleteProperty -> deleteName
getOwnPropertyDescriptor -> getOwnNameDescriptor?
(the last three names are a bit inconsistent since I don't want to be known as the guy that inflicted getOwnPropertyDescriptorName upon the world ;-)
Alternatives, call your list 0:

1. Verbose but always end in "Name", requiring "By" in some cases: getName, setName, hasName, hasOwnName, definePropertyByName, deletePropertyByName, getOwnPropertyDescriptorByName.

2. Prefix uniformly FTW: nameGet, nameSet, nameHas, nameHasOwn, nameDefineProperty, nameDeleteProperty, nameGetOwnPropertyDescriptor.
My preference: 2 > 1 > 0. Other opinions?
Ok, I'm swayed by your argument in favor of a consistent prefix or suffix. I'd prefer 1 (suffix) as the selectors read more naturally as requests rather than as commands.
We might still consider shortening getOwnPropertyDescriptorByName (30 chars!) to getOwnPropertyByName. After all, there's no corresponding Object.* method to match. A new matching method will live in the reflect module though.
Which reminds me: we can now stop adding more methods such as the hypothetical Object.getNames to Object and consider Reflect.getNames instead (probably Reflect.getUniqueNames).
(Note that we already have Object.getOwnPropertyNames which has nothing to do with "Names". We should think carefully about how to avoid confusion.)
On Jul 28, 2012, at 6:58 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular. I could see adding it as a winning and better user interface to Object.defineProperty and even Object.extend.
However, Object.extend is usable in old and new browsers, with polyfill. Object.extend is the cowpath trod by many happy cows. Requiring a transpiler is harsh. Should we entertain both := and Object.extend, or perhaps the better name to avoid colliding with PrototypeJS, Object.define or Object.update?
I think we should view := as new syntax that is primarily intended to be used in combination with other new syntax such as concise methods, class definitions, super, etc. that also require transpiling for use in older versions. It is neither more nor less harsh than any other new syntax. When we incorporate new syntax we are primarily making an investment for future ES programmers. Transition issues need to be considered but I think that for ES, the future is still much longer and larger than the past.
The problem with Object.extend is that it isn't a single cow path. There are multiple paths leading in the same general direction but taking different routes. This was the case in 2008 when we considered adding Object.extend for ES5 and it is even more so now. We could add a completely new function such as Object.update, but I wonder if that is really needed. Frameworks seem to be doing a fine job providing their own variants of Object.extend-like functions that are fine tuned to match their own abstraction models and other requirements. A polyfill with semantics that are different from someone's favorite framework might just cause confusion, even if it uses a different name. Are things still going to work if I use Object.update instead of PrototypeJS's Object.extend in a PrototypeJS environment? Maybe not? Same for other frameworks and other extend-like functions. Rather than sowing confusion in the current ES3/5 framework world with a new polyfill, it might be better to simply leave things be WRT a standardized extend-like function. := would be a new ES6 syntactic form that works in combination with other new ES6 syntactic forms. Legacy code and frameworks with their own extend-like functions would all continue to work in ES6. New ES6 code probably doesn't need a procedural form of := (or if they do they could easily define it: Object.update = (obj1, obj2) => obj1 := obj2; ).
Cowpaths are important for telling us where the cows need to go, but they are constrained by the current terrain. Introducing a syntactic operator such as := is like building an elevated freeway that goes straight to the destination above the current cowpaths. It allows the old cows to continue to follow their established paths for as long as they need to, but doesn't constrain future high-speed travelers to following those old paths.
Finally, this discussion caused me to realize that I messed-up on an important detail when I prepared and presented the class semantics deck (t.co/PwuF12Y0) at the TC39 meeting.
In the deck, I incorrectly stated that I was proposing that the attributes associate with a property created via a concise method definition (in a class or object literal definition) should have the attributes {writable: true, configurable: false}. I had a hard time defending that choice at the meeting. ... On balance, I like := as a complement to =, but I'm leery of new-version-only thinking that leaves out Object.extend or better-named use-cases. And I am skeptical that any of this means non-writable configurable is a sane attribute combo for methods.
I made my case above. I think this is a situation where new-version-only feature design is fine (but getting there requires thinking about old versions). The current framework writers have things under-control for current/old versions. Sure it would have been great if there had been a standard extend-like function prior to the creation of modern frameworks, but there wasn't. Rather than throwing ripples through the current frameworks it may be better for us to focus on how new things will be done with the new version.
On Jul 29, 2012, at 6:05 AM, Herby Vojčík wrote:
Brendan Eich wrote:
...
However TC39 favored configurable: true as well as writable: true, to match JS expectations, in particular that one could reconfigure a data property (which is what method definition syntax in a class body creates) with a workalike accessor.
I don't understand here.
- Method syntax does not create data property (well, technically, yes, but I already think in terms of configurable: true, writable: false which was present some time already in wiki / es-discuss).
- To replace data property with workalike accessor, you use Object.defineProperty, so why would you need writable: true? It is not needed there at all.
Yes, this is close to what I was thinking. While "concise methods" are implemented as "data properties" they should not be thought of as part of the mutable state contract of an object. Conceptually, they are not the same thing as a closure-valued "instance variable". I expect that future ES6 style guides will say something like:

Use concise method notation to define behavior properties of an object whose modification is not part of the object's contract. Use : data property notation to define state properties that are expected to be modified. Always use : data properties in cases where the dynamic modification of a function-valued property is expected and part of the object's contract. For example:
class Foo {
  report() { this.callback() }  // a prototype method that is a fixed part of the Foo interface.
  constructor(arg) {
    this := {
      callback: () => undefined,  // default per-instance callback. clients are expected to modify
      doIt() { doSomethingWith(this, arg) }  // a per-instance method that captures some constructor state, clients are not expected to modify
    }
  }
}

let f = new Foo(thing);
f.callback = () => console.log('called back');  // this sort of assignment is expected.

f.doIt = function () {...};  // this isn't expected. It is patching the class definition. Avoid this.
f := { doIt() {...} };  // instead this is how you should patch class definitions.
The concise method attribute values I suggested were intended as a means of making this guideline a bit more than just a convention.
Allen Wirfs-Brock wrote:
On Jul 28, 2012, at 6:58 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
> I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=

I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular. I could see adding it as a winning and better user interface to Object.defineProperty and even Object.extend.
However, Object.extend is usable in old and new browsers, with polyfill. Object.extend is the cowpath trod by many happy cows. Requiring a transpiler is harsh. Should we entertain both := and Object.extend, or perhaps the better name to avoid colliding with PrototypeJS, Object.define or Object.update?
I think we should view := as new syntax that is primarily intended to be used in combination with other new syntax such as concise methods, class definitions, super, etc. that also require transpiling for use in older versions. It is neither more nor less harsh than any other new syntax. When we incorporate new syntax we are primarily making an investment for future ES programmers. Transition issues need to be considered but I think that for ES, the future is still much longer and larger than the past.
Yes, I agree with that (as stated; it doesn't help with balancing polyfillability or making the right call on configurable+-writable).
The problem with Object.extend is that it isn't a single cow path. There are multiple paths leading in the same general direction but taking different routes. This was the case in 2008 when we considered adding Object.extend for ES5 and it is even more so now. We could add a completely new function such as Object.update, but I wonder if that is really needed.
The JSFixed project had Object.extend among its curated/moderated outcomes and I think it's a reasonable request. We rolled up Function.prototype.bind into ES5 in spite of several differences among the leading implementations (Dojo hitch, Prototype bind, etc.) and we changed the ES5 draft as we went.
Frameworks seem to be doing a fine job providing their own variants of Object.extend-like functions that are fine tuned to match their own abstraction models and other requirements.
This is not a sufficient argument on its face, since we got bind into ES5 in spite of variation.
Anyway, given the issue I raised about lack of writability being hard to test, or really: unlikely to be tested in practice, I don't think := (a good idea) motivates configurable+non-writable.
Le 28/07/2012 21:04, Allen Wirfs-Brock a écrit :
(...) We introduce a new operator that looks like :=
This is the "define properties" operator. Both the LHS and RHS must be objects (or ToObject convertible). Its semantics is to [[DefineOwnProperty]] on the LHS object a property corresponding to each RHS own property. It does this with all reflectable own properties. It includes non-enumerable properties and unique-named properties but not non-reflectable /private/ name properties. It rebinds methods with super bindings to the RHS to new methods that are super-bound to the LHS.
The above example would then be written as:
a := { push(elem) { ... } }; rather than, perhaps incorrectly as:
a.push = function (elem) { ... };
or, correctly but very inconveniently as:
Object.defineProperty(a, "push", {writable: true, configurable: true, enumberable: true, data:function (elem) {
I see the typo here ('data' instead of 'value') as one of the most brilliant and unexpected examples of this inconvenience :-) And I'm not even talking about 'enumberable', which I also trip over almost all the time, to the point of making this syntax (en)um-bearable!
... } }
);
(...)
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
That's an interesting view on things. To me, it would make acceptable the idea of = being unreliable locally without prior knowledge (which, as noted, kind-of-already-is because of inherited setters) while := (which is more ':={}' actually, aka big-lips-guy) enables reliable local review without prior knowledge, proxy pathological cases aside.
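The copy semantics Allen describes for := (super-rebinding aside) can be sketched as a plain ES5 function; `define` here is a hypothetical helper name used for illustration, not a proposed API:

```javascript
// Hypothetical define(lhs, rhs): approximates the := copy semantics by
// using [[DefineOwnProperty]] for each own reflectable property of the
// RHS, preserving accessors, enumerability and writability. Unlike
// assignment, this never triggers inherited setters. Super-rebinding is
// left out of this sketch.
function define(target, source) {
  Object.getOwnPropertyNames(source).forEach(function (key) {
    Object.defineProperty(target, key,
      Object.getOwnPropertyDescriptor(source, key));
  });
  return target;
}

var a = [];
define(a, { push: function (elem) { return "custom push: " + elem; } });
console.log(a.push(1)); // "custom push: 1" (own property shadows Array.prototype.push)
```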
Le 28/07/2012 15:16, Tom Van Cutsem a écrit :
> Trapping instanceof
>
> Function [[HasInstance]]
>
> x instanceof Global answering true if x and Global live in separate frames/windows
>
> var fp = Proxy(targetFunction, handler);
>
> x instanceof fp // handler.hasInstance(targetFunction, x)
>
> MM: Explains concerns originally raised on es-discuss list by David Bruant, but shows the cap-leak is tolerable

I'm interested in the demonstration :-)
Mark acknowledged your concerns, but pointed out that currently almost no capability-secure JS code is out there that relies on the fact that instanceof doesn't grant access to the LHS. Even so, most of that code will be Caja code, which can be maintained to avoid the leak. In going forward, we can just explain instanceof as an operator that internally sends a message to the RHS, passing the LHS as an argument. In effect, the implicit capability "leak" would become an explicitly stated capability "grant".
Interesting way of viewing the situation. I guess that's fine. Static analysis can give all "instanceof" occurrences and likely help to easily spot unintended grants.
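The "message to the RHS" reading of instanceof can be illustrated with the mechanism that eventually shipped in ES2015 as Symbol.hasInstance (the trap shape discussed in these notes, handler.hasInstance, differed):

```javascript
// instanceof as a message sent to the RHS, passing the LHS as an
// argument -- the implicit capability "leak" becomes an explicit grant.
var Global = {
  [Symbol.hasInstance](x) {
    // The RHS now holds a reference to the LHS and decides the answer.
    return x !== null && typeof x === 'object';
  }
};

console.log({} instanceof Global); // true
console.log(42 instanceof Global); // false
```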
> # Proxies and private names
> (...)
> DH: so name.public === name?
>
> MM: I like that
>
> MM: are unique names in?
>
> DH: I think so

If they are, the .public part of private names could be retired with the following setting:

* Private names don't trigger proxy trap calls and are not reflectable at all. This is practically equivalent to calling a trap with a useless public counterpart, from the caller's perspective. From the proxy's perspective, since the public part is useless, being called or not sounds like it would be more or less equivalent.
* Unique names would be trapped and passed unchanged as an argument to the trap (actually, since name.public === name, passing the unique name or its public counterpart is equivalent). If the proxy wants the unique name not to be accessed, it can remove it from the getOwnPropertyNames trap result. So proxies can emulate their own private names.
We still want proxies to intercept private names. It may be that the proxy handler knows about the private name, in which case it has the "capability" to usefully intercept access to it.
But it may be that the proxy doesn't know the private name for the very reason that the name holders do not want any proxy to know it. In that situation, why would the proxy trap be called? The proxy cannot make any constructive use of the public part without the private counterpart, apart maybe from storing all public names it can and waiting for a private name leak. Not trapping offers security by default.
There is certainly the case of dynamic granting, where you give access to the name only later, but I can't think of a practical use case of that yet. In any case, if that's what is wanted, it could still be built on top of unique names and unintercepted private names.
I see 3 use cases involving names and proxies:

(1) Everyone knows the name, so no need to do anything convoluted to hide the name from the proxy. Unique names solve this one elegantly.

(2) By default, no proxy should have access to the name. The current proposal has the "trap the public counterpart" design, where an attacker can wait for a leak; I say that if proxies shouldn't know, they might as well not trap at all.

(3) Some proxies can know the name but not others. That's a complicated use case that requires a complicated setting in any case. The current proposal for this use case requires a mixture of weakmaps and private names (and their public counterparts). Here my counter-proposal needs a mixture of weakmaps, private names and unique names, which is more or less the same thing, except that you have to maintain the public <-> private mapping yourself in both directions, while the current proposal gives private -> public for free (you have to maintain the other direction yourself anyway).
Like everyone on the list, I have no experience using proxies and private names in code, but I have the feeling that use cases 1 and 2 cover 80% of the needs (maybe more?), so there is no need to make (2) more complicated and potentially less secure. Whatever the percentage of use case 3 is, it requires a complex setting anyway and, as I showed, in both cases an equivalent amount of work.
My logic here is: give simple and safe constructs for simple use cases, give experts flexible tools to build arbitrarily complex settings.
Is there a disagreement on my analysis? Is there a benefit in the "public counterpart" design I'm forgetting?
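For comparison with the eventual outcome: ES2015 symbols took roughly the role of unique names, and the .public counterpart was indeed retired; proxy traps receive the symbol itself. A sketch of that end state (not of the proposal debated here):

```javascript
// With symbols, a proxy trap sees the property key directly; a handler
// that holds the symbol can usefully intercept, one that doesn't cannot
// forge it -- no separate public counterpart is involved.
const secret = Symbol('secret');
const target = { [secret]: 42 };
const trappedKeys = [];

const p = new Proxy(target, {
  get(t, key, receiver) {
    trappedKeys.push(key);              // the trap receives the symbol itself
    return Reflect.get(t, key, receiver);
  }
});

console.log(p[secret]);                 // 42
console.log(trappedKeys[0] === secret); // true
```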
On Jul 30, 2012, at 1:08 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
...
The problem with Object.extend is that it isn't a single cow path. There are multiple paths leading in the same general direction but taking different routes. This was the case in 2008 when we considered adding Object.extend for ES5, and it is even more so now. We could add a completely new function such as Object.update, but I wonder if that is really needed.
The JSFixed project had Object.extend among its curated/moderated outcomes and I think it's a reasonable request. We rolled up Function.prototype.bind into ES5 in spite of several differences among the leading implementations (Dojo hitch, Prototype bind, etc.) and we changed the ES5 draft as we went.
Adding Object.extend is not totally comparable to adding Function.prototype.bind in ES5. The big difference is that the core semantics of the various framework-provided bind functions and the ES5 bind were essentially identical. The differences only involved very obscure edge cases. This enabled the ES5 bind to replace the frameworks' binds (and vice versa) with minimal disruption. There isn't anything like a universally accepted semantics for Object.extend. In particular, the semantics proposed [1] by JSFixed is different from the semantics defined for that same name by prototype.js [2]. Neither of them correctly deals with accessor properties.
I think the JSFixed proposal [1] (and its associated issue discussion [3]) is a strong indication that there is significant perceived utility in a feature that enables bulk replication of properties from one object to another. However, there is almost no discussion in [3] of the compatibility impact of standardizing a function named Object.extend. Given that and the semantic issues (handling of accessors, etc.), I don't think the exact JSFixed proposal is particularly reasonable. More strongly, I think that adding a standard function named Object.extend is likely to be disruptive. I don't really object to a polyfillable function that has the same semantics as that proposed for :=, as long as it does not have a name that conflicts with widely used legacy code. I do, however, question the necessity of such a function in light of the current adequate support provided by frameworks.
Frameworks seem to be doing a fine job providing their own variants of Object.extend-like functions that are fine-tuned to match their own abstraction models and other requirements.
This is not a sufficient argument on its face, since we got bind into ES5 in spite of variation.
I really think the situation is different this time. The commonly used semantics of ES5's bind did not differ significantly from any other widely used implementation of a Function.prototype.bind method, so replacing one with the other wasn't disruptive. Object.extend and the similar but differently named or located framework functions are not nearly as well aligned in their core semantics.
Anyway, given the issue I raised about lack of writability being hard to test, or really: unlikely to be tested in practice, I don't think := (a good idea) motivates configurable+non-writable.
Yes, a separate issue. I still think configurable+non-writable is defensible (actually better) but it's a pretty minor issue that I won't lose sleep over.
/be
[1] docs.google.com/document/d/1JPErnYlBPG26chTuVSnJ_jqW4YkiQhvWn-FxwwsmkEo/edit
[2] sstephenson/prototype/blob/master/src/prototype/lang/object.js#L72
[3] JSFixed/JSFixed#16
Allen Wirfs-Brock wrote:
The commonly used semantics of ES5's bind did not differ significantly from any other widely used implementation of a Function.prototype.bind method, so replacing one with the other wasn't disruptive.
Could be, but there were differences:
I think you're on thin ice arguing this was so much less significant than Object.extend (or let's say Object.update).
Object.extend and the similar but differently named or located framework functions are not nearly as well aligned in their core semantics.
First, "differently named" applied to bind precursors, e.g. Dojo's hitch.
Second, here's a post from jresig years ago:
This is out of date, but note how for-in is used in all cases. There's a lot of common ground here, and some uncommon bits that look not a whole lot bigger or different-in-kind from the bind/hitch/etc. ones we overcame in ES5.
Le 28/07/2012 01:55, Rick Waldron a écrit :
Explanation of specification history and roots in newer DOM mutation mechanism.
AWB: Is this sufficient for implementing DOM mutation event mechanisms?
RWS: Yes, those could be built on top of Object.observe
Probably I must be misreading the proposal (again), but if you take a JS DOM project where almost all attributes are handled via getters/setters, how can we observe something?
You can still do useful things even without access to the public name, though, as long as you can still forward to the target and get the result back. This allows you to instrument the action and associate it with something unique even if you don't have a way to access the name, and that's valuable even outside of the trap. However, if there's no way to forward it correctly, then the trap can't really exist at all anyway.
On Jul 30, 2012, at 2:56 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
The commonly used semantics of ES5's bind did not differ significantly from any other widely used implementation of a Function.prototype.bind method, so replacing one with the other wasn't disruptive.

Could be, but there were differences:
I think you're on thin ice arguing this was so much less significant than Object.extend (or let's say Object.update).
Perhaps. I think the most important compatibility situation is when the old and new names are the same, for example Function.prototype.bind or Object.extend. I understood Function.prototype.bind to have been more common before ES5 than it may have really been. However, in my reading of the MooTools docs (mootools.net/docs/core/Types/Function#Function:bind ) it sounds very similar to ES5 bind.
Object.extend and the similar but differently named or located framework functions are not nearly as well aligned in their core semantics.
First, "differently named" applied to bind precursors, e.g. Dojo's hitch.
Second, here's a post from jresig years ago:
This is out of date, but note how for-in is used in all cases. There's a lot of common ground here, and some uncommon bits that look not a whole lot bigger or different-in-kind from the bind/hitch/etc. ones we overcame in ES5.
The JSFixed proposal uses getOwnPropertyNames rather than for-in, and I can't imagine that we would adopt semantics that copied inherited properties the way a for-in-based implementation does. Similarly, I can't imagine that we wouldn't correctly handle accessors. If a new method has to be polyfillable back to ES3, then its semantics needs to be more limited. A much better job can be done if you only have to polyfill for ES5. But that doesn't really provide anything new. If it is important, why isn't somebody in the community evangelizing a sound de facto standard ES5-level extend-like function that all frameworks could adopt? TC39 isn't necessary for such a thing to be widely adopted.
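The semantic gap Allen points at can be made concrete with two illustrative extend variants (hypothetical helper names, not any framework's actual code): a for-in/assignment copy picks up inherited properties and snapshots getters, while a getOwnPropertyNames/descriptor copy is own-only and preserves the accessor itself:

```javascript
// extendForIn: the pre-ES5 pattern -- enumerates inherited keys and
// invokes getters, storing their current values as data properties.
function extendForIn(target, source) {
  for (var key in source) { target[key] = source[key]; }
  return target;
}
// extendOwn: ES5-level -- own properties only, descriptors preserved.
function extendOwn(target, source) {
  Object.getOwnPropertyNames(source).forEach(function (key) {
    Object.defineProperty(target, key,
      Object.getOwnPropertyDescriptor(source, key));
  });
  return target;
}

var source = Object.create({ inherited: 1 });
Object.defineProperty(source, 'now', {
  enumerable: true,
  get: function () { return Date.now(); }
});

var a = extendForIn({}, source); // 'now' became a frozen-in-time data prop
var b = extendOwn({}, source);   // 'now' is still a live getter

console.log('inherited' in a); // true  (copied from the prototype)
console.log('inherited' in b); // false (own properties only)
console.log(typeof Object.getOwnPropertyDescriptor(b, 'now').get); // "function"
```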
Allen Wirfs-Brock wrote:
On Jul 30, 2012, at 2:56 PM, Brendan Eich wrote:
Allen Wirfs-Brock wrote:
The commonly used semantics of ES5's bind did not differ significantly from any other widely used implementation of a Function.prototype.bind method, so replacing one with the other wasn't disruptive.

Could be, but there were differences:
I think you're on thin ice arguing this was so much less significant than Object.extend (or let's say Object.update).
Perhaps. I think the most important compatibility situation is when the old and new names are the same, for example Function.prototype.bind or Object.extend. I understood Function.prototype.bind to have been more common before ES5 than it may have really been. However, in my reading of the MooTools docs (mootools.net/docs/core/Types/Function#Function:bind ) it sounds very similar to ES5 bind.
Read all the docs, not just MooTools. Dojo's hitch takes a string or a function (if string, it looks for that method "in scope"). PrototypeJS didn't forward new attempts to the target. Etc.
Object.extend and the similar but differently named or located framework functions are not nearly as well aligned in their core semantics.
First, "differently named" applied to bind precursors, e.g. Dojo's hitch.
Second, here's a post from jresig years ago:
This is out of date, but note how for-in is used in all cases. There's a lot of common ground here, and some uncommon bits that look not a whole lot bigger or different-in-kind from the bind/hitch/etc. ones we overcame in ES5.
The JSFixed proposal uses getOwnPropertyNames
We were talking about precedent in libraries, not JSFixed, but ok.
rather than for-in, and I can't imagine that we would adopt semantics that copied inherited properties the way a for-in-based implementation does.
It may not matter. The rule has been "Object.prototype is verboten" and the pattern generally uses an object (literal, even), not an array whose prototype has been extended by assignment, as the source. So no proto-pollution occurs in practice.
So I suspect we would be fine spec'ing Object.getOwnPropertyNames. That is on the level of the changes made from progenitor bind-like functions, in reality (due to the best-practices mentioned above).
Similarly, I can't imagine that we wouldn't correctly handle accessors.
Right, but the precedents predate ES5 so this is no surprise. It's sort of like Prototype's not forwarding new, arguably "worse" but hard to say.
If a new method has to be polyfillable back to ES3 then its semantics needs to be more limited. A much better job can be done if you only have to polyfill for ES5. But that doesn't really provide anything new.
Now you're picking a fight. The point is not to provide something new if the use-case would be met by an API that can be polyfilled -- as many use cases can, since the call sites pass object literals.
What the API provides is the ability to do without a transpiler. That's a big deal.
If it is important, why isn't somebody in the community evangelizing a sound de facto standard ES5-level extend-like function that all frameworks could adopt?
We did not insist on such a condition when we put bind into ES5. But now you are definitely rehashing something I thought we were past: we do not require one winner in detail to adopt something. If we did, nothing much would get adopted.
TC39 isn't necessary for such a thing to be widely adopted.
That applies to bind-like functions too and it's irrelevant.
On Mon, Jul 30, 2012 at 2:56 PM, Aymeric Vitte <vitteaymeric at gmail.com> wrote:
Le 28/07/2012 01:55, Rick Waldron a écrit :
Explanation of specification history and roots in newer DOM mutation mechanism.
AWB: Is this sufficient for implementing DOM mutation event mechanisms?
RWS: Yes, those could be built on top of Object.observe
Probably I must be misreading the proposal (again), but if you take a JS DOM project where almost all attributes are handled via getters/setters, how can we observe something?
The point wouldn't be to observe DOM changes directly via Object.observe() from user script. The DOM Mutation API is different in several ways from Object.observe() (different API surface area, different vocabulary of changes, etc.).
One approach would be to have the DOM storage internally be graphs of simple data. The implementation can observe changes and then compute a transform of the data changes it receives into the necessary DOM mutations, which it then broadcasts.
I don't have an opinion of whether it would be a good idea to take this approach (my guess is that standard trade-offs of complexity & memory vs speed would apply). Allen's question was whether it would be possible.
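Object.observe itself never reached a standard, so as a runnable stand-in, the "internal data graph" idea can be sketched with a Proxy set trap producing change records of the kind Object.observe would have delivered:

```javascript
// Sketch: wrap a plain data record so mutations yield change records
// that an implementation could transform into DOM mutation broadcasts.
// (A Proxy stands in for Object.observe, which was never standardized.)
function observed(data, onChange) {
  return new Proxy(data, {
    set(target, key, value, receiver) {
      var oldValue = target[key];
      var ok = Reflect.set(target, key, value, receiver);
      if (ok && oldValue !== value) {
        onChange({ type: 'update', name: key, oldValue: oldValue });
      }
      return ok;
    }
  });
}

var records = [];
var node = observed({ id: 'a' }, function (r) { records.push(r); });
node.id = 'b';
console.log(records); // [ { type: 'update', name: 'id', oldValue: 'a' } ]
```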
2012/7/30 David Bruant <bruant.d at gmail.com>
Le 28/07/2012 15:16, Tom Van Cutsem a écrit :
We still want proxies to intercept private names. It may be that the proxy handler knows about the private name, in which case it has the "capability" to usefully intercept access to it.
But it may be that the proxy doesn't know the private name for the very reason that the name holders do not want any proxy to know it. In that situation, why would the proxy trap be called? The proxy cannot make any constructive use of the public part without the private counterpart, apart maybe from storing all public names it can and waiting for a private name leak. Not trapping offers security by default.
[analysis snipped]
Is there a disagreement on my analysis? Is there a benefit in the "public counterpart" design I'm forgetting?
I'm open to the idea of just not trapping private names (it would certainly simplify things), but like any part that proxies cannot virtualize, what would be the implications on self-hosting APIs? Of course, since private names don't yet exist, APIs such as the DOM do not make use of it. But we cannot guarantee that all APIs worth intercepting/virtualizing in the future will not make use of private names, only unique names.
On the other hand, if we automatically forward private name access (no trapping), we should be aware that such accesses would pierce membranes. One could argue that the private name should never leak through the membrane in the first place. If private name access can be trapped, membrane proxies can throw when an attempt is made to access a private name the membrane doesn't know.
You do seem to suggest that the current design unnecessarily elevates the risk of a private name leak by allowing trapping. A proxy can store all the .public objects it wants, that doesn't give it any more power. The confinement of the private name never rests on the confinement of the name.public property. I see no elevated risk of leaks in the current proposal due to a proxy hoarding public objects.
2012/7/31 Brandon Benvie <brandon at brandonbenvie.com>
You can still do useful things even without access to the public name, though, as long as you can still forward to the target and get the result back. This allows you to instrument the action and associate it with something unique even if you don't have a way to access the name, and that's valuable even outside of the trap. However, if there's no way to forward it correctly, then the trap can't really exist at all anyway.
No, if a handler intercepts a private name access for a private name it doesn't know, it has no way of forwarding the access in such a way that it can still intercept the result. That would allow the handler to read or change the value of a private name it doesn't know.
The only way a handler can forward a private name access is by returning undefined from its getName trap. The forwarding at that point is done by the proxy, with no further interaction with the handler. The handler doesn't get to change the value returned from target[name]. This is crucial.
AFAICT, the only two useful things a handler can do when it intercepts a private name it knows nothing about are:

- ask the proxy to forward
- throw
If the handler does know of the private name, or the name is unique, then the handler can do the forwarding itself and intercept/change/decorate the result as usual.
It's definitely a concern of mine that proxies have the necessary tools to allow for fully wrapping arbitrary object graphs. Is there any case where not being able to trap private names would prevent that goal?
The following was WRT [[Put]]/[[CanPut]] semantic issues:
On Jul 28, 2012, at 6:02 AM, David Bruant wrote:
Le 28/07/2012 14:37, Herby Vojčík a écrit :
... :-/ But that is how it is, no?

That's what the spec says, but V8 has implemented something else (and I haven't seen an intention to change this behavior), so what the spec says doesn't really matter.
David
I have to disagree with David's sentiments here. Situations like this are exactly why we have standardized specifications. Different implementors can easily have differing interpretations about the edge-case semantics of loosely described features. An important role of standards is to align implementations on a common semantics. Sure, an implementation can refuse to go along with the specification, but that is quite rare, at least for ECMAScript, where all major implementations seem to recognize the importance of interoperability. In particular, I haven't seen any indication that V8, as a matter of policy, is refusing to ever correct these deviations.
It's true that what the spec. says makes no difference to the browser bits that have already been shipped. It does make a difference over the long term. Single implementation deviations from the specification usually get fixed eventually. Conformance to the specs. is a motivator for implementors.
We really shouldn't foster the meme that specs don't really matter. They matter a lot.
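The [[Put]]/[[CanPut]] edge case at issue here is the so-called "override mistake": assignment consults the prototype chain, so a non-writable property inherited from the prototype rejects assignment on the receiver, even though defining an own property directly succeeds. A minimal demonstration:

```javascript
// An inherited non-writable property blocks [[Put]] on the receiver...
var proto = Object.defineProperty({}, 'x', { value: 1, writable: false });
var obj = Object.create(proto);

var threw = false;
(function () {
  'use strict';
  try { obj.x = 2; } catch (e) { threw = true; } // TypeError in strict mode
})();
console.log(threw); // true

// ...while [[DefineOwnProperty]] on the receiver itself succeeds:
Object.defineProperty(obj, 'x', { value: 2 });
console.log(obj.x); // 2
```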
David Bruant wrote:
That's what the spec says, but V8 has implemented something else (and I haven't seen an intention to change this behavior), so what the spec says doesn't really matter.
I missed this until Allen's reply called it out. It is both false (Google people at the TC39 meeting last week said it's a V8 bug that is going to be fixed), and a stinky statement of anti-realpolitik. In the current market, if we don't hew to a consensus standard, anything goes.
Not that everyone would make breaking changes, or any changes; just that the presence of a bug or long-standing deviation (in this case copied from JavaScriptCore, which has since fixed the deviation!) does not mean that "what the spec says doesn't really matter."
Or were you snarking at V8?
On Sat, Jul 28, 2012 at 6:02 AM, David Bruant <bruant.d at gmail.com> wrote:
That's what the spec says, but V8 has implemented something else (and I haven't seen an intention to change this behavior), so what the spec says doesn't really matter.
We have a fix for V8 (--es5_readonly) but the Chromium bindings to the DOM still have bugs related to this flag. I plan to have this fixed in the coming weeks.
2012/7/31 Tom Van Cutsem <tomvc.be at gmail.com>
2012/7/30 David Bruant <bruant.d at gmail.com>
Le 28/07/2012 15:16, Tom Van Cutsem a écrit :
We still want proxies to intercept private names. It may be that the proxy handler knows about the private name, in which case it has the "capability" to usefully intercept access to it.
But it may be that the proxy doesn't know the private name for the very reason that the name holders do not want any proxy to know it. In that situation, why would the proxy trap be called? The proxy cannot make any constructive use of the public part without the private counterpart, apart maybe from storing all public names it can and waiting for a private name leak. Not trapping offers security by default.
[analysis snipped]
Is there a disagreement on my analysis? Is there a benefit in the "public counterpart" design I'm forgetting?
I'm open to the idea of just not trapping private names (it would certainly simplify things), but like any part that proxies cannot virtualize, what would be the implications on self-hosting APIs? Of course, since private names don't yet exist, APIs such as the DOM do not make use of it.
To some extent, we can say that they do. dom.js uses _-prefixed properties to discriminate what's private; that's where they would use private names if they had them. It certainly gives a good sense of where private names would be used in a self-hosted DOM. Other libraries use that convention (especially in Node.js, from what I've seen), so that could give an idea.
But we cannot guarantee that all APIs worth intercepting/virtualizing in the future will not make use of private names, only unique names.
True. Flexibility is an interesting and important argument.
On the other hand, if we automatically forward private name access (no trapping), we should be aware that such accesses would pierce membranes. One could argue that the private name should never leak through the membrane in the first place. If private name access can be trapped, membrane proxies can throw when an attempt is made to access a private name the membrane doesn't know.
You do seem to suggest that the current design unnecessarily elevates the risk of a private name leak by allowing trapping. A proxy can store all the .public objects it wants, that doesn't give it any more power. The confinement of the private name never rests on the confinement of the name.public property. I see no elevated risk of leaks in the current proposal due to a proxy hoarding public objects.
I didn't say that; I said that the outcome of a leak is bigger if a proxy can store the public parts and wait for a private name leak, but I realize I was mostly wrong on that point. Public parts do not increase the things one can reach in case of a leak in a meaningful way. They just make it more efficient to search for things.
2012/7/31 Tom Van Cutsem <tomvc.be at gmail.com>
2012/7/31 Brandon Benvie <brandon at brandonbenvie.com>
You can still do useful things even without access to the public name, though, as long as you can still forward to the target and get the result back. This allows you to instrument the action and associate it with something unique even if you don't have a way to access the name, and that's valuable even outside of the trap. However, if there's no way to forward it correctly, then the trap can't really exist at all anyway.
No, if a handler intercepts a private name access for a private name it doesn't know, it has no way of forwarding the access in such a way that it can still intercept the result. That would allow the handler to read or change the value of a private name it doesn't know.
The only way a handler can forward a private name access is by returning undefined from its getName trap. The forwarding at that point is done by the proxy, with no further interaction with the handler. The handler doesn't get to change the value returned from target[name]. This is crucial.
I think I missed the *Name trap design in the notes. Returning [name, value] looks very heavy to me.

If you know a secret once and can prove it once, you can know it and prove it forever (and very likely will), so the API should take that property into account. One idea would be to have a particular property on handlers, like "knownPrivateNames" (which could smartly be expected to be an ES.next Set, or more accurately a WeakSet if this one ever gets mentioned in the spec), and whenever a *Name trap returns for a particular private name, the after-trap checks whether you have the private name in your knownPrivateNames set. That should be enough to prove you know the secret. When you get to a new private name, put it in the knownPrivateNames set.

Even in the "return [name, value]" design, one needs to store known private names somewhere anyway, and it'll likely be on the handler anyway too :-) So it may be a good idea to make this storage "official" and make it a tool to communicate with the JS engine. Maybe the details I propose are not perfect, but I think there is a game-changer in the idea of a handler being able to share with the JS implementation which secrets it knows.
AFAICT, the only two useful things a handler can do when it intercepts a private name it knows nothing about are:

- ask the proxy to forward
- throw
Interestingly, you do not mention the public counterpart here :-)

Digging a bit deeper, from a trap's point of view, if you get to know 2 unique names for which you don't know the private part, then I don't think you can make any use of this information. Can you make a more relevant choice (forward or throw) based on the different unique name identities? I can't think of any now. From a trap's point of view, you just have 2 unique, unforgeable and useless tokens; you can differentiate them thanks to identity, but that's as far as it gets, so I agree with your analysis here.

Certainly trapping for private names, if it's to offer these two choices, is valuable, so I take back the idea of not trapping for private names. But I think I would take a different direction for the trap design. Combined with the above idea of sharing a knownPrivateNames set with the JS engine, what could happen is the following:
- regular get/set/delete/... traps even for unique names and private names you have proven to know (since you have proven to know the private name, they are passed directly, no need for a public counterpart)
- *Name traps when you don't know the private name. This trap doesn't have the public part as an argument (since there is no use for it) but still leaves you the 2 choices of asking to forward or throwing.
What do you think?
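David's knownPrivateNames idea is a hypothetical API, but its effect can be sketched with today's pieces: symbols stand in for private names, and an explicit Set stands in for the engine-side "have you proven you know this name?" check:

```javascript
// Names the handler has "proven to know" go through the regular trap
// path; unknown (private) names are forwarded with no trap logic at all.
const knownName = Symbol('known');
const privateName = Symbol('private');
const knownPrivateNames = new Set([knownName]);

const trapLog = [];
const p = new Proxy({ [knownName]: 1, [privateName]: 2 }, {
  get(target, key, receiver) {
    if (typeof key === 'symbol' && !knownPrivateNames.has(key)) {
      return Reflect.get(target, key, receiver); // silent forward
    }
    trapLog.push(key);                           // regular trap path
    return Reflect.get(target, key, receiver);
  }
});

console.log(p[knownName], p[privateName]); // 1 2
console.log(trapLog.length);               // 1 (only the known name was trapped)
```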
I think my message has been taken the wrong way, so I should clarify it.

From a practical point of view, if 2 implementations differ on one aspect of the language, it means that there is no content relying on either of the 2 implementations for that aspect of the language, whether they follow the spec or both diverge from it differently. Not having content relying on this aspect also opens the door to changing the behavior if it's considered a mistake. This applies to the "override mistake", but every single test262 test failure on any major implementation could also be seen as an instance of such an open door to change the spec.

"What the spec says doesn't really matter" seems very negative taken out of context, but in the context in which I said it, it meant something positive, along the lines of "there is room to improve the specification on this part if necessary". That really is all I meant, no more, no less.
Also... hmm... I wouldn't be spending that much time on standards mailing lists, this one included, if I didn't believe in standardization ;-)
Detailed answer below.
Le 31/07/2012 14:07, Allen Wirfs-Brock a écrit :
The following was WRT [[Put]]/[[CanPut]] semantic issues:
On Jul 28, 2012, at 6:02 AM, David Bruant wrote:
On 28/07/2012 14:37, Herby Vojčík wrote:
... :-/ But that is how it is, no? That's what the spec says, but V8 has implemented something else (and I haven't seen an intention to change this behavior), so what the spec says doesn't really matter.
David

I have to disagree with David's sentiments here. Situations like this are exactly why we have standardized specifications.
I agree. But I'd like to add that situations like this also show the limitations of standardized specifications (more on that at the end)
Different implementors can easily have differing interpretations about the edge case semantics of loosely described features. An important role of standards is to align implementations on a common semantics. Sure, an implementation can refuse to go along with the specification but that is quite rare, at least for ECMAScript where all major implementations seem to recognize the importance of interoperability.
I do the opposite analysis: major implementations recognize the importance of interoperability due to market constraints, thus the need for a standard. Although almost no one talks about it these days, I think the most important part of HTML5 was specifying what's already in some browsers, making clear for the other browsers what to implement to be interoperable.
In particular, I haven't seen any indication that V8, as a matter of policy, is refusing to ever correct these deviations.
It's true that what the spec. says makes no difference to the browser bits that have already been shipped. It does make a difference over the long term. Single implementation deviations from the specification usually get fixed eventually. Conformance to the specs. is a motivator for implementors.
We really shouldn't foster the meme that specs don't really matter. They matter a lot.
I hope I have clarified that I don't buy into the meme that specs don't matter. I was only reacting to the fact that the 2 major implementations differ on one aspect of the spec, making what the spec says on that aspect useless in practice.
Brendan Eich wrote:
I missed this until Allen's reply called it out. It is both false (Google people at the TC39 meeting last week said it's a V8 bug that is going to be fixed)
It's unfortunate this information wasn't in the meeting notes, but I'm glad to hear it :-)
and a stinky statement of anti-realpolitik. In the current market, if we don't hew to a consensus standard, anything goes.
Not that everyone would make breaking changes, or any changes; just that the presence of a bug or long-standing deviation (in this case copied from JavaScriptCore, which has since fixed the deviation!) does not mean that "what the spec says doesn't really matter."
I guess I should have added "here" at the end of my sentence to clarify that I didn't mean that the whole spec doesn't matter, but only the part about [[CanPut]]/[[Put]] that's not interoperably implemented.
Or were you snarking at V8?
I was not.
More on the limitations of standardization I talked about above. As I said, I understand the importance of a standard and I don't buy into the idea that they are useless. I also don't buy into the idea that standards should be seen as written-in-stone documents. We all know that specs sometimes have mistakes in them, and when it's necessary and possible, they are fixed. It was discovered that ES5 had such a mistake [1] and the standard has consequently been fixed. This change, along with implementations following it, means that what the spec said about Object.prototype.toString before the fix did not matter (only the part that was controversial). The fact that it did not matter was actually a prerequisite to being able to change it, because if it did matter, if content relied on that part, the specification couldn't have been changed.
Also, I think there is something much more important than the specification document itself, which is the standardization process. What's happening here (es-discuss and TC39 meetings) is more important than writing a document. What's happening is that major stakeholders come to an agreement. This is far more important than the standard itself. The standard is just the codification of this agreement. For things that have been implemented, it's just a reminder. And for anything that hasn't been implemented yet, it's just a hopeful promise. If it happened that some feature made it into an ECMAScript 6 standard, but not into any implementation, would it matter? If implementations diverge, the standard will have to be fixed anyway, making the previous version not-so-standard. Actually that's a good question: is it planned to have in ES.next-the-standard features that wouldn't be implemented at the time the standard is released?
I often read on Twitter or in blog posts people saying things like "TC39, hurry up!" or complaining that the next version of the standard takes too much time to be released. I think that's misguided. When the standard as a document is released doesn't matter; when implementations are ready does. And as we have seen with WeakMaps, it doesn't take a standard, it takes implementor agreement.
I hope I have made my position more clear.
David
David Bruant wrote:
From a practical point of view, if 2 implementations differ on one aspect of the language, it means that there is no content relying on either of the 2 implementations for that aspect of the language, whether they follow the spec or even both diverge differently from it.
It's not that simple on the web. For instance, longstanding IE vs. Netscape/Mozilla forking, e.g.
if (document.all) { ... } else { ... }
can mean some divergence among the "else" browsers is ok because not relied on there, but not ok in the "then" case.
You're probably right, but we are not making data and accessors asymmetric in the sense that a non-writable data property and a get-only accessor on a prototype object both throw (strict) or silently fail to update the LHS (non-strict) on assignment that would otherwise create a shadowing property in a delegating object.
This was debated at last week's TC39 meeting. Between the desire to preserve this symmetry (not paramount, there are many dimensions and symmetries to consider) and the V8 bug being fixed (and the JSC bug on which the V8 bug was based already being fixed in iOS6), I believe we kept consensus to follow the spec.
On Wed, Aug 1, 2012 at 12:05 AM, Brendan Eich <brendan at mozilla.org> wrote:
(and the JSC bug on which the V8 bug was based already being fixed in iOS6)
Just to nitpick for those following along at home, the bug is fixed in the just-released Safari 6, and @ohunt declined to comment on future products or releases :)
2012/7/31 Brandon Benvie <brandon at brandonbenvie.com>
It's definitely a concern of mine that proxies have the necessary tools to allow for fully wrapping arbitrary object graphs. Is there any case where not being able to trap private names would prevent that goal?
Well, yes, if:
1) proxies wouldn't be able to trap private names, and
2) the private name is accessible to both sides of the membrane,
then the private name could be used to pierce the membrane.
The current consensus is that proxies should trap private names, so 1) is false.
I've been thinking some more about whether we could prevent 2), but the more I think about it, the more it seems to me that private name values, though modeled as (immutable) objects, should be treated by membranes as primitive data and should be passed through unmodified (like numbers and strings). It doesn't make sense for a membrane to wrap a private name, since the only important thing about a private name is its (object) identity, and you lose that by wrapping. A wrapped private name would be useless.
This strengthens the argument that proxies should be able to trap private names.
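Tom's point that membranes should pass name values through unwrapped, like primitives, can be sketched in today's JavaScript. This is an illustrative sketch only: the 2012 private-names proposal never shipped, so symbols stand in for private name values, and the membrane is reduced to a get-only wrapper.

```javascript
// Minimal membrane sketch. Symbols stand in for private name values
// (the closest modern descendant of the private names discussed here).
// Names cross the membrane unwrapped, like numbers and strings, because
// wrapping a name would destroy the identity that makes it useful.
function makeMembrane(target) {
  const wrap = (value) => {
    // Primitives -- including symbols/names -- pass through as-is.
    if (value === null ||
        (typeof value !== "object" && typeof value !== "function")) {
      return value;
    }
    // Objects get wrapped in a proxy that recursively wraps results.
    return new Proxy(value, {
      get(t, key, receiver) {
        // The property key (string or symbol) is used unwrapped;
        // only the resulting object value is wrapped.
        return wrap(Reflect.get(t, key, receiver));
      }
    });
  };
  return wrap(target);
}

// A symbol-keyed property stays reachable through the membrane
// precisely because the symbol itself was never wrapped.
const name = Symbol("private");
const obj = { [name]: 42, child: {} };
const wrapped = makeMembrane(obj);
```

The design choice mirrors the thread: since the only important thing about a name is its identity, a membrane that wrapped it would hand both sides useless distinct tokens.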
On 01/08/2012 00:05, Brendan Eich wrote:
David Bruant wrote:
From a practical point of view, if 2 implementations differ on one aspect of the language, it means that there is no content relying on either of the 2 implementations for that aspect of the language, whether they follow the spec or even both diverge differently from it.
It's not that simple on the web. For instance, longstanding IE vs. Netscape/Mozilla forking, e.g.
if (document.all) { ... } else { ... }
can mean some divergence among the "else" browsers is ok because not relied on there, but not ok in the "then" case.
What an intricate case. By the way, I recall something I learned from @mathias. In Chrome:
console.log(document.all); // shows an object in the console
console.log(typeof document.all); // undefined
console.log('all' in document);   // true
console.log(!!document.all);      // false
Such a thing cannot be represented in pure ECMAScript, not even with proxies. I don't think there is anything that can be done in ECMAScript to fix this, but it's worth sharing this information.
You're probably right, but we are not making data and accessors asymmetric in the sense that a non-writable data property and a get-only accessor on a prototype object both throw (strict) or silently fail to update the LHF (non-strict) on assignment that would otherwise create a shadowing property in a delegating object.
This was debated at last week's TC39 meeting. Between the desire to preserve this symmetry (not paramount, there are many dimensions and symmetries to consider) and the V8 bug being fixed (and the JSC bug on which the V8 bug was based already being fixed in iOS6), I believe we kept consensus to follow the spec.
That's fine. I was only noting that the door was open. There is no reason to be forced to take it. Interoperability is however a good reason to make a choice whatever it is.
2012/7/31 David Bruant <bruant.d at gmail.com>
2012/7/31 Tom Van Cutsem <tomvc.be at gmail.com>
I'm open to the idea of just not trapping private names (it would certainly simplify things), but as with any part that proxies cannot virtualize, what would be the implications for self-hosting APIs? Of course, since private names don't yet exist, APIs such as the DOM do not make use of them.
To some extent, we can say that they do. dom.js uses _properties to discriminate what's private. So that's where they would use private names if they had them. It certainly gives a good sense of where private names would be used in a self-hosted DOM. Other libraries use that convention (especially in Node.js from what I've seen), so that could give an idea.
So there you have it. Good observation!
[...]
I think I missed the *Name trap design in the notes. Returning [name, value] looks very heavy to me. If you know a secret once and can prove it once, you can know it and prove it forever (and very likely will), so the API should take that property into account. One idea would be to have a particular property in handlers, like "knownPrivateNames" (which could smartly be expected to be an ES.next Set, or more accurately a WeakSet if this one ever gets mentioned in the spec), and whenever a *Name trap returns for a particular private name, the after-trap checks whether you have the private name in your knownPrivateNames set. That should be enough to prove you know the secret. When you get to a new private name, put it in the knownPrivateNames set. Even in the "return [name, value]" design, one needs to store known private names somewhere anyway, and it'll likely be on the handler anyway too :-) So it may be a good idea to make this storage "official" and make it a tool to communicate with the JS engine. Maybe the details I propose are not perfect, but I think there is a game-changer in the idea of a handler being able to share with the JS implementation which secrets it knows.
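The knownPrivateNames idea above can be illustrated with a small user-land sketch. To be clear, knownPrivateNames is a hypothetical protocol proposed in this thread, not a real Proxy API, and symbols stand in for private name values.

```javascript
// Illustrative sketch of the *hypothetical* knownPrivateNames protocol
// (not a real Proxy API; symbols stand in for private name values).
const secretName = Symbol("secret");
const unknownName = Symbol("unknown");

// The handler advertises which private names it has proven to know.
const handler = {
  knownPrivateNames: new Set([secretName])
};

// User-land stand-in for the engine's "after-trap" check: consult the
// handler's set before handing the private name to a regular trap.
function engineKnowsHandlerKnows(handler, name) {
  return handler.knownPrivateNames.has(name);
}
```

Under this sketch, a name found in the set would be passed directly to the regular traps; a name not found would route to a *Name trap instead.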
I don't like it. It introduces mutable state into the proxy-handler protocol, which is currently fully functional. The proxy makes a minimum of dependencies on the handler's behavior, and only interacts with it via property access of trap names (crucial for double lifting). Also, since a handler's properties may be mutable, you have to account for the fact that a trap can be updated, thus there is the potential issue of the handler's internal state growing out of date.
It may very well be that handlers will often need to resort to WeakMaps to track additional state, but I'd prefer that to be explicit in the code rather than buried implicitly in the Proxy API.
AFAICT, the only two useful things a handler can do when it intercepts a private name it knows nothing about are:
1) ask the proxy to forward
2) throw
Interestingly, you do not mention the public counterpart here :-) Digging a bit deeper, from a trap point of view, if you get to know 2 unique names for which you don't know the private part,
Hold on, terminology check: unique names wouldn't have a private part. For any unique name n, n.public === n.
then I don't think you can make any use of this information. Can you make a more relevant choice (forward or throw) based on the different unique name identities? I can't think of any now. From a trap point of view, you just have 2 unique, unforgeable and useless tokens; you can differentiate them thanks to identity, but that's as far as it gets, so I agree with your analysis here. Certainly trapping for private names, if it's to offer these two choices, is valuable, so I take back the idea of not trapping for private names. But I think I would take a different direction for the trap design. Combined with the above idea of sharing a knownPrivateNames set with the JS engine, what could happen is the following:
1) regular get/set/delete/... traps even for unique names and private names you have proven to know (since you have proven to know the private name, they are passed directly, no need for a public counterpart)
2) *Name traps when you don't know the private name. This trap doesn't have the public part as an argument (since there is no use for it) but still leaves you the 2 choices of asking to forward or throwing.
What do you think?
Part of the reason why we decided to fork the regular traps into additional *Name traps is that we wanted to keep the "type signature" of the existing traps unmodified. Your proposal 1) would change the type of the "name" argument from String to (String | Name). So a programmer might still need to do a case-analysis in the body of each trap.
With the split traps, the old traps continue to work fine with Strings. The *Name traps work with both private and unique names. If a *Name trap wants to distinguish between private/public names, it can. But the beauty of the |name.public === name| trick for unique names is that all unique names can be effectively treated as private names (Liskov substitutability), so a case-analysis is not always needed.
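The |name.public === name| trick can be modeled with a toy sketch. The name objects below are hypothetical stand-ins (this API never shipped); the point is only to show why a *Name trap never needs a case analysis on unique vs. private names.

```javascript
// Toy model of the name objects discussed here (hypothetical API).
// A private name has a distinct public counterpart; a unique name is
// its own public counterpart, so |name.public === name|.
function PrivateName() {
  return { public: { toString: () => "public-token" } };
}
function UniqueName() {
  const name = {};
  name.public = name; // the trick: a unique name is its own public part
  return name;
}

// A *Name trap is always handed the public part. It proves knowledge
// by returning the matching private part, uniformly for both kinds.
function getNameTrap(publicPart, knownNames) {
  for (const n of knownNames) {
    if (n.public === publicPart) return n; // prove knowledge of n
  }
  return null; // unknown: ask the proxy to forward, or throw
}

const u = UniqueName();
const p = PrivateName();
```

Because a unique name is substitutable for a private one here, the same trap body covers both, which is the Liskov-substitutability point made above.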
On 01/08/2012 09:07, Tom Van Cutsem wrote:
2012/7/31 David Bruant <bruant.d at gmail.com>
2012/7/31 Tom Van Cutsem <tomvc.be at gmail.com> [...] I think I missed the *Name trap design in the notes. Returning [name, value] looks very heavy to me. If you know a secret once and can prove it once, you can know it and prove it forever (and very likely will), so the API should take that property into account. One idea would be to have a particular property in handlers, like "knownPrivateNames" (which could smartly be expected to be an ES.next Set, or more accurately a WeakSet if this one ever gets mentioned in the spec), and whenever a *Name trap returns for a particular private name, the after-trap checks whether you have the private name in your knownPrivateNames set. That should be enough to prove you know the secret. When you get to a new private name, put it in the knownPrivateNames set. Even in the "return [name, value]" design, one needs to store known private names somewhere anyway, and it'll likely be on the handler anyway too :-) So it may be a good idea to make this storage "official" and make it a tool to communicate with the JS engine. Maybe the details I propose are not perfect, but I think there is a game-changer in the idea of a handler being able to share with the JS implementation which secrets it knows.
I don't like it. It introduces mutable state into the proxy-handler protocol, which is currently fully functional.
I partially disagree. One of the reasons I chose Set/WeakSet in my demonstration is that the after-trap code would only call Set.prototype.has (the built-in one, not the dynamic one, for security reasons), leaving the API somewhat fully functional. knownPrivateNames could be made a function with signature Name -> Boolean (which I would prefer), but if the after-trap code calls it with the private name as argument, it leaks the private name, so that cannot work... or maybe there is a way.
An idea to make knownPrivateNames, or rather isPrivateNameKnown, a function while making sure this function doesn't leak private names to the handler would be to require isPrivateNameKnown to be a bound function of the built-in Set.prototype.has. The after-trap can make sure of that by comparing [[TargetFunction]] (which cannot be faked by user code) against the built-in capability. As far as I can tell, it would also work with function proxies if the target is such a bound function, so this is membrane-proof.
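The bound-function variant can be sketched directly, since Set.prototype.has and Function.prototype.bind are real. Note the [[TargetFunction]] comparison itself is something only an engine could perform; user code cannot inspect a bound function's target, so this sketch shows only the handler's side.

```javascript
// Sketch of isPrivateNameKnown as a bound Set.prototype.has, per the
// suggestion above. Because the function can only answer booleans,
// calling it with a private name cannot leak that name to handler code.
// (Verifying the bound target is the engine's job; user code can't.)
const privateNamesIKnow = new Set();
const name1 = Symbol("n1"); // symbols stand in for private name values
const name2 = Symbol("n2");
privateNamesIKnow.add(name1);

const handler = {
  // One-liner: a stateless, boolean-only capability over the set.
  isPrivateNameKnown: Set.prototype.has.bind(privateNamesIKnow)
};
```

Since the bound function closes over the set, the handler object itself can stay stateless and shared across proxies, which is the property Tom asks to preserve later in the thread.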
The proxy makes a minimum of dependencies on the handler's behavior, and only interacts with it via property access of trap names (crucial for double lifting).
The "isPrivateNameKnown" property could also be only interacted with through property access of trap names.
Also, since a handler's properties may be mutable, you have to account for the fact that a trap can be updated, thus there is the potential issue of the handler's internal state growing out of date.
As you're saying below, handlers will often need weakmaps to track additional state, so guarding internal state consistency is already a problem in the current setting.
It may very well be that handlers will often need to resort to WeakMaps to track additional state, but I'd prefer that to be explicit in the code rather than buried implicitly in the Proxy API.
I'm not sure I understand your point. If it's on the handler, it's not buried implicitly. I even argue that if there is an "official" place to put the state, it makes the code more consistent and easier to read.
(Also, isPrivateNameKnown can be extended to WeakMap.prototype.has if necessary)
AFAICT, the only two useful things a handler can do when it intercepts a private name it knows nothing about is: 1) ask the proxy to forward 2) throw Interestingly, you do not mention the public counterpart here :-) Digging a bit deeper, from a trap point of view, if you get to know 2 unique names for which you don't know the private part,
Hold on, terminology check: unique names wouldn't have a private part. For any unique name n, n.public === n.
Sorry, I was being a bit confusing here. I did mean unique name, but from a proxy point of view. When user code tries to [[Set]] a value on a proxy with a private name, the proxy only gets to know a unique name in the current design. That's what I meant by "get to know 2 unique names": "get to know 2 unique names passed by the before-trap code as a translation of the private name... for which you don't know the private part".
then I don't think you can make any use of this information. Can you make a more relevant choice (forward or throw) based on the different unique name identities? I can't think of any now. From a trap point of view, you just have 2 unique, unforgeable and useless tokens; you can differentiate them thanks to identity, but that's as far as it gets, so I agree with your analysis here. Certainly trapping for private names, if it's to offer these two choices, is valuable, so I take back the idea of not trapping for private names. But I think I would take a different direction for the trap design. Combined with the above idea of sharing a knownPrivateNames set with the JS engine, what could happen is the following: 1) regular get/set/delete/... traps even for unique names and private names you have proven to know (since you have proven to know the private name, they are passed directly, no need for a public counterpart) 2) *Name traps when you don't know the private name. This trap doesn't have the public part as an argument (since there is no use for it) but still leaves you the 2 choices of asking to forward or throwing. What do you think?
Part of the reason why we decided to fork the regular traps into additional *Name traps is that we wanted to keep the "type signature" of the existing traps unmodified. Your proposal 1) would change the type of the "name" argument from String to (String | Name). So a programmer might still need to do a case-analysis in the body of each trap.
Why would a programmer do that? Will the built-ins ([[DefineOwnProperty]], [[Get]], etc.) do case-analysis to distinguish strings and names? If they don't, I don't really see why the programmer would. In most cases, one will just forward to Reflect.trap(stringOrName, ...). It's actually very likely that in specifying the default *Name traps, they will have the exact same code as their string counterparts; the only difference will be that the passed values have different types. The argument I'm trying to make is that essential internal methods (as per terminology in [1]) will be polymorphic, and there is no reason why traps shouldn't be.
With the split traps, the old traps continue to work fine with Strings. The *Name traps work with both private and unique names. If a *Name trap wants to distinguish between private/public names, it can. But the beauty of the |name.public === name| trick for unique names is that all unique names can be effectively treated as private names (Liskov substitutability), so a case-analysis is not always needed.
I agree it's a nice property.
David
David Bruant wrote:
By the way, I recall something I learned from @mathias. In Chrome:
console.log(document.all); // shows an object in the console
console.log(typeof document.all); // undefined
console.log('all' in document);   // true
console.log(!!document.all);      // false
Such a thing cannot be represented in pure ECMAScript, not even with proxies. I don't think there is anything that can be done in ECMAScript to fix this, but it's worth sharing this information.
This originated in SpiderMonkey for Firefox 1.0, see
bugzilla.mozilla.org/show_bug.cgi?id=246964
There, I used a cheap heuristic bytecode analysis to distinguish undetected document.all uses, which some content featured (the authors assumed IE only; IE touched 95% market share in 2002), from object-detected uses. The latter must be falsy, but the former could be emulated for greater de-facto web compatibility.
Later, WebKit solved the same problem with a masqueradesAsUndefined flag set on certain objects, rather than code analysis. This is similar to how value objects (strawman:value_objects, bugzilla.mozilla.org/show_bug.cgi?id=749786) can be falsy.
But notice how value objects are immutable and so compare === by shallow-enough value. That is not how document.all works -- it's a mutable magic/live collection.
We might end up standardizing something for value proxies or value objects that allows JS to self-host this undetected document.all emulation hack. No promises, and no rush.
Why would a programmer do that? Will the built-ins ([[DefineOwnProperty]], [[Get]], etc.) do case-analysis to distinguish string and names? If they don't, I don't really see why the programmer would. In most cases, one will just forward to Reflect.trap(stringOrName, ...).
This is the angle I've been looking at it from. The expectation is that the default Reflect handlers will almost always be used to execute actions for proxies. Even if you want to modify the target or parameters that were given you still use Reflect to execute the ultimate action. The only times you don't use Reflect is when you are preventing the action or hosting some virtual object API. Given that fact, most of the time you don't even need to know what the parameters are at all.
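The "traps mostly end in Reflect" observation is easy to demonstrate with the Proxy/Reflect API as it eventually shipped: a forwarding trap observes or adjusts the operation, then delegates the ultimate action, without caring what kind of key it was handed.

```javascript
// Forwarding traps as shipped in ES2015 Proxy/Reflect: observe the
// operation, then let the matching Reflect method do the real work.
const log = [];
const target = { x: 1 };

const proxy = new Proxy(target, {
  get(t, key, receiver) {
    log.push(`get ${String(key)}`);       // observe the access...
    return Reflect.get(t, key, receiver); // ...then forward it
  },
  set(t, key, value, receiver) {
    log.push(`set ${String(key)}`);
    return Reflect.set(t, key, value, receiver);
  }
});

proxy.x;     // read through the proxy
proxy.y = 2; // write through the proxy
```

Note the trap bodies never case-analyze the key; the same code handles string and symbol keys, which is the polymorphism argued for above.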
2012/8/1 David Bruant <bruant.d at gmail.com>
On 01/08/2012 09:07, Tom Van Cutsem wrote:
2012/7/31 David Bruant <bruant.d at gmail.com>
[...]
Maybe the details I propose are not perfect, but I think there is a game-changer in the idea of a handler being able to share with the JS implementation which secrets it knows.
I don't like it. It introduces mutable state into the proxy-handler protocol, which is currently fully functional.
I partially disagree. One of the reasons I chose Set/WeakSet in my demonstration is that the after-trap code would only call Set.prototype.has (the built-in one, not the dynamic one, for security reasons), leaving the API somewhat fully functional. knownPrivateNames could be made a function with signature Name -> Boolean (which I would prefer), but if the after-trap code calls it with the private name as argument, it leaks the private name, so that cannot work... or maybe there is a way.
I should have phrased that differently: currently, the handler can be a stateless/immutable object, and as a result, you can have a single handler handle many proxies. Associating a mutable (Weak)Set by default with each handler destroys that simple model.
[...]
The proxy makes a minimum of dependencies on the handler's behavior, and only interacts with it via property access of trap names (crucial for double lifting).
The "isPrivateNameKnown" property could also be only interacted with through property access of trap names.
True.
Also, since a handler's properties may be mutable, you have to account for the fact that a trap can be updated, thus there is the potential issue of the handler's internal state growing out of date.
As you're saying below, handlers will often need weakmaps to track additional state, so guarding internal state consistency is already a problem in the current setting.
To me, the fact that this special "isPrivateNameKnown" property must be a built-in WeakMap, or some such, to guarantee that the private name doesn't leak, signals that we are just shifting the problem. Now we have yet another kind of interaction point with the handler where we must make sure the private name does not leak.
Also, the gains are not at all clear to me. Are there big savings to be had? In your proposed design, the "after-trap" still needs to verify whether the handler knows the private name, by doing a lookup in some associative data structure. But the trap will most likely already need to do such a lookup itself (the handler will itself likely need to check whether it knows the name). In that case, the handler just returns the private name as a result and the proxy readily verifies it; no extra lookup required.
I can also imagine that some handlers will have a specific private name in scope and can just return it as-is, without ever needing an associative data structure to hold the private name. At that point, your proposed mechanism adds more overhead by requiring that handler to store the value in the map/set, and by making the proxy do an additional lookup.
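Tom's counter-example can be sketched under the [name, value] trap design discussed in this thread (a hypothetical API; symbols again stand in for private names): a handler with one specific name in lexical scope proves knowledge by simply returning it, with no Set or WeakMap anywhere.

```javascript
// Sketch of the [name, value] *Name trap design (hypothetical API):
// the handler proves knowledge of a private name by returning it.
// Here the name sits in lexical scope, so no associative structure
// (Set/WeakMap) is ever needed -- the point of the counter-example.
const myName = Symbol("mine"); // stand-in for a private name value

const handler = {
  // Hypothetical getName trap: handed the public part, it returns
  // [privateName, value], proving it knows the name.
  getName(target, publicPart) {
    return [myName, target[myName]]; // the name is right here in scope
  }
};

const target = { [myName]: "secret-value" };
const [provenName, value] = handler.getName(target, "public-of-mine");
```

Under the knownPrivateNames alternative, this handler would instead have to register myName in a set up front and pay a lookup on every trap, which is the extra overhead being pointed out.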
[...] Part of the reason why we decided to fork the regular traps into additional *Name traps is that we wanted to keep the "type signature" of the existing traps unmodified. Your proposal 1) would change the type of the "name" argument from String to (String | Name). So a programmer might still need to do a case-analysis in the body of each trap.
Why would a programmer do that? Will the built-ins ([[DefineOwnProperty]], [[Get]], etc.) do case-analysis to distinguish strings and names? If they don't, I don't really see why the programmer would. In most cases, one will just forward to Reflect.trap(stringOrName, ...). It's actually very likely that in specifying the default *Name traps, they will have the exact same code as their string counterparts; the only difference will be that the passed values have different types. The argument I'm trying to make is that essential internal methods (as per terminology in [1]) will be polymorphic, and there is no reason why traps shouldn't be.
Ok, I buy the analogy to the internal methods. It simplifies things if you can think of proxy traps as corresponding 1-to-1 with the [[internal]] methods defined on Objects. If the signature of those methods changes, maybe proxy traps should follow suit. I'd like to hear Allen's opinion on this.
It's true that we could make the Reflect.* methods name-aware, at which point methods that operate on property names such as Reflect.defineProperty are no longer just aliases for the ES5 built-ins like Object.defineProperty (for which I doubt that the method signature will change).
I share your concern that proxy handlers will probably need to duplicate logic in the new *Name traps.
2012/8/1 Tom Van Cutsem <tomvc.be at gmail.com>
2012/8/1 David Bruant <bruant.d at gmail.com>
On 01/08/2012 09:07, Tom Van Cutsem wrote:
2012/7/31 David Bruant <bruant.d at gmail.com>
[...]
Maybe the details I propose are not perfect, but I think there is a game-changer in the idea of a handler being able to share with the JS implementation which secrets it knows.
I don't like it. It introduces mutable state into the proxy-handler protocol, which is currently fully functional.
I partially disagree. One of the reasons I chose Set/WeakSet in my demonstration is that the after-trap code would only call Set.prototype.has (the built-in one, not the dynamic one, for security reasons), leaving the API somewhat fully functional. knownPrivateNames could be made a function with signature Name -> Boolean (which I would prefer), but if the after-trap code calls it with the private name as argument, it leaks the private name, so that cannot work... or maybe there is a way.
I should have phrased that differently: currently, the handler can be a stateless/immutable object, and as a result, you can have a single handler handle many proxies. Associating a mutable (Weak)Set by default with each handler destroys that simple model.
True. That's the reason I changed my proposal to have a property that's a function and not a set/map anymore: "An idea to have knownPrivateNames or rather isPrivateNameKnown a function and make sure this function doesn't leak private names to the handler would be to enforce isPrivateNameKnown to be a bound function of the built-in Set.prototype.has. The after-trap can make sure of that by comparing [[TargetFunction]] (which cannot be faked by user code) and the built-in capability. As far as I can tell, it would work also with function proxies if the target is such a bound function, so this is membrane-proof." It would preserve the statelessness and immutability property.
Also, since a handler's properties may be mutable, you have to account for the fact that a trap can be updated, thus there is the potential issue of the handler's internal state growing out of date.
As you're saying below, handlers will often need weakmaps to track additional state, so guarding internal state consistency is already a problem in the current setting.
To me, the fact that this special "isPrivateNameKnown" property must be a built-in WeakMap
It is a function, now.
or some such, to guarantee that the private name doesn't leak, signals that we are just shifting the problem. Now we have yet another kind of interaction point with the handler where we must make sure the private name does not leak.
With the constraint of being a bound function of some predefined built-ins that only return booleans, I think we're good.
Also, the gains are not at all clear to me. Are there big savings to be had?
We are in a situation where user code needs to prove its knowledge of some secret to the engine for it to proceed with some operations. The current proposal asks the handler writer to do all the work him/herself. For that, the before-trap substitutes a private name with a public name, the handler writer needs to do the opposite mapping back if it knows the secret, and the after-trap can proceed. When the same private name comes in, the before-trap has lost all track that the proxy knows this private name, redoes the private -> public substitution, and the trap has to prove again something it has already proven. I agree that it works, but the fact that the handler has no way to say "hey, I know this private name already" seems unfortunate. My proposal suggests moving a bit of the work that the user has to do to a cooperation with the engine. The idea is to provide a way for the proxy to express its knowledge of secrets so that the private->public->private dance is not necessary anymore. If you have a way to prove you know a secret before the trap is called, then the engine can just pass the private name directly, no need for the public part.
The "big saving" is that the public->private mapping and return value
boilerplate code to be put in all name-accepting traps is entirely replaced by a one-liner in the handler definition. Something like "isPrivateNameKnown: Set.prototype.has.bind(privateNamesIKnow)" (privateNamesIKnow is either shared in handler function scopes or as a handler property or something else, left at the discretion of the author). It's likely to be slightly more efficient in both time (no need for the after-trap private name checking, no need for public->private resolution, nor for the 2-properties array) and memory (no need for public counterparts any longer). Also, public->private resolution and returning the right private name seems
more error prone than if you're being handed the private name directly as an argument. Maybe debatable.
One downside is that it's a bit more work for implementors. Also, enforcing a function bound to the built-in Set/WeakSet/Map/WeakMap.prototype.has capability is a bit strong on proxy authors, but I think it's an acceptable constraint if it removes the public counterpart and the need to prove things you have already proven.
From a different point of view, we currently have a communication protocol
between trap and "around-trap" that's scoped at the level of a single trap call: knowledge of whether you know a name has a trap-call lifetime. The engine cannot know that the same trap for the same handler has already proven knowledge of a name, much less whether 2 trap calls of the same handler share knowledge of a given private name. My proposal suggests pushing the scope to the handler level: knowledge of whether you know a name has a (potentially) infinite lifetime, but is still local to the handler. This knowledge scope seems more appropriate to how objects work. But it requires cooperation from the underlying platform.
In your proposed design, the "after-trap" still needs to verify whether the
handler knows the private name
It does not. The before-trap needs to verify, but after that you're good to go: you've proven you know the name, it's passed as an argument directly, and you can play with the target and the private name directly, no need for after-trap mediation.
by doing a lookup in some associative data structure.
Function call, now.
But the trap will most likely already need to do such a lookup itself (the handler will itself likely need to check whether it knows the name).
If the before-trap checks whether you know the name, it can pass the private name directly as an argument. Being called with the private name as an argument implies the trap already knows the name.
In that case, the handler just returns the private name as a result and the proxy readily verifies, no extra lookup required.
I can also imagine that some handlers will have a specific private name in scope and can just return it as-is, without ever needing an associative data structure to hold the private name. At that point, your proposed mechanism adds more overhead by requiring that handler to store the value in the map/set, and by making the proxy do an additional lookup.
If you've put the private name you know in the set, the before-trap is aware of it, can provide it as an argument directly, and you don't need the public->private deciphering. The engine does a lookup so that I don't need
to. The number of lookups seems equivalent.
On Tue, Jul 31, 2012 at 9:05 PM, Brendan Eich <brendan at mozilla.org> wrote:
This was debated at last week's TC39 meeting. Between the desire to preserve this symmetry (not paramount, there are many dimensions and symmetries to consider) and the V8 bug being fixed (and the JSC bug on which the V8 bug was based already being fixed in iOS6), I believe we kept consensus to follow the spec.
For the record, I continue to think this is a bad idea, and that we should lose the symmetry for gains elsewhere. So I'd say we failed to gain consensus to change the spec. Since consensus is needed to change the spec, the spec is likely to remain unchanged in this regard.
Mark S. Miller wrote:
On Tue, Jul 31, 2012 at 9:05 PM, Brendan Eich<brendan at mozilla.org> wrote:
This was debated at last week's TC39 meeting. Between the desire to preserve this symmetry (not paramount, there are many dimensions and symmetries to consider) and the V8 bug being fixed (and the JSC bug on which the V8 bug was based already being fixed in iOS6), I believe we kept consensus to follow the spec.
For the record, I continue to think this is a bad idea, and that we should lose the symmetry for gains elsewhere. So I'd say we failed to gain consensus to change the spec. Since consensus is needed to change the spec, the spec is likely to remain unchanged in this regard.
Fair enough -- sorry I didn't represent this accurately.
But this reminds me to ask: what do you think of Allen's := proposal as the better mustache? I realize it doesn't help the Caja vs. legacy problem.
For non-legacy code, given classes and triangle, I don't see the override mistake as much of a pain point. For co-existence of the override mistake with legacy code, the only reasonable choice we've come up with is code.google.com/p/es-lab/source/browse/trunk/src/ses/repairES5.js#347,
which, as you can see, is painful, slow, and unreliable. But I have to admit that it seems to work well enough in practice.
On 01/08/2012 14:25, David Bruant wrote:
The "big saving" is that the public->private mapping and return value boilerplate code to be put in all name-accepting traps is entirely replaced by a one-liner in the handler definition. Something like "isPrivateNameKnown: Set.prototype.has.bind(privateNamesIKnow)" (privateNamesIKnow is either shared in handler function scopes or as a handler property or something else, left at the discretion of the author). It's likely to be a slightly bit more efficient in both time (no need for the after-trap private name checking, no need for public->private nor for the 2-properties array) and memory (no need for public counterparts any longer). Also, public->private resolution and returning the right private name seems more error prone than if you're being handed the private name directly as an argument. Maybe debatable.
To follow-up on that part, here is a gist with the difference between what the current proposal is and the alternative proposal [1]. The gist focuses only on the parts that would differ from one proposal to the other. Specifically, I've omitted the update of the WeakMap/WeakSet, which is necessary to do manually in both cases.
Besides the reduced boilerplate, the exercise of writing this made me realize that the setName and definePropertyName traps of the current proposal leak the value to be set on the target when the private name is unknown. This makes the protection of getName and getOwnPropertyDescriptorName return values somewhat ridiculous.
Also, I've changed my initial proposal a bit with regard to unknown private names. Now, there is just one trap that's called when that happens. No argument is passed, only the operation as a string ('get', 'set', 'hasOwn', etc.). As Tom noted, the only useful thing one can do when the private name is unknown is throw or forward, so this trap lets you decide based on which operation is being performed (this can probably be useful for read-only proxies)
David
[+samth]
2012/8/2 David Bruant <bruant.d at gmail.com>
To follow-up on that part, here is a gist with the difference between what the current proposal is and the alternative proposal [1]. [...]
Thanks for writing up that gist. Sometimes a piece of code says more than a 1000 words ;-)
Your observation that the value to be set leaks to a setName/definePropertyByName trap is spot-on. It's indeed the dual of protecting the return value in the getName trap. I can imagine a solution that involves the trap returning a callback that will receive the value after it has proven that it knows the private name, but this is really becoming tortuous.
Your proposed alternative is something to consider, although I'm still uncomfortable with the WeakMap.prototype.has.bind mechanic. We should also reconsider the simplest alternative of just not trapping private names on proxies.
Sam, if I'm not mistaken, Racket has both names and proxies (aka impersonators), can you shed some light on how those features interact? Do chaperones/impersonators need to treat names specially?
2012/8/2 Tom Van Cutsem <tomvc.be at gmail.com>
[+samth]
2012/8/2 David Bruant <bruant.d at gmail.com>
To follow-up on that part, here is a gist with the difference between what the current proposal is and the alternative proposal [1]. [...]
Thanks for writing up that gist. Sometimes a piece of code says more than a 1000 words ;-)
I had it clear in my mind, but felt my explanations weren't conveying it, so writing the code sounded like the right solution :-)
Your proposed alternative is something to consider, although I'm still uncomfortable with the WeakMap.prototype.has.bind mechanic.
I have to admit that it's a bit specific as a constraint. The problem is that for security reasons we can't let isPrivateNameKnown be just any function, because an attacker would just set a function and wait for it to be called with the private name. A bound function would guarantee security.
Certainly there are other directions to explore. The goal is to have a proxy-scoped (in "space") and potentially infinite-scoped (in time) shared knowledge of what private names a proxy knows. The knowledge is to be shared with the engine and no one else.
We should also reconsider the simplest alternative of just not trapping
private names on proxies.
You mentioned that if private names aren't trapped, it pierces membranes, so when you want to prevent access to objects in the membrane, you can't for private names. A softer option in the direction of "not trapping" would be to have a privateNameSink for known and unknown names. It leaves the opportunity to not pierce membranes (but I don't know if it's detrimental to other use cases).
2012/8/2 David Bruant <bruant.d at gmail.com>
2012/8/2 Tom Van Cutsem <tomvc.be at gmail.com>
We should also reconsider the simplest alternative of just not trapping
private names on proxies.
You mentioned that if private names aren't trapped, it pierces membranes, so when you want to prevent access to objects in the membrane, you can't for private names. A softer option in the direction of "not trapping" would be to have a privateNameSink for known and unknown names. It leaves the opportunity to not pierce membranes (but I don't know if it's detrimental to other use cases).
Private names would pierce membranes only if they would auto-unwrap the proxy and forward to the target by default. There is another option: attempts to get/set values via a private name on a proxy could just behave as they would on a non-proxy object (i.e. a proxy would have its own storage for private name properties).
This option is consistent with at least two other design choices we've made:
- WeakMaps don't interact with proxies either. You can associate private state with a proxy via a WeakMap without the proxy being able to trap this.
- From our discussion last week, I recall that Object.observe won't be able to observe updates to privately named properties.
Another way of looking at things is that private names combine the receiver's object identity with the private name's identity to produce a value. Since proxies have their own object identity, it follows that keying off of a proxy results in a different value than keying off of any other object.
Membranes would resist private name access by default, unless a membrane proxy explicitly copies its target's privately-named-properties onto itself [one caveat being that it won't be able to keep the copies in-sync, as Object.observe won't reflect updates to these properties].
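The WeakMap analogy above can be demonstrated with today's Proxy and WeakMap: the association keys off the proxy's own object identity, and no trap can intercept it. This is an illustrative sketch, not part of the proposal:

```javascript
const target = {};
const proxy = new Proxy(target, {
  // This trap never fires for WeakMap use: WeakMap operations key off
  // identity and perform no property access on the key.
  get() { throw new Error('trap fired'); }
});

// State associated with the proxy is invisible both to its traps and
// to anyone who only holds the target: distinct identities, distinct keys.
const privateState = new WeakMap();
privateState.set(proxy, 'secret for the proxy');

console.log(privateState.has(proxy));  // true
console.log(privateState.has(target)); // false
```

Under the "own storage" option above, privately-named properties on a proxy would behave the same way: keyed on the proxy's identity, untrappable.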
On Thu, Aug 2, 2012 at 2:00 PM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:
[reordering a little]
Sam, if I'm not mistaken, Racket has both names and proxies (aka impersonators), can you shed some light on how those features interact? Do chaperones/impersonators need to treat names specially?
In Racket, the entire class system, including private names, is built as a library, and doesn't provide any proxying mechanism itself, and the lower-level proxy system can't intercept message sends, since they're implemented abstractly from the perspective of the client. So there isn't a direct analogy here.
However, in Racket you can specify contracts on classes, including contracts on methods which are named by private names. This requires specifying the intercession on a per-method basis, and so to intercede on a private-named method, you have to have the private name to start with.
Your observation that the value to be set leaks to a setName/definePropertyByName trap is spot-on. It's indeed the dual of protecting the return value in the getName trap. I can imagine a solution that involves the trap returning a callback that will receive the value after it has proven that it knows the private name, but this is really becoming tortuous.
Your proposed alternative is something to consider, although I'm still uncomfortable with the WeakMap.prototype.has.bind mechanic. We should also reconsider the simplest alternative of just not trapping private names on proxies.
I agree that the current design is a leak, and that the callback approach is quite heavyweight. However, I don't think we should give up on trapping altogether.
Instead, we could consider some simpler options. Basically, the proxy creator should specify the private names that are to be trapped, and all others are simply forwarded to the target object. I can see a few ways of doing this.
- Add an optional array argument to Proxy.for, which contains private names to trap. If it's omitted, no names are trapped. This also means that for anyone who doesn't explicitly ask for it, the type signatures of the proxy traps remains simple.
- Same as 1, but the array is checked on every trapped operation, so that it can be updated.
- Similar to 2, but there's an operation to add new names to trap, instead of a live array.
1 is clearly the simplest, but doesn't support membranes properly, I think. 2/3 have the drawback that handlers now have state.
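Options 2 and 3 can be mimicked with today's Proxy, using ordinary string keys as stand-ins for private names (the real mechanism would be engine-level; everything here is an illustrative sketch):

```javascript
const trappedNames = new Set(); // the live whitelist (options 2/3)
const log = [];

const p = new Proxy({ open: 1, secret: 2 }, {
  get(target, key, receiver) {
    // Whitelisted names are intercepted; everything else is a plain forward.
    if (trappedNames.has(key)) log.push('trapped: ' + key);
    return Reflect.get(target, key, receiver);
  }
});

p.secret;                   // not whitelisted yet: forwarded silently
trappedNames.add('secret'); // live update, no new proxy needed
p.secret;                   // now intercepted
console.log(log);           // ['trapped: secret']
```

The live update in the middle is exactly what distinguishes options 2/3 from option 1's fixed array.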
On 02/08/2012 15:26, Sam Tobin-Hochstadt wrote:
On Thu, Aug 2, 2012 at 2:00 PM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:
[reordering a little]
Your observation that the value to be set leaks to a setName/definePropertyByName trap is spot-on. It's indeed the dual of protecting the return value in the getName trap. I can imagine a solution that involves the trap returning a callback that will receive the value after it has proven that it knows the private name, but this is really becoming tortuous.
Your proposed alternative is something to consider, although I'm still uncomfortable with the WeakMap.prototype.has.bind mechanic. We should also reconsider the simplest alternative of just not trapping private names on proxies.
I agree that the current design is a leak, and that the callback approach is quite heavyweight. However, I don't think we should give up on trapping altogether.
Instead, we could consider some simpler options. Basically, the proxy creator should specify the private names that are to be trapped, and all others are simply forwarded to the target object.
I still think the proxy should be able to prevent such direct access for unknown private names by throwing. Here is a situation where it's necessary: you give Alice and Bob (both untrusted) access to an object o via a proxy using a membrane. You wish to allow both to communicate only through their access to o (and what results from that communication). When you want to cut their ability to communicate, you want to disable access to anything inside the membrane. If private names unknown to the membrane aren't trapped, then Alice and Bob can still communicate, defeating the goal.
How do they share such a private name if private names are also in the membrane, you may ask. The answer is that private names may not be in the membrane. Tom mentioned that he wanted to make private names unwrappable values and I think that makes sense:
var n = new PrivateName();
var p = new Proxy(n);
Is p a private name too? I don't think so, but that's an interesting discussion to have.
I can see a few ways of doing this.
- Add an optional array argument to Proxy.for, which contains private names to trap. If it's omitted, no names are trapped. This also means that for anyone who doesn't explicitly ask for it, the type signatures of the proxy traps remains simple.
- Same as 1, but the array is checked on every trapped operation, so that it can be updated.
- Similar to 2, but there's an operation to add new names to trap, instead of a live array.
I love this idea. I think an array would be a mistake and a (Weak)Set would be more appropriate, because traversing an array takes linear time, which has no reason to be a constraint in an ES6-compliant platform with WeakSets. YMMV. Maybe any collection could work. I'm inclined to go for 3, and the proxy creator has the possibility to add elements afterwards directly to the shared set via set.add. No need for a specific operation to add new names.
1 is clearly the simplest, but doesn't support membranes properly, I think. 2/3 have the drawback that handlers now have state.
The handler doesn't have state with your proposal; only the proxy acquires a new piece of internal state. But that's fine, I think: the target is a form of state that isn't problematic, and I don't see why this would be.
What’s the best material for reading up on the “override mistake”? This? strawman:fixing_override_mistake
Yup. It's not very much, but since it seems hopeless it's hard to find time to write the rest.
I’m possibly repeating old arguments, but if the “mistake” was fixed in ES6, you could still get the ES5.1 behavior by introducing a setter that throws an exception, right?
yes. Or simply an accessor property without a setter.
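The "override mistake" discussed here is easy to reproduce with plain ES5 semantics: in strict mode, assigning over an inherited non-writable property throws, while [[DefineOwnProperty]] on the receiver succeeds (a small self-contained demo, not anyone's proposed fix):

```javascript
'use strict';
const proto = {};
Object.defineProperty(proto, 'x', { value: 1, writable: false });
const obj = Object.create(proto);

// The override mistake: assignment consults the inherited non-writable
// data property and rejects, even though obj has no own 'x'.
let threw = false;
try { obj.x = 2; } catch (e) { threw = e instanceof TypeError; }
console.log(threw); // true

// [[DefineOwnProperty]] semantics (what := would expose as syntax) still work:
Object.defineProperty(obj, 'x', { value: 2 });
console.log(obj.x); // 2
```

This is the gap that both the repairES5.js workaround and the := proposal are aimed at.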
Thanks for clarifying the Racket design, Sam.
I like the proposed refactoring where David's proposed "isPrivateNameKnown" property essentially becomes an extra argument to the Proxy constructor (let's call it the "name whitelist").
I do agree with David on two points:
- if a name isn't on the name whitelist, the default should not be to forward (this pierces membranes).
- if the name whitelist is to be an updatable (mutable) collection, it should probably be a Set (or WeakSet?). Now, the proxy will need to do a lookup of a private name on the whitelist, so you want to make sure that an attacker cannot provide a whitelist that steals the name during lookup. Two ways to achieve that:
- require that the whitelist be a genuine built-in WeakMap instance.
- don't turn the whitelist into an explicit collection, instead provide 2 built-ins: Proxy.enableName(proxy,name), Proxy.disableName(proxy,name) to implicitly control the whitelist. This gives implementors a lot more freedom in how they store/lookup known private names and sidesteps leaking names through user-defined whitelists.
On 03/08/2012 04:03, Tom Van Cutsem wrote:
Thanks for clarifying the Racket design, Sam.
I like the proposed refactoring where David's proposed "isPrivateNameKnown" property essentially becomes an extra argument to the Proxy constructor (let's call it the "name whitelist"). (... until we call it "moniker/gensym/symbol whitelist" :-p )
- if the name whitelist is to be an updatable (mutable) collection, it should probably be a Set (or WeakSet?). Now, the proxy will need to do a lookup of a private name on the whitelist, so you want to make sure that an attacker cannot provide a whitelist that steals the name during lookup. Two ways to achieve that:
- require that the whitelist be a genuine built-in WeakMap instance.
- don't turn the whitelist into an explicit collection, instead provide 2 built-ins: Proxy.enableName(proxy,name), Proxy.disableName(proxy,name) to implicitly control the whitelist. This gives implementors a lot more freedom in how they store/lookup known private names and sidesteps leaking names through user-defined whitelists.
I'm not sure there is a lot implementors can do with this freedom, but they'll tell us. From an author point of view, it's likely that, for classes for instance, names will be the same for a lot of objects, so we could imagine code as follows:
var [healthp, strengthp] = [new PrivateName(), new PrivateName()];
// is there a way to use generator expressions to make this look better?

class Monster { ... }

var privateNameSet = new WeakSet(); // define and build the set once
privateNameSet.add(healthp, strengthp);

function MonsterProxy(...args) {
    var handler = { ... };
    return new Proxy(new Monster(...args), handler, privateNameSet); // reuse the same set
}

var m1 = new MonsterProxy();
var m2 = new MonsterProxy();
For all the use cases we can come up with (DOM included), I'm confident we can say that this kind of generic definition and reuse of private names will be the 80% case. Here, the set of names is created once and reused for each MonsterProxy. Actually, I don't think an enable/disableName API can be as efficient in terms of memory, mostly because the engine has to rebuild the set internally (since it's expressed one name at a time) and notice "hey, it's the same set, I can reuse memory", with the risk that the set changes for one instance and not the others, forcing the sets to be separated again. It's pretty much the story of hidden classes or shapes: it can work, but it requires a lot of work from JS engines.
Both WeakSet and enable/disableName could work, but WeakSet seems it would be more efficient for the majority of cases.
I think I buy this if we spec WeakSet and require it (and only it, not a trickster impersonator) as the optional third argument. And of course it's a live set.
Alternative: take any arraylike and treat it as a descriptor, not live, whose elements are copied into an internal weak set. Your example never adds to the privateNameSet after it is created. What is the live update use-case?
On 03/08/2012 20:03, Brendan Eich wrote:
I think I buy this if we spec WeakSet and require it (and only it, not a trickster impersonator) as the optional third argument. And of course it's a live set.
Alternative: take any arraylike and treat it as a descriptor, not live, whose elements are copied into an internal weak set. Your example never adds to the privateNameSet after it is created. What is the live update use-case?
It's the same use case as adding new private properties to an object dynamically (not only at object creation time). I admit I have no specific use case in mind, but if it's possible to add new private properties to an object at any time, it should be possible to add new private names to the privateNameSet at any time. Otherwise, the unknownPrivateName trap is called and either the proxy forwards without trapping, potentially losing some bits of information, or the proxy throws and distinguishes itself from an object.
On Fri, Aug 3, 2012 at 8:03 PM, Brendan Eich <brendan at mozilla.org> wrote:
What is the live update use-case?
I think you need live updates for membranes to work right. Imagine if one side creates a private name and passes it to the other side. After that, accesses using that name should go through the membrane, but it couldn't be in the initial set, since it was created dynamically after the membrane started work.
On 29 July 2012 03:58, Brendan Eich <brendan at mozilla.org> wrote:
Allen Wirfs-Brock wrote:
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular.
There is a far longer tradition and a significantly larger body of languages that use = for definition and := for assignment (including all languages in the Algol & Pascal tradition). So going with an inverted meaning in JS sounds like an awful idea to me (as does using Go for inspiration about anything related to declaration syntax ;) ).
On Aug 14, 2012, at 4:20 AM, Andreas Rossberg wrote:
On 29 July 2012 03:58, Brendan Eich <brendan at mozilla.org> wrote:
Allen Wirfs-Brock wrote:
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular.
There is a far longer tradition and a significantly larger body of languages that use = for definition and := for assignment (including all languages in the Algol & Pascal tradition). So going with an inverted meaning in JS sounds like an awful idea to me (as does using Go for inspiration about anything related to declaration syntax ;) ).
About as awful as using [ ] as the indexing operator when every FORTRAN programmer knows that ( ) is how you do subscripting. Not to mention what Smalltalk programmers think [ ] means.
There is value in using familiar looking symbols but I think it is unrealistic to expect common semantics among different languages.
Allen Wirfs-Brock wrote:
On Aug 14, 2012, at 4:20 AM, Andreas Rossberg wrote:
On 29 July 2012 03:58, Brendan Eich<brendan at mozilla.org> wrote:
Allen Wirfs-Brock wrote:
I really think in a language where we have both [[Put]] and [[DefineOwnProperty]] semantics that we really need both = and :=
I can buy that, and I'm glad you mention := as it is not just an assignment operator (e.g. in Pascal or Ada), it's also Go's declare-and-init operator. It has the right characters, fuzzy meaning from other languages, and the critical = char in particular.
There is a far longer tradition and a significantly larger body of languages that use = for definition and := for assignment (including all languages in the Algol& Pascal tradition). So going with an inverted meaning in JS sounds like an awful idea to me (as does using Go for inspiration about anything related to declaration syntax;) ).
About as awful as using [ ] as the indexing operator when every FORTRAN programmer knows that ( ) is how you do subscripting. Not to mention what Smalltalk programmers think [ ] means.
There is value in using familiar looking symbols but I think it is unrealistic to expect common semantics among different languages.
After more soak-time on this, I'm on Andreas's side.
Yes, symbols will be used differently by different languages. No, () for indexing is not expected in modern languages -- Fortran like Disco and the American Drive-In may never die, but it is rare to find in the wild or taught in universities.
Doug's confusion was not unique. We may want syntax for redefinition, but assignment is the dominant trope and it will still be even with := or <- or whatever the syntax might be. Perhaps syntax is not needed so much as Object.define and good docs for when to use it.
I wrote up a strawman that summarizes the discussion on proxies & private names in this thread: strawman:proxies_names
There are still some open issues though.
On Tue, Sep 18, 2012 at 10:12 AM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:
I wrote up a strawman that summarizes the discussion on proxies & private names in this thread: strawman:proxies_names
There are still some open issues though.
I like it! Seems to work pretty well, and the fact that it allows us to actually pass the private name itself around is very nice and simple.
Changing to an unknownPrivateName() trap is interesting. It seems kinda weird to be a trap, rather than just a property on the handler object, though. Is there a good reason to have that be dynamic?
2012/9/18 Tab Atkins Jr. <jackalmage at gmail.com>
Changing to an unknownPrivateName() trap is interesting. It seems kinda weird to be a trap, rather than just a property on the handler object, though. Is there a good reason to have that be dynamic?
Well, you could indeed define it as a simple boolean-valued property (if a use case does require a dynamic check, it could still be implemented as an accessor). It's currently defined as a trap for general consistency (thus far, handlers define nothing but traps).
On 18/09/2012 10:44, Tab Atkins Jr. wrote:
On Tue, Sep 18, 2012 at 10:12 AM, Tom Van Cutsem <tomvc.be at gmail.com> wrote:
I wrote up a strawman that summarizes the discussion on proxies & private names in this thread: strawman:proxies_names
There are still some open issues though.
I like it! Seems to work pretty well, and the fact that it allows us to actually pass the private name itself around is very nice and simple.
Changing to an unknownPrivateName() trap is interesting. It seems kinda weird to be a trap, rather than just a property on the handler object, though. Is there a good reason to have that be dynamic?
A proxy might want to throw on unknownPrivateNames for write traps, but not read traps. As I'm writing that, I realize that I had suggested to have the operation ('get', 'set', 'defineOwnProperty'...) as argument of the unknownPrivateNames trap (but not the arguments of the operation itself), but this isn't in the strawman. That would be the only reason I see to have the unknownPrivateNames as a trap.
Regarding resolvePrivateName+public part, the use case isn't clear. The whitelist allows proxies to express knowledge of a dynamic set of names; I don't really see what more a "resolve" trap enables. As said in the strawman, now that we have the whitelist, we can get rid of the public part of private names, which is one less burden on the shoulders of implementors. In the worst case, if someone comes up with a use case that requires unique names, a public part and an additional argument to unknownPrivateNames can be added in a later version of the spec.
Hello,
maybe I missed something, but how will you secure the whitelist itself? Malicious proxy knowing righteous one can steal its whitelist, afaict.
On 23/09/2012 22:04, Herby Vojčík wrote:
Hello,
maybe I missed something, but how will you secure the whitelist itself? Malicious proxy knowing righteous one can steal its whitelist, afaict.
I'm sorry, I don't understand what you're saying here. Can you be more specific and provide an example of an attack?
As far as I'm concerned, I consider the design secure, because it's possible to easily write code so that only a proxy (or its handler, to be more accurate) has access to its whitelist and nothing else.
2012/9/24 David Bruant <bruant.d at gmail.com>
On 23/09/2012 22:04, Herby Vojčík wrote:
Hello,
maybe I missed something, but how will you secure the whitelist itself? Malicious proxy knowing righteous one can steal its whitelist, afaict.
I'm sorry, I don't understand what you're saying here. Can you be more specific and provide an example of an attack?
As far as I'm concerned, I consider the design secure, because it's possible to easily write code so that only a proxy (or its handler, to be more accurate) has access to its whitelist and nothing else.
Right. Perhaps what Herby meant is that the proxy might provide a malicious whitelist to steal the names being looked up in them. This will be prevented by requiring the whitelist to be a genuine, built-in WeakSet. The proxy will use the built-in WeakSet.prototype.get method to lookup a name in that whitelist, so a proxy can't monkey-patch that method to steal the name either.
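The "genuine, built-in WeakSet" requirement amounts to a brand check. The exact built-in method named in the thread is from a 2012 draft, so this sketch uses today's WeakSet.prototype.has to show the idea (names here are illustrative):

```javascript
// Uncurried built-in lookup: it throws for anything that is not a
// genuine WeakSet, so a fake whitelist never sees the name being tested.
const wsHas = Function.prototype.call.bind(WeakSet.prototype.has);

function isNameOnWhitelist(whitelist, name) {
  return wsHas(whitelist, name); // TypeError for impostor whitelists
}

const whitelist = new WeakSet();
const name = {}; // stand-in for a private name
whitelist.add(name);

console.log(isNameOnWhitelist(whitelist, name)); // true

let rejected = false;
try {
  isNameOnWhitelist({ has: () => true }, name); // trickster object
} catch (e) {
  rejected = true;
}
console.log(rejected); // true: the fake never observed `name`
```

Because the engine captures the built-in method rather than reading `whitelist.has`, monkey-patching or a user-defined `has` gains an attacker nothing.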
On 24/09/2012 10:04, Tom Van Cutsem wrote:
True. I think a lot of that part depends on how WeakSet/Set are spec'ed. It might be possible to accept proxies wrapping WeakSets (which is likely to be helpful with membranes) and perform the check on the target directly, bypassing the proxy traps. Or maybe consider the built-in WeakSet.prototype.get method as a private named method on the weakset instance and only call the unknownPrivateName trap.
David Bruant wrote:
Ah, here was the confusion, the handler has the whitelist, so no attack possible. Sorry for false alarm.
July 23 Meeting Notes
John Neumann (JN), Luke Hoban (LH), Rick Hudson (RH), Allen Wirfs-Brock (AWB), Yehuda Katz (YK), Anne van Kesteren (AVK), Jeff Morrison (JM), Sebastian Markbage (SM), Paul Leathers (PL), Avik Chaudhuri (AC), Ian Halliday (IH), Alex Russell (AR), Dave Herman (DH), Istvan Sebestyen (IS), Mark Miller (MM), Norbert Lindenberg (NL), Erik Arvidsson (EA), Waldemar Horwat (WH), Eric Ferraiuolo (EF), Matt Sweeney (MS), Doug Crockford (DC), Rick Waldron (RW)
JN: Brendan will be here tomorrow
Introductions.
Agenda
(tc39/agendas/blob/master/2013/07.md)
Discussion about getting agenda items in earlier, general agreement.
AWB: Clarifying #5 Open Issues (previously presented as a slide deck, now easier to track)
Consensus/Resolution
Will continue to use Github as the agenda tool, with the constraints:
- Agenda should be "locked in" 1 week prior to meeting
- Agenda URL will be posted to es-discuss immediately following the current meeting
- Allen has running "Open Items" section.
Corrections:
- What to expect in the RFTG mode
- Add JSON document to #9 JSON
Agenda Approved.
JN: Welcomes new Facebook participants
4.1 ES6 Status Report
(Allen Wirfs-Brock)
AWB: Draft published, Revision 16
- Backed out the Symbols as non-wrapper types from Revision 15
- Section items renumbered for clarity
- Want to re-org/re-order the entire document
- Primarily re-order the lexical grammar and syntax (currently out of order by 3 sections)
LH: (asked for motivation)
WH: Noticed...
- Changed for-in to disallow left side assignment expression.
- Syntax for arrow doesn't propagate the NoIn-ness of grammar rule. A NoIn arrow grammar production expands into a sequence that ends with a non-NoIn expression. If we hadn't changed for-in to disallow left side initializers, this would break the grammar by allowing in's to leak into a NoIn expression. However, we have changed for-in to disallow left side initializers. Given that, the regular/NoIn syntax rule bifurcation is now pointless. We have an opportunity to simplify and regularize the grammar here.
AWB: Will look into removing the NoIn productions.
LH: This was discussed recently on es-discuss (need reference)
AWB:
- Further rationalization of alphabetizing Chapter 15
- Reminder that people should use the bug tracker for spec revision issues.
- Implementor feedback will be prioritized
LH: Re: initializer argument with an "entries" property will be used to create the entries in the Map (7.a)
"Let hasValues be the result of HasProperty(iterable, "entries")."
AWB: Explains the rationale of creating a new map from an existing map
var m1 = new Map();
var m2 = new Map(m1);
LH/DH/YK/RW: Should be:
var m1 = new Map();
var m2 = new Map(m1.entries());
EA: Should just use @@iterator, which is entries, but without explicitly checking for a property called entries.
DH: Advocate for a uniform API: test for existence, assume it's iterable over 2-element array-likes, and initialize.
MM: Have we decided on the convention that an iterator of X is also an iterable of X? A map.entries() gives you an iterator.
YK: map is already an iterable
DH: Should make sense to pass an iterator to Map
AWB: All the built in iterators are also iterables
DH: Agree, though this has been debated
WH: What happens...
new Map([ "abc", "defg", "hi" ]);
new Map([{ 1: 10, 0: 20 }]);
BE: The first one makes a map mapping "a" → "b", "d" → "e", "h" → "i". The second one makes a map of 20 → 10.
AWB: The algorithm for Map should check for entries to be Object
DH:
MM: I don't think we should special case for string
AR: Agree, but not with example
MM: Making a special case for String seems like an odd decision
AR: In the case of i18n where we can't change the code point... you can imagine having a string map, but if I can just pass in a string.
... Don't object, just exploring
AWB: Objecting. What use case can you imagine where programmers intend for strings to be array-like?
MM: None reasonable
...
MM: Question about value objects. If the value object responds to Get(0) or Get(1)
WH: with Mark, don't want special tests for different types
LH: If I do...
new Map([ 1, 2, 3 ]);
I will get undefined, undefined, undefined, which is a stronger case for making the check
DH: +1
WH: Elsewhere, we've gone to detect duplicate errors
AWB: Checking for duplicates will duplicate the cost
MM: The impl of any hash table will require a test for duplicate keys
AWB: What about key, values that have been collected over time?
MM: There are use cases for duplicate key checks
LH: Historically, we make duplicate checks when it's syntactic, and this is the first time we're trying to apply duplicate checks to runtime semantics
MM: If something you didn't expect happens once, i'd much prefer an error
YK/RW: That's bad for the web
RW: Map would become a "try/catch" case
... mixed discussion about the precedent for loud or quiet handling
WH: Are there any other constructor that throw when created "incorrectly"?
RW: In non-strict mode, a program can create an object with all duplicate keys and never fail in production
...
MM:
AC: Creation can be the least requirement for what it takes to create a map. Taking an arbitrary structure and making a map from it is perfectly good semantics to...
LH/MM: Offline conversation about what qualifies for extra defense.
DH: Select cases where it is easy to argue that there are few legitimate uses; ok to have occasional sanity tests. In general, JavaScript does not go out of its way to provide you with defensive mechanisms. It's hard to predict where people are going to get hurt; better to allow them to decide.
WH: Paying for consequences where with doesn't protect against collisions.
AWB: Try to apply my model when writing these algorithms, please try to read the algorithms when they are published
Consensus/Resolution
- Map constructor accepts an optional initializer
- If initializer is undefined, creates an empty map
- If initializer is defined, invoke its @@iterator to create the entries from
- For each entry, check that it is a non-null object; throw if not ("If Type(entry) is not Object, throw a TypeError exception.")
- Pull off the 0 and 1 properties
- Make 0 the key and 1 the value
- No check for duplicate keys
- Remove the explicit check for an "entries" property; this is implied by the check for @@iterator
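The agreed steps can be sketched as follows (initializeMap is a hypothetical helper, not spec text; it just mirrors the bullet points above using Map#set):

```javascript
// Hypothetical sketch of the agreed Map constructor semantics.
function initializeMap(map, iterable) {
  if (iterable === undefined) return map;        // empty map
  for (const entry of iterable) {                // invokes @@iterator
    if (entry === null || typeof entry !== "object") {
      throw new TypeError("Map entry must be a non-null object");
    }
    map.set(entry[0], entry[1]);                 // 0 is the key, 1 the value
  }
  return map;                                    // no duplicate-key check
}

const m = initializeMap(new Map(), [["a", 1], ["b", 2], ["a", 3]]);
console.log(m.get("a")); // 3 — the later duplicate silently wins
```

Note how the "no check for duplicate keys" resolution falls out naturally: Map#set simply overwrites.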
UNRESOLVED
AWB: Will put a note in the spec: "Unresolved: how to handle duplicate keys"
WH: Don't yet have consensus on how to handle duplicates, would like to discuss computed properties
4.3 Array.prototype.values
(Allen Wirfs-Brock, Rick Waldron)
AWB: Ext.js uses a with(...) {}
function f(values) {
with(values) {
...
}
}
f([]);
YK: Means that we can't add common names for common objects?
RW: ...Explained that Ext.js fixed the issues, but face a commercial customer update challenge. In the meantime, it continues to break several large scale sites.
AWB: Brendan's workaround (from post on the list)
values() -> @@values();
keys() -> @@keys();
entries() -> @@entries();
Importing from a module...
values() -> values([]);
keys() -> keys([]);
entries() -> entries([]);
DH: Warming up to the idea of a general purpose protocol, implement your map-like protocol.
WH: But now you need an import
EA/AR/DH: Web breaking... but why not break
AR: Meta property, [[withinvisible]]
(Way too much support for this)
DH: This idea is fantastic
EA: Very useful to the DOM, may not need another bit on the object, maybe just a "whitelist".
MM: A very small list of names that "with" doesn't intercept
YK: Could, but semantically almost the same thing
EA: But without the extra bit on all objects
MM: Don't want to require a new bit for all objects.
DH: Need to fully explore the effects on the rest of the language..
- Blacklist for just Array or all objects?
EA: A blacklist that exists, string names; when you enter with(){}, the blacklist must be checked.
MM: If the base object is Array, if the name is on the whitelist
EA: Have an instanceof check? This problem happens in the DOM with Node
EA/YK/AR: We can actually use this for several use cases.
EA: The issue needs instanceof to check the prototype chain.
AWB: For objects you want to treat this way.
DH: The middle ground...
@@withinvisible, well known symbol
Array.prototype[ @@withinvisible ] = [
"values",
"keys",
"entries"
]
AVK: Might have a more generic name, can be used with event handlers
DH: @@unscopable?
Array.prototype[ @@unscopeable ] = [
"values",
"keys",
"entries"
]
WH/MM/RW/YK: actual clapping
... Mixed discussion about performance. General agreement that penalties for using with are ok.
AWB: Code may override this, but at their own risk. For example
Consensus/Resolution
- @@unscopeable
- A blacklist, an array of string names that will not resolve to that object within with() {}
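What shipped in ES6 is very close to this: the blacklist lives at Array.prototype[Symbol.unscopables], and names it maps to true are skipped during with scope lookup. A small demonstration (the with part runs through indirect eval, since with is only legal in sloppy-mode code):

```javascript
// The shipped blacklist: Array.prototype[Symbol.unscopables].
const blacklist = Array.prototype[Symbol.unscopables];
console.log(blacklist.values, blacklist.keys, blacklist.entries); // true true true

// Because "values" is blacklisted, lookup inside with([...]) skips
// Array.prototype.values and finds the outer binding instead.
const result = (0, eval)(`
  var values = "outer";
  var out;
  with ([1, 2, 3]) { out = values; }
  out;
`);
console.log(result); // "outer"
```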
DH: This is about the extensible web ;)
3 Approval of the minutes from May 2013 (2013/029)
JN: Need to approve the notes...
Are there are any changes to approve?
(none)
Consensus/Resolution
- Approved.
9 JSON
(Doug Crockford)
DC: Gives background re: JSON WG and presents a proposed JSON standard to be submitted to Ecma.
- Please read tonight for review tomorrow
NL: Benefit from reading the JSON mailing list threads.
YK: Will be painful.
AR: This document seems completely unobjectionable
DC: IETF claims abstract formats cannot work
Mixed discussion about consequences.
(Read and be prepared for meeting tomorrow)
4.2 Add fill and copySlice methods to Array.prototype and Typed Arrays
(Allen Wirfs-Brock)
AWB: The Khronos group wants to add methods
- fill a span of a typed array
- move copy, with care for the overlap
Array.prototype.fill (Informal Spec)
Array.prototype.fill = function fill(value, start = 0, end = this.length) {
/*
Every element of the array from start up to but not including end is
assigned value.
start and end are coerced to Number and truncated to integer values.
Negative start and end values are converted to positive indices
relative to the length of the array:
if (start < 0) start = this.length + start
References to start and end below assume that conversion has already
been applied.
If end <= start, no elements are modified.
If end > this.length and this.length is read-only, a RangeError is
thrown and no elements are modified.
If end > this.length and this.length is not read-only, this.length is
set to end.
Array elements are set sequentially starting with the start index.
If an element is encountered that cannot be assigned, a TypeError is
thrown.
Element values are assigned using [[Set]].
The array is returned as the value of this method.
*/
}
Examples
aFloatArray.fill(Infinity);  // Fill all elements with Infinity
aFloatArray.fill(-1, 6);     // Fill all elements starting at index 6 with -1
aFloatArray.fill(1.5, 0, 5); // Fill the first five elements with 1.5
aUint8Array.fill(0xff, -2);  // Place 0xff in the last two elements
[ ].fill("abc", 0, 12)
   .fill("xyz", 12, 24);     // Create a regular array, fill its first dozen
                             // elements with "abc", and its 2nd dozen with "xyz"
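fill survived into ES6 essentially as described, except that the shipped version clamps end to the current length instead of growing the array. A quick check of the agreed behavior on plain arrays:

```javascript
// Array.prototype.fill as shipped in ES6 (end is clamped, not growing):
console.log([0, 0, 0, 0, 0].fill(7));       // [7, 7, 7, 7, 7]
console.log([0, 0, 0, 0, 0].fill(7, 3));    // [0, 0, 0, 7, 7]
console.log([0, 0, 0, 0, 0].fill(7, 0, 2)); // [7, 7, 0, 0, 0]
console.log([0, 0, 0, 0, 0].fill(7, -2));   // [0, 0, 0, 7, 7]
```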
Array.prototype.copySlice (Informal Spec)
Array.prototype.copySlice = function copySlice(target = 0, start = 0, end = this.length) {
/*
The sequence of array elements from start index up to but not including
end index are copied within the array to the span of elements starting
at the target index.
target, start, and end are coerced to Number and truncated to integer
values.
Negative indices are converted to positive indices relative to the
length of the array.
If end <= start, no elements are modified.
If end > this.length, a RangeError is thrown and no elements are
modified.
If target + (end-start) > this.length and this.length is read-only, a
RangeError is thrown and no elements are modified.
If target + (end-start) > this.length and this.length is not read-only,
this.length is set to target + (end-start).
The transfer takes into account the possibility that the source and
target ranges overlap. Array elements are sequentially transferred in a
manner appropriate to avoid overlap conflicts: if target <= start, a
left-to-right transfer is performed; if target > start, a right-to-left
transfer is performed.
If a target element is encountered that cannot be assigned, a TypeError
is thrown and no additional elements are modified.
Sparse array "holes" are transferred just like for other array functions.
The array is returned as the value of this method.
*/
}
Examples
[ 0, 1, 2, 3, 4 ].copySlice(0, 2);
// [ 2, 3, 4, 3, 4 ]
[ 0, 1, 2, 3, 4 ].copySlice(2, 0, 2);
// [ 0, 1, 0, 1, 4 ]
[ 0, 1, 2 ].copySlice(1);
// [ 0, 0, 1, 2 ]
Int8Array.from([ 0, 1, 2 ]).copySlice(1); // RangeError
Int8Array.from([ 0, 1, 2 ]).copySlice(1, 0, 2); // Int8Array 0,0,1
Int8Array.from([ 0, 1, 2 ]).copySlice(0, 1, 2); // Int8Array 1,2,2
Moving data within an array, destructively on the calling array
AWB: Possibly rename copySlice => copyWithin
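The rename stuck: the method shipped in ES6 as Array.prototype.copyWithin, with the same argument order as copySlice but without the length-growing behavior (writes are clamped at this.length):

```javascript
// Array.prototype.copyWithin(target, start, end) as shipped:
console.log([0, 1, 2, 3, 4].copyWithin(0, 2));    // [2, 3, 4, 3, 4]
console.log([0, 1, 2, 3, 4].copyWithin(2, 0, 2)); // [0, 1, 0, 1, 4]
console.log([0, 1, 2].copyWithin(1));             // [0, 0, 1] — no growth
```

The last line is where the shipped method diverges from the informal copySlice spec above, which would have grown the array to [0, 0, 1, 2].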
LH: Should Typed Arrays have the same surface as Array?
DH: Typed arrays better behaving and better performing since they guarantee density. (non-sparse)
- Notes concat as the only array method that expects explicit array-ness
RW: Do we have consensus
DH: Brendan had issue with fill
AWB: Brendan's issue was the similarity with copySlice and he had suggested fillSlice.
DH: Not sure I understand his objection...
Consensus/Resolution
- Agreement in room
- Would like Brendan's input
4.4 Consider deferring ES6 Refutable Matching.
(Allen Wirfs-Brock)
AWB: In March, we agreed to add refutable pattern matching; began working on adding this to destructuring and realized the work involved would be too great, considering the time frame remaining for ES6.
Propose to defer refutable pattern matching.
(whiteboard)
The current spec would attempt to do a ToObject(10); and would throw:
let { a, b } = 10;
What happens when you reference a property that doesn't exist on the object, will throw:
let { a, b } = { a: 10, c: 20 };
To avoid throwing:
let { a, b = undefined } = { a: 10, c: 20 };
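The fail-soft behavior under discussion is what eventually shipped: a missing property binds undefined rather than throwing.

```javascript
// Fail-soft destructuring as shipped in ES6:
const { a, b } = { a: 10, c: 20 };
console.log(a, b); // 10 undefined — b fails soft rather than throwing

// Only null/undefined on the right-hand side throws:
try {
  const { p } = null;
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```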
YK: Removing the question mark breaks the consensus.
AVK: Is it hard to spec the "?" on the outside? Allowing only one level?
AWB: It wouldn't be hard, but it creates a weird language issue.
YK/AWB: It's easy to do in the grammar
LH: What was in the spec, solved 80% of the cases, we moved to a solution for 100% and this will set us back to 20%, which isn't acceptable.
AWB: What happens at the parameter list level?
YK: Ah, there is no place to put the out "?"
DH: Agrees... as long as we have a fail-soft, we're ok (YK/LH/RW agree)
YK: We could make the extra sigil mean refutable.
WH:
let [a, b] = "xyz";
YK: Why would Andreas have argued strongly against a refutable sigil?
DH: I think this will fail without inclusion of Brendan and Andreas
AWB: Andreas is fine with dropping refutable matching
DH: Are you sure?
Current spec is fail soft
As long as Brendan and Andreas are ok with it, we can fall back to fail soft.
AC: The fail soft is consistent with JS behaviour. If you want something stricter, then the problem should be on the right side, not the left side. Otherwise you need to introduce an operator for the left.
AWB: (reads back conversation from Andreas)
DH/YK: He doesn't seem to say anything about returning to fail soft.
LH: I think we've exhausted the conversation
WH: If we don't do it now, the behavior of refutable and regular rules will be inconsistent in the future; i.e., a trivial refutable rule that doesn't actually make use of the refutable features will behave inconsistently with a textually identical nonrefutable rule.
YK: But you'll be able to opt-in to the full set of "refutables"
WH: I think it's uglifying the future.
YK/LH: It is.
DH: There is precedent in Mozilla's destructuring, which doesn't have refutable matching.
LH: If we added the bang which is the strict mode for this and adds the bang in front, opts in.
AWB: The next part...
WH: The string example:
let [a, b] = "xyz";
Should there be implicit ToObject on the right side?
YK: We agreed new String() solves the problem, if that's what you actually wanted to do.
Consensus/Resolution
- No implicit ToObject() on the right side (e.g. the string will throw)
x.x Review of Proposed Features
(Luke Hoban)
Function toString
MM: The one issue about Function toString, discovered since the strawman was written:
Since eval()uating a function declaration or function expression defaults to non-strict, a strict function must present the source code of its body as beginning with a “use strict” directive, even if the original function inherited its strictness from its context. This is the one case where the original local source code of the function is inadequate to satisfy this spec.
YK: Doesn't account for super, either
Discussion about identifiers captured from lexical environments.
Was the lexical environment strict?
Consensus/Resolution
Change wiki: strictness is included in the notion of lexical context. Thus:
- always adequate for toString to preserve the original source
- behavioural equivalence of the result does not require injecting "use strict"
Function name property
(Allen Wirfs-Brock) harmony:function_name_property
AWB: The spec doesn't have mechanisms for capturing the name based on the syntactic context.
LH:
let f = function() {}
...Needs to know "f".
AWB: It's not an insignificant amount of work.
...Conversation Moves towards prioritization.
Modules
LH: Need to know that modules are going to be spec'ed asap.
DH: This is my next item to work on
AWB: Modules are the next most important and significant to address in the spec.
High priority
Standard Modules
DH: Back off on standard modules for ES6, very few things.
Standard Modules:
- Math
- Reflect
YK: All of the built-ins.
RW: If I want to avoid using a tainted built-in, import { Array } from "builtins";
DH: What does this directly solve?
YK/RW: When you want to get a fresh, untainted _____.
AWB: Who will write out the standard modules?
EF/YK/RW can work on this
Mixed discussion about feature dependency.
DH: Luke and I can craft a dependency graph offline.
Binary Data
On track (wiki near complete)
High priority
Regexp Updates
harmony:regexp_look-behind_support
harmony:unicode_supplementary_characters
Low priority
DH: Optimistic that we can get Modules and Binary Data to green (in the spreadsheet)
Proper Tail Calls
DH: Just need to identify the tail position and finish the spec.
AWB: It's just a time consuming project. Property access in tail position? Not tail call.
DH: Safely:
- function call
- method call
Consensus/Resolution
- Plenty of work left.
4.7 Math
(Dave Herman)
DH: Introduces need for 64bit float => 32bit float and projecting back into 64bit float. If we had a way to coerce into 32bit
- Can be simulated with TypedArray (put in a value, coerced, pull out)
- To have a toFloat32
EA: Does JSIDL Need this?
YK: Not that I know of
MM: The real number it designates is a number that is designatable as 64bit
DH: (confirms) If you have a coercion, the implementation could do a 32bit operation
WH: Note that for this to work, you must religiously coerce the result of every suboperation to float32. You can't combine operators such as adding three numbers together.
Given x, y, z are all float32 values stored as regular ECMAScript doubles, the expressions
x+y+z
float32(x+y+z)
float32(float32(x+y)+z)
can all produce different results. Here's an example:
x = 1; y = 1e-10; z = -1;
Computing x+y+z using float32 arithmetic would result in 0. Computing float32(x+y+z) would not.
On the other hand, there is a very useful property that holds between float32 and float64 (but not between two numeric types in general such as float64 and float80), which is that, for the particular case of float32 and float64, DOUBLE ROUNDING is ok:
Given x, y are float32 values, the following identity holds, where +_32 is the ieee single-precision addition and +_64 is the ieee double-precision addition:
float32(x +_64 y) === x +_32 y
And similarly for -, *, /, and sqrt.
Note that this does not hold in general for arbitrary type pairs.
Here's an example of how DOUBLE ROUNDING can fail for other type pairs. Suppose that we're working in decimal (the issues are the same, I'm just using decimal for presentation reasons), and we compute a sum using arithmetic that has four decimal places and then round it to store it into a type that has two decimal places.
Let's say that the sum x+y is mathematically equal to 2.49495:
2.49495 (mathematically exact sum)
Then we get:
2.4950 (properly rounded result of invoking + on the wider 4-decimal-place type)
2.50 (rounded again by coercion to the narrower 2-decimal-place type)
Yet, if we had invoked + on the narrower 2-decimal-place type, we'd instead have gotten the result:
2.49 (mathematically exact sum rounded to the narrower 2-decimal-place type)
AWB: Is the proposal to expose a toFloat32?
DH: Yes and the Math object seems like the obvious place
RH: Also, toFloat16
DH: Long term, the solution will be value objects, but in the near term, this will have benefit much more quickly
WH: Found evidence that the optimizations are safe as long as the wider type is at least double the width of the narrower type plus two more bits: docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html . This is the case for the float32/float64 pair (they're 24 and 53 bits wide respectively), but not in general.
Consensus/Resolution
- Math.toFloat32()
More discussion about where (Math, Number.prototype)
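The coercion agreed here shipped as Math.fround (Math.toFloat32 was the working name). Waldemar's example, run through it — coercing every suboperation is not the same as rounding the final double sum once:

```javascript
const f32 = Math.fround; // shipped name for the toFloat32 coercion
const x = 1, y = 1e-10, z = -1;

// float32 at every step: 1e-10 is lost when added to 1, so the sum is 0
console.log(f32(f32(x + y) + z)); // 0

// one double-precision sum, rounded once: a nonzero value near 1e-10
console.log(f32(x + y + z) !== 0); // true
```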
4.8 Stable Array.prototype.sort
(Norbert Lindenberg)
esdiscuss/2013-June/thread.html#31276
NL: Does anyone know of performance issues that prevent going to stable sort.
DH: Tread lightly, no one wants regression
EA: Libraries implement stable sort today because they need it.
YK: D3
MM: If the answer is that performance is negligible, then we should mandate stable sort. Otherwise, we don't. We need to answer the question first.
Consensus/Resolution
- Deferred
4.9 Time zones 1: Bugs or spec issues?
(Norbert Lindenberg)
Discussion around the semantics of Date
AVK/MM: Be able to create a date instance with a timezone other than the current timezone
MM: ES5 implies a live communication channel into the Date instance
AWB: It's part of the algorithms
MM: We could say that we're going to stand on the ES5 spec.
Consensus/Resolution
- Deferred
4.10 Time zones 2: Time zone as property
(Norbert Lindenberg)
NL: Dean Landolt proposed a property on Date.prototype for the timezone, that all the functions look for, born with the default timezone, but can be changed.
MM: Should be static, like Date.now()
RW: Otherwise there would be Date objects with different timezones.
Consensus/Resolution
- Deferred
Date/Timezone
Proposal 1 AVK: Having Date objects that have timezone as internal data instead of system data.
Proposal 2 NL: Pass time zone information separate from Date (as a parameter to methods)
Consensus/Resolution
- Write a strawman for ES7 (either)
July 24 Meeting Notes
John Neumann (JN), Luke Hoban (LH), Rick Hudson (RH), Allen Wirfs-Brock (AWB), Yehuda Katz (YK), Anne van Kesteren (AVK), Jeff Morrison (JM), Sebastian Markbage (SM), Paul Leathers (PL), Avik Chaudhuri (AC), Ian Halliday (IH), Alex Russell (AR), Dave Herman (DH), Istvan Sebestyen (IS), Mark Miller (MM), Norbert Lindenberg (NL), Erik Arvidsson (EA), Waldemar Horwat (WH), Eric Ferraiuolo (EF), Matt Sweeney (MS), Doug Crockford (DC), Rick Waldron (RW), Rafael Weinstein (RWS), Dmitry Lomov (DL), Brendan Eich (BE), Brian Terlson (BT)
4.6 Binary Data Update
(Dave Herman & Dmitry Lomov)
DH: (Introduces Binary Data) A mechanism for creating objects that guarantee a shape (Struct)
Use case that has become less important, I/O
Dmitry has given use cases where we still want control over endianness
MM: A little surprised by this direction (not objection)
If you have something like a class, do you imagine something like an array buffer per instance?
DH: it's still possible to overlay structs over larger sections
MM: if you want the instances to be gc'able...?
DH: they'd have to have separate backing storage, but those could be GC'd like normal
There's more work we've been doing with Rick and his group at Mozilla on parallelism patterns by describing the shape of the data you're operating on. You still want the ability to do things like parallel iteration over arbitrary JS values, which means things like "object" and "any".
Those are high-fidelity drop-in replacements for JS arrays, but with no holes.
DH: once you introduce pointers, you have to bifurcate the design a bit: opaque vs. non-opaque structs. Don't want to go into details, but will put that on the wiki.
MM: You can use structs to model more classical class object model where all the data is on the instance. (structural types)
YK: Do you imagine people using this in regular JS?
DH: Yes, but if they're writing regular JS, they'll profile and find that they want to use these sorts of structs in hot places.
function ThingBackingStore() {...}
function Thing() {
return new ThingBackingStore();
}
... Becomes something like...
var ThingBackingStore = StructType({
stuff: new ArrayType(object)
});
function Thing() {
var selfie = new ThingBackingStore();
selfie.stuff = ....;
return selfie;
}
WH: What kind of fields can be had?
DH: Type descriptor algebra, set of combinators. (atomic types) Uint8, Uint8Clamped, Uint16, Uint32, float32, float64, ObjectRef, StringRef, "any" new ArrayType(T), new ArrayType(T, )
WH: no uint64?
DH: no.
BE: I'm proposing it separately
WH: what's the difference between ObjectRef and any?
DH: ObjectRef can be an object or null, while any can be any valid JS value.
DH: StringRef is mostly an optimization over any.
EA: What about SymbolRef?
DH: Symbols should be objects. We'll have that discussion later.
AWB: Why isn't there a Uint16Clamped?
DH: That was just needed by canvas.
MM: Only Uint8Clamped?
DH: Yes, compatibility with canvas' ImageData.data.
AR: Y U NO CLAMPED EVERYWHERE!?
var Point = Struct({
x: uint32,
y: uint32
});
var p = new Point({ x: 1, y: 12 });
p.buffer;
p.byteLength;
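StructType never shipped, but the Point example is straightforward to polyfill over an ArrayBuffer, which is roughly what "polyfillable with perfect semantics but not performance" means below. A minimal sketch, assuming uint32 fields only (the real proposal had a full type-descriptor algebra):

```javascript
// Minimal, illustrative polyfill sketch of StructType (uint32 fields only).
function StructType(descriptor) {
  const fields = Object.keys(descriptor); // all fields assumed uint32 here
  function Struct(init = {}) {
    const buffer = new ArrayBuffer(fields.length * 4);
    const view = new DataView(buffer);
    const self = { buffer, byteLength: buffer.byteLength };
    fields.forEach((name, i) => {
      Object.defineProperty(self, name, {
        get: () => view.getUint32(i * 4),
        set: (v) => view.setUint32(i * 4, v), // coerces to uint32 on write
        enumerable: true,
      });
      if (name in init) self[name] = init[name];
    });
    return self;
  }
  return Struct;
}

const Point = StructType({ x: "uint32", y: "uint32" });
const p = new Point({ x: 1, y: 12 });
console.log(p.x, p.y, p.byteLength); // 1 12 8
```

Writes go through the DataView, so out-of-range values wrap modulo 2^32, matching the "guaranteed shape" idea: the backing store, not the property, decides the representation.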
WH: Can you replace the buffer?
DH: No, {[[writable]]: false, [[configurable]]: false, [[enumerable]]: false}
WH: Can you write to the buffer?
DH: Yes
DH: V8 team also wants reflection so that they can check the type and the field offsets.
WH: what if there's an object field in the buffer somewhere?
DH: let me outline what I'm doing, maybe that'll clear it up
if I have:
var q = new Thing();
q.buffer == null; // true
q.byteLength == undefined; // true
One of the things we've talked about is being able to censor access to the buffer at various points.
Let's say some computation is working on a sub-view of a buffer; we need to be able to cordon off access between these parallel things.
We can imagine putting/removing things from the "opaque" state. p.buffer is originally some buffer object, but then I'd be able to set the opaque state. Having an access to the buffer will let me re-set that constraint later, ocap style.
WH: I'd like the type itself to denote if it's buffer-backed or not.
DH: once I can lock down the instances, that doesn't matter so much.
WH: it's one more thing I have to write out at creation time
WH: I'm looking at this as a backing store for classes
DH: my feeling is that it's more type and design consistent to not have this depend on having the property
AR: how do I know it's in the opaque state?
DH: I'm trying to avoid competing with the namespace for property names, so I haven't solved that yet
LH: I think that's weird. I don't think anything in the current draft indicates the per-instance-of struct type that goes along with the type.
BE: that's been diagramed.
DH: We should stratify buffer and byteLength.
WH: are there any other special names besides "buffer" and "byteLength" that you're thinking of adding?
DH: the other major ones are methods that do getting and setting for things like multi-hop getting that avoids intermediate alloc.
var S1 = { foo: , bar: }
var S2 = { ... s1: s1, ... }
var x = new S2();
x.s1.foo
x.get("s1", "foo")
// lots of discussion about structs, prototype chains, and confusion about how long this has been agreed
AR: don't worry so much about this object/buffer split; it's showing up because DH is enamoured of return overloading
DH: the main feedback I'm hearing is that they don't want default stuff in the prototype chain
// AR: what's the controversy?
DH: getOwnPropertyNames() matters and should behave as much as possible like a normal JS object that it's emulating. If it has ergonomic issues that we discover later, so be it, but that's the general approach.
MM: if we find a problem, I'm happy to deal with it -- but I want to take it seriously and not as a detail.
AWB: I think this is extremely promising and there are lots of ways we can leverage this. But I don't see how we can get this into the ES6 spec. So many details. So many interactions.
WH: so there won't be binary data in the spec?
AWB: no, there are TypedArrays.
We might be able to do this as a self-contained thing after ES6.
Speaking editorially, we probably need to accept we won't get this done for Dec.
BE: if we're going to discuss if this is ES6 or ES7, we can have that discussion, but that's not how this started.
LH: this began as a technical update.
AWB: Need to explore the interaction with classes, @@create, etc. to me it's hard to separate some of the tech discussions from the schedule discussions.
DH: Objection to the exaggeration of the "newness" of this conversation.
BE: (Right, we've seen this)
MM: First time I've understood, so first time I'm reacting. Where the buffer is used as an instance itself
AR: No, not the buffer.
MM: What are these called?
DH: Structs, always. These examples are just to show what you can do with them
MM: the idea that these things inherit from the struct-making function's prototype...was that always there?
BE: I'd like to intervene. We're mixing things up still. Can I be blunt? The thing that set mark off was the "I don't have a strong opinion about that". If it's too late for ES6, it's ES7. We need details and there's sentiment in favor of stratification.
...Champions should have a strong opinion about these aspects
These should look like objects as much as possible
DH: Fair, but these weren't on my agenda to talk about. My dismissal was out of unpreparedness for the question.
WH: I would much prefer for this to be in ES6. TypedArrays without Struct seems half-baked
DH: Yes, agree, but the speccing of modules trumps the speccing of binary data.
YK: IIRC, you've said that Struct is polyfillable
WH: What does "polyfillable" mean in this case?
YK: Does not need to be in the spec for user code to use.
DH: Polyfillable in the sense of perfect semantics, but not the level of performance.
DH: yes, but this enables a class of optimizations
LH: it's not strictly enabling the perf optimizations...right?
DH: no, you can't infer that there's an invariant that it won't change.
WH: You cannot know that a property is uint8 for example. It's hard to efficiently allocate just a byte for it if you can't tell that no one will try to write a larger value to it.
DH: I want to make another point which I had to learn along the way: I came to understand that there are 2 large, distinct classes of use-case: 1) efficient representations of memory within pure computation, and 2) serialization and de-serialization. The design constraints are very different. For I/O, you want a LOT of control. And JS is expressive for those things. Doing it in a built-in library is a waste of TC39's time. Where TC39 can add something programmers can't is the memory-internal set of use-cases. Polyfilling only gets you a little bit of benefit there, but if it's built in, the optimizations can happen.
BE: the typed-array use-cases motivated the structs work for efficient overlays.
WH: but why would you polyfill something that doesn't perform or only has TypedArray?
DH: it's better to put it in TC39 even if we don't get full-structs. I'm gonna do the best I can to get it into ES6, but it's important. No reason not to have TypedArrays if we can't get it done, though. Better for us to control the sharp corners than to leave it in Khronos.
DH: I've focused on the in-memory patterns and others probably haven't thought about these as hard. But I haven't even gotten to the stuff that the V8 team brought up: for webgl, since you can have structs and they can be overlaid with buffers, what's the alignment story? TypedArrays haven't had unaligned access. The safest thing to do here is to provide a universal padding scheme that's portable.
DH: gl programmers want the ability to use structs on the JS side, but on the GL side, want the ability to demonstrate exact offsets and optimize for specific systems. They want aligned fields, but they want complete control.
WH: do they want holes? if we use C-padding, they can re-order fields explicitly to get the least total size.
DH: I'm unclear what's specified and not in C
WH: The C language spec states only that the fields must be in ascending address order. However, in order to be able to link code and have it interoperate, every ABI must spec it. As far as I know, they all specify it as "naturally aligned" using greedy allocation: for each field pick the next position that has the correct alignment.
WH: This is sufficient control to allow users to minimize padding to the minimum possible if they want to. I sometimes do this to my C++ classes. All you need to do is to reorder your fields to avoid doing things such as interleaving bytes and float64's.
DH: the proposal from dslomov's group is that the struct ctor can specify offsets:
new ST([
  [0, uint32, "le"],
  ...
]);
MM: dave, can you state the alignment rule?
DH: every leaf type in a struct has natural alignment
WH: this is like most C systems, if you make an array of structs, the elements cannot be misaligned. struct sizes are rounded up, padding them up to their largest alignment to avoid possibly putting them in a misaligned context and breaking the alignment. A struct {x: double, y:uint8} has size 16 instead of 9 so that alignment of the double field works if you create an array of such structs. On the other hand, the struct {x1:uint32, x2:uint32, y:uint8} would have size 12 instead of 9 because it only needs 4-byte alignment.
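WH's "naturally aligned, greedy allocation" rule can be sketched in plain JS. The `SIZES` table and `layout` function here are purely illustrative helpers, not any proposed API; they just reproduce the sizes WH quotes.

```javascript
// Greedy natural alignment: each field goes at the next offset that is a
// multiple of its size; total size is rounded up to the largest alignment
// so arrays of the struct stay aligned.
const SIZES = { uint8: 1, uint16: 2, uint32: 4, float64: 8 };

function layout(fields) { // fields: array of type names
  let offset = 0;
  let maxAlign = 1;
  const offsets = fields.map(type => {
    const size = SIZES[type];
    maxAlign = Math.max(maxAlign, size);
    offset = Math.ceil(offset / size) * size; // pad to natural alignment
    const fieldOffset = offset;
    offset += size;
    return fieldOffset;
  });
  // Round the total size up to the largest field alignment.
  const size = Math.ceil(offset / maxAlign) * maxAlign;
  return { offsets, size };
}

layout(["float64", "uint8"]).size;          // 16, not 9
layout(["uint32", "uint32", "uint8"]).size; // 12, not 9
```

Reordering fields, as WH suggests, is how a user minimizes the padding this rule introduces.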
MM: I heard there was something about a reflective operation for obtaining the layout?
DH: it's work I still have to do.
MM: what object do you get back?
DH: TBD. Tempted to say it looks like the thing you passed in originally.
WH: I don't see bool...deliberate?
DH: yep.
WH: let me go on record as requesting a bool field.
?: Can't you use uint8 for bool?
?: Use object type for bool.
?: What about other types such as different kinds of objects?
WH: bool is different. I am only asking for bool here. A bool is stored in a single byte, so using 8 bytes for an object reference for it would be a substantial cost. A uint8 is not a suitable replacement because one of the major use cases for structs is to use them as an efficient backing store for objects, and getting 0 and 1 instead of false and true would be a really annoying impedance mismatch.
MM: how many bits per bool does an array of bool's contain?
BE: needs to be exactly 1 byte
WH: Don't want to repeat the C++ vector<bool> mistake that tried to pack it to 1 bit/bool and ended up breaking lots of things. The C++ committee now regrets having done this.
9 JSON (Continued)
(Doug Crockford)
DC: Explaining updates made since yesterday
- Reordered Appendix
- Moved ECMAScript mention
The last big change is that Rick suggested removing the security section (10) and I think I agree
AWB: Those don't apply to a spec at this level
DC: Agree, I don't think it belongs
AWB: what's the start symbol of the grammar?
DC: unspecified.
RW: my suggestion was to re-order the appendix to ensure that Value comes first and that anything that is a Value can start the grammar
AWB: I think it should specify a start symbol.
DC: some uses of JSON won't want a bool at the top level
AWB: ..or array, or object. All this describes is the universally valid syntax, and that has to start somewhere.
DC: I don't think it does
AWB: then I don't know how to parse/validate.
YK: I think you do need a root symbol. I've had this exact issue.
AR: C++ experience backs that up. Lots of incompat.
RW: can't we just say that anything that's a value can begin the production?
MM: we have a historical conflict. As I read the ES spec vs. this, we see value vs. object/array.
DC: it should be value, then. How should we specify that?
AWB: just say that any input that can be parsed that uses that start symbol is valid JSON text
MM: we should decide if we want to be compatible with reality or the RFC. Given that we decided on value in ES, we have incompat. Shifting to value as the start symbol would be up-compat with reality and current practice.
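The "reality" MM refers to is ES5's JSON.parse, which already uses "value" as the start symbol, so any value parses at the top level, with whitespace allowed around it (WH's point below):

```javascript
// ES5 JSON.parse accepts any value at the top level, not just
// objects and arrays as the RFC 4627 grammar required.
JSON.parse("3");      // 3
JSON.parse("true");   // true
JSON.parse('"hi"');   // "hi"
JSON.parse("  3\n");  // 3 — leading/trailing whitespace is fine around a bare value
```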
AWB: doesn't say what a "reader" does with this grammar, so can allow/disallow however it wants. There's no universal JSON reader.
MM: JS might accept those things but not give you access to the literal content of the values
NL: this should be based on Unicode code points. We don't know how to convert between things if we don't.
DC: best practice might be to use Unicode, but this grammar doesn't need to.
NL: I differ. We care about computing in this group.
DC: if someone wants to use a 6-bit code to send JSON from a satellite, JSON is agnostic.
AWB/NL: The title says "interchange format", should probably just be a "grammar"
NL: Without reference to Unicode code points, we can't decide which characters Unicode escape sequences are equivalent to, e.g. \uxxxx ===? ö \uxxxx\uxxxx ===? ö
AVK/NL: current best practice is to base document formats on Unicode, e.g. HTML
WH: The description of where whitespace can go is broken. Currently it's described as only being allowed before or after one of the punctuation symbols. That means that "{3}␊" parses but "3␊" does not.
MM: Crock, what purpose does the document serve?
DC: ... i forgot what he said
MM: Comparison to RFC
AVK: Refers to ECMAScript's JSON, not the RFC
AWB: Wants other standards to make references to this normative standard and not some other. ...To avoid the inclusion of anything that isn't the grammar, eg. mime type
MM: The danger of the IETF is trying to go beyond the RFC.
AWB: This document to have the grammar over an abstract alphabet, normative Unicode mapping to that abstract alphabet.
MM: Ok, a two level grammar. Still a refactoring of the RFC and that's ok.
AVK: You don't have to refactor, just define the grammar as a code point stream
note differences between code points and character encodings
MM: Why retreat from the RFC?
DC: There is controversy in that it tolerates unmatched surrogates. Want a standard that avoids that controversy
NL: You can avoid it by talking about code points. The transfer format can outlaw unpaired surrogates, e.g. utf-8.
AVK: Right, utf-8 does not allow lone surrogates, it can only encode Unicode scalar values.
[After the fact note, we neglected escapes here...]
BE: Try to avoid duplicating efforts
DC: This is meant to be a guard against other efforts going out of bounds.
BE/AWB: Need to be addressed:
- Goal symbol
- "e" production
- leading/trailing whitespace
- character sets
NL/AVK: Without describing the character encoding
Discussion about character => code point
AVK: Need to define a representation you're using
AWB: Define the alphabet
AVK: Use the Unicode alphabet, without talking about utf-8, etc.
AWB: DC might be rejecting the discussion of encoding.
DH: The purpose of code point is to avoid talking about encoding.
AR: Why is this so important?
AVK: In order to describe the grammar, you need to define an abstract alphabet, Unicode is sufficient.
MM: This differs from the RFC, which uses code units instead of code points
DC: I will add a sentence in the section about JSON text, that it's a sequence of code points
AWB: Unicode code points
Include an informative note that it doesn't imply a specific character encoding
Character sets:
"JSON text is a sequence of Unicode code points that conforms to this grammar"
Start symbol:
value
MM: Move to declare consensus?
AWB/WH: Doug needs to finish an editorial pass
Consensus/Resolution
- Pending the remaining edits, to be discussed for approval tomorrow.
Test 262 Update
(Brian Terlson) Test262-ES6.pdf
Talk about moving test262 to github.
BT: What do we need to make that happen?
DH: Use CLA for pull requests.
DH: Let's not solve this now. Let's stick to the current process, where only members of TC39/ECMA can approve PRs
Approved to move the source code to GitHub, keeping all the current process for approval of tests.
IS: We will need to have a way to download the tests from ECMA.
IS/AWB: Needs to be an update to the tech report. Describing what the different packages are good for.
BT: In due course backporting will be hard, but in the ES6 timeframe it should be okay.
MM: Are we anticipating backwards incompatible changes to the testing harness?
BT: There might be some, we can probably avoid it.
MM: good, good.
BT: In ES6 a number of tests moved location. We'll create a map and move them programmatically.
AWB: I'm planning on making the reorganization within the next few weeks.
BT: We'll annotate the tests with previous and new locations.
BT: Norbert's proposal is to rename the folders to use English names and remove the multitude of subfolders.
[Discussed moved to organization of the spec.]
AWB: There could be an arbitrary number of chapters in the spec. It's somewhat convenient to be able to say "chapter 15".
BE: Core shouldn't depend on 15.
AWB: Trying to get to that situation.
MM: I don't object to a part 1 / part 2 organization, but I also don't see the point.
MM: Back to tests, I want test directories to match spec chapters.
BT: Agreed.
BT: Contributors: test262:coreteam. Need to expand.
DH: We could use github pages and make test262.ecmascript.org point to that.
DH: BT, you set up a mirror for the site. I will do the DNS switch, call me, maybe.
BT: Given the new algorithms, designating algorithm step is not always feasible. Proposal is to require identifying the section, and maybe the algorithm step.
BT: Ensuring full coverage becomes harder.
[Much discussion on algorithm step designation not minuted.]
YK: This should probably be dealt with at the champion level.
BT: Open issues: How to create cross-host-compatible collateral for realms and scripts?
MM: ??
MM: We might only be testing the slow path of JavaScript engines. Testing things in a loop will help.
[Insert poison attack.]
Consensus/Resolution
- move the repo and test262 web presence to github.com/tc39
5.2 Can computed property names in object literals produce string prop names? Duplicates? (Allen Wirfs-Brock)
esdiscuss.org/topic/on-ie-proto-test-cases#content-5
AWB: Latest revision include computed properties
{ [x]: 1, [y]: 2 }
- Can x evaluate to a string or a symbol? The concern is that people hope to determine the shape of objects by looking at them (but engines also want to determine the shape statically)
- Duplicates?
EA: I thought we allowed any value as a computed property
DH: Yes, and converted with ToPropertyKey
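A sketch of the semantics under discussion: the bracketed expression is evaluated and converted with ToPropertyKey, so strings, symbols, and values that coerce to strings all work (this matches what ES6 engines eventually shipped):

```javascript
// Computed property names: the expression in [ ] is evaluated, then
// converted to a property key (string or symbol).
const s = Symbol("id");
const key = "na" + "me";
const obj = { [key]: "Alice", [s]: 42, [1 + 1]: "two" };

obj.name;   // "Alice"
obj[s];     // 42
obj["2"];   // "two" — the number 2 converts to the string key "2"
```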
Discussion re: duplicate computed property keys
DH: Comes down to "how likely are there going to be consequences?"
WH: I draw a distinction between definition and assignment and I view this example as definition.
EA: If you call defineProperty twice with the same property you do not get an exception. We should have the same semantics for define property in an object literal (and class).
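EA's point can be checked directly: redefining a configurable property via defineProperty does not throw; the later definition simply wins.

```javascript
// Two defineProperty calls for the same key: no exception, second wins.
const o = {};
Object.defineProperty(o, "x", { value: 1, writable: true, configurable: true });
Object.defineProperty(o, "x", { value: 2, writable: true, configurable: true });

o.x; // 2 — no exception on the second define
```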
MM: Often enough, if you get a
MM: The importance of throwing an error at the point
YK:
MM: If these are symbols, the programmer did not intend to override a previous property.
DH: JavaScript practitioners consistently tell us they prefer fail-soft with opt-in. Having said that, we are deploying strict mode more widely.
BE: We don't know, people will write top-level functions.
DH: It's the direction we're heading in.
WH: Main motivation here is consistency. In strict mode we intentionally made having duplicate properties such as {a: 3, a: 4} be an error. That rule is simpler and easier to remember than a rule that constructs another exception such as it being an error unless one of the properties is specified using [].
[debate]
WH: This has nothing to do with allegations of trying to make the language less dynamic. I want the simplest, easiest to remember rules. Had duplicate textual properties been allowed in strict mode, I'd be campaigning to also allow them if they're specified using [].
...
Discussion re: strict mode checking vs non-strict mode checking
There was an assumption that computed properties would disallow duplicates in strict mode (throw)
MM: The failure will frustrate programmers, but the lack of a failure will frustrate programmers. You say some prefer fail soft, I know of some that want fail fast.
DH: Static checks are easy to reason about and easy to justify. Dynamic checks are harder to justify and lead to unpredictable results.
AWB: What happens when x is undefined?
BE: Let's stick to duplicates
It's unlikely to want duplicates
The most common case is that you write two lines like the above and that's what you want
We should not creep into B&D language with runtime errors
Quick poll:
- error, 7
- no error, 6
- abstain, 9
DH: I want to hear a response to Mark's point about being bitten by fail-soft
AR: In Google Wave, you'd get notices that the application "crashed"... the rest of the web doesn't behave that way. Fail soft is seen by some as "wrong" and by others as "going along". I lean more towards "going along".
AVK: In specifying web APIs, authors regularly request fewer errors. Back in 2000's, WebKit had asked for fewer errors...
YK: Identifies committee pathology
EA: Waldemar wanted symmetry with the iterable parameter to the Map constructor regarding duplicate keys.
LH: Do not agree that we need symmetry here.
BE: This is syntax, Map parameter is not symmetry.
Recapping Waldemar's position about consistency with duplicate string keys in Objects in strict mode.
Break.
Post break discussion, after the points made about @@iterator and @@create
Consensus/Resolution
- Strings allowed
- Strict Mode: Duplicates are an Error
- Non-Strict Mode: No error
5.3 Special syntax for __proto__ in an object literal (Allen Wirfs-Brock)
AWB:
{"__proto__": foo}
MM: Uncomfortable with either semantics for this issue
YK: This is new syntax that looks like old syntax
...quotation marks and no quotation marks should do the same thing.
DH: But then there is...
{ ["__proto__"]: foo}
YK: So for dict, we want "__proto__" as a regular property?
MM: Yes
DH: Allows...?
MM: I'm ok with either decision, because equally uncomfortable
DH: What happens today?
{__proto__: foo }
{"__proto__": foo}
Same.
MM: Then the quoted case should remain as is.
BE: The
MM: computed form:
- no compat hazard
- new special case that requires checks
- too much overhead
- syntactically unclear re: outcome.
Consensus/Resolution
- __proto__: magic
- "__proto__": magic
- ["__proto__"]: no magic, just a string
- ["" + "__proto__"]: no magic, just a string
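The resolution, as later implemented in ES6 engines, can be sketched as: a literal `__proto__` key (quoted or not) sets the prototype, while a computed key creates an ordinary own property.

```javascript
const p = { greet() { return "hi"; } };

const a = { __proto__: p };      // magic: sets [[Prototype]]
const b = { ["__proto__"]: p };  // computed: no magic, ordinary own property

Object.getPrototypeOf(a) === p;  // true
Object.getPrototypeOf(b) === p;  // false — b just owns a "__proto__" key
```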
5.4 Are TypedArray instances born non-extensible?
LH: Pretty sure all are extensible.
DH: I'd like them to be not extensible. There's performance benefits. No reason for expandos. Put them on another object. Gecko is not extensible.
WH: Second
DL: Current implementation allows them to be extensible
AR: does anyone think they should be extensible?
crickets
tumble weed
Consensus/Resolution
- TypedArray instances are not extensible.
5.5 concat and typed arrays
(Allen Wirfs-Brock)
AWB: Anything that's an exotic array (i.e. Array.isArray(a) === true) will "spread"
...Gets weird with Typed Arrays
Proposing:
Specifically for concat, we give Typed Arrays a new special concat that auto spreads
MM: The only sensible approach has to be type compatible
DH: concat is badly designed
...If we have to do a new method, we shouldn't try to simulate bad behaviour
Conversation leans toward:
- Two new methods that represent the two behaviours of concat, but as single operations each
- Names need to be added to @@unscopeable
LH: Does not like a new method that does 90% of the same as another existing method
DC: can't we just lean on the new syntax? Use "..." for this:
new Uint8Array([...array1, ...array2]);
AWB: if this isn't heavily optimized, this is going to create a large intermediate object
DH: this might create issues until people get "..." and engines want to stack-allocate args
AWB: not an arg
// agreement
Mixed discussion about general purpose apis that are lacking in the language.
BE: how should we settle those issues?
AR: with science. Look around the world and see what's most common; then put those things in the std lib
Consensus/Resolution
- No concat on Typed Arrays
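DC's spread suggestion above works because typed arrays are iterable, so `...` can splice them into a fresh array of the desired element type:

```javascript
// Concatenating two typed arrays with spread instead of a concat method.
const a1 = new Uint8Array([1, 2]);
const a2 = new Uint8Array([3, 4]);
const joined = new Uint8Array([...a1, ...a2]);

Array.from(joined); // [1, 2, 3, 4]
```

As AWB notes, this builds an intermediate array unless the engine optimizes the pattern.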
5.11 ToUint32(length) and Array.prototype methods
(Allen Wirfs-Brock)
AWB:
let len = Get(o, "length");
let len = ToUint32(len);
If "o" is a Typed Array, this above will
Can duplicate all the algorithms to handle Typed Arrays
Don't want to do that
What is the impact of changing to:
let len = Get(o, "length");
let len = ToInteger(len);
53 bits
DH: Wrapping around?
AWB: Getting there, not done with points
For regular arrays, already there, constrained to uint 32
Only talking about Get, not Put
[].forEach.call({ 1: 1, [Math.pow(2, 32) - 2]: "pow", length: Math.pow(2, 32) - 1 }, e => ...)
Visits only the first (nothing after, because holes)
[].forEach.call({ 1: 1, [Math.pow(2, 32)]: "pow", length: Math.pow(2, 32) + 2 }, e => ...)
Visits the first and last (nothing in between, because holes)
[].forEach.call({ 1: 1, [Math.pow(2, 32)]: "pow", length: Math.pow(2, 32) - 1 }, e => ...)
Readers: That would be really slow and insane. Don't do that.
Propose ToLength()
- return length >= 0 ? Truncate to 2^53-1 : 0;
WH: Note that this only applies to the length property. In other places where the Array method take positions or lengths as parameters, they already call ToInteger, not ToUint32.
MM: Are there places where the operational meaning of -1 is changing?
BE: There's hope to make this change, sure that the only things that will be broken will be tests.
WH: [[Put]] still has a restriction?
AWB: Yes
Typed Arrays are not exotic arrays
BE: Typed Arrays are going to use 64 bit unsigned for the length, gonna be nice.
5.14 keys(), entries() return numbers for array index properties
(Allen Wirfs-Brock)
AWB: Arv filed a bug re: Array iterators, the keys() iterator (as well as entries()). Currently it is specified to use strings for the indexes. It would be better to use numbers.
RW: Agree, the string property would be unexpected.
General agreement
Consensus/Resolution
- keys(), entries() use numbers for array index properties
5.7 Does Object.freeze need an extensibility hook?
(Allen Wirfs-Brock)
AWB:
let y = Object.freeze([1, 2, 3]);
let x = Object.freeze(new Uint8Array([1, 2, 3]));
The second does not really freeze the underlying buffer. So, the following does not work as the array case:
x[1] = 1;
Discussion about the operational understanding of Object.freeze, clarifications by Mark Miller.
Lengthy discussion about Object.freeze
AWB: Proposes the freeze hook
@@freeze
Called before the freeze occurs
DH: I'm pro MOP hooks, but you have to be careful with them.
LH: Don't think it's wrong that the second item (from above) doesn't freeze the data, that's not what Object.freeze does.
WH: Object.freeze is part of the API and should match what reading and writing properties does (at least for "own" properties). Having Object.freeze not freeze the data is bad design
LH: Object.freeze is bad design, Typed Arrays are bad design, we're stuck with them, so what should they do.
DH: (agrees)
MM: Object.freeze is named badly. Other than that, there's nothing bad about its design. Its purpose is to make an object's API tamper proof
LH: (agrees)
AWB: Method that creates frozen data?
Object.freeze(new Date())
Consensus/Resolution
- No @@freeze MOP hook.
5.4 Typed Array MOP behaviours (Continued)
AWB: Talking about the descriptors of the properties of TypedArrays
{value: ?, writable: true, enumerable: true, configurable: false}
MM: Doing a defineProperty on a single one, should throw. Doing a freeze on the whole thing, is allowed.
BE: Right now we throw from freeze
MM: Making these appear like properties in the MOP; throw on individual property changes.
var b = new Uint8Array([1, 2, 3]);
Object.defineProperty(b, "1", {});
// Throws!
BE: This makes sense.
Consensus/Resolution
- Object.defineProperty on Typed Array will throw
- Object.freeze on Typed Array will throw
July 25 Meeting Notes
John Neumann (JN), Luke Hoban (LH), Rick Hudson (RH), Allen Wirfs-Brock (AWB), Yehuda Katz (YK), Anne van Kesteren (AVK), Jeff Morrison (JM), Sebastian Markbage (SM), Alex Russell (AR), Istvan Sebestyen (IS), Mark Miller (MM), Norbert Lindenberg (NL), Erik Arvidsson (EA), Waldemar Horwat (WH), Eric Ferraiuolo (EF), Matt Sweeney (MS), Doug Crockford (DC), Rick Waldron (RW), Rafael Weinstein (RWS), Dmitry Lomov (DL), Brendan Eich (BE), Ian Halliday (IH), Paul Leathers (PL),
5.6 Can let/const/class/function* in non-strict code bind "eval" and "arguments"? (Allen Wirfs-Brock)
AWB: Currently, only var and function have any rules: non-strict is not
YK: Reduce the refactoring hazards
MM: What happens in arrows?
EA: Formal params follow the strict rules (no duplicates, no param named arguments etc), but the bodies are not strict.
RW/BE: Confirm
AWB: If someone writes...
class eval {}
And later moves this to a module...
module "foo" {
class eval {}
}
This will blow up
RW: But the same issue exists if:
function eval() {}
And later moves this to a module...
module "foo" {
function eval() {}
}
MM, WH: We need to make sure that whatever rule we decide on, is the simplest and easiest to remember
BE: Recall the issue of micro-modes
BE: Based on the decision make Arrows non-strict, the same reasoning applies to params
EA: Strict formal parameters are an early error; strict function bodies have different runtime semantics, so those are a refactoring hazard.
AWB: The spec draft uses StrictFormalParameter for ArrowFunction and MethodDefinition.
YK: Easy to get sanity, by opting into modules and classes
RW: The January notes include rationale regarding the boundary of module and class, but not arrow, there is no note about arrow params being implicitly strict mode
AWB: method names in sloppy mode (object literals) do not allow duplicate names.
YK: Seems OK. ... Code may exist that has methods called "eval" or duplicate params named "_"
MM:
- eval & arguments
- duplicate arrow & method params
- duplicate non-data names in object literals
LH: Agrees that these rules should be applied where code opts-in, not by layered addition of language features
MM: Agrees with LH, in terms of the memory burden (developer end). This wont be clear to anyone but us.
- If you're in non-strict, it should act non-strictly
BE/RW: Yes
Various: explored the consequences of allowing duplicate method parameters even in new-style parameter lists when in non-strict mode. That would be the simplest rule, but it would cause too many edge cases for duplicate parameter names in destructuring, rest parameters, etc., so we all agreed not to pursue that approach.
AWB: The rule that we agreed on in the past is that the stricter behavior applies when new syntax forms are involved.
- Depends on form of the parameter list
MM: We need to lower the memory burden
EA: This is going to make it greater
MM: Defending exception for new forms of parameter list.
AWB: More complex set of rules if you allow multiple names in simple parameter lists.
- Duplicate param names not allowed, except for function definitions (things declared with function) with simple parameter lists
MM: That's more complex
Consensus/Resolution
General Rule
- Non-strict code operates in consistently non-strict manner (This covers the let/const/function* cases)
- Exception:
- Only allow duplicate parameter names in simple parameter lists
- Simple parameter lists are defined by those that do not include rest or defaults or destructuring.
Consensus: The name of the ClassDeclaration/ClassExpression follows the strict rules for its name. So it cannot be named "eval" or "arguments". Just like for strict function names.
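The resolved rules can be checked by attempting to compile small sources. `isSyntaxError` is an illustrative helper (its body is compiled sloppy via `new Function`), matching how these rules later shipped in ES6:

```javascript
// Compile a source string in sloppy mode and report whether it throws
// a SyntaxError.
function isSyntaxError(src) {
  try { new Function(src); return false; }
  catch (e) { return e instanceof SyntaxError; }
}

isSyntaxError("class eval {}");            // true — class names follow strict rules
isSyntaxError("function eval() {}");       // false — legacy sloppy behavior kept
isSyntaxError("function f(a, a) {}");      // false — simple parameter list, sloppy
isSyntaxError("function f(a, a = 1) {}");  // true — non-simple list bans duplicates
```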
5.9 Semantics and bounds of Number.isInteger and Number.MAX_INTEGER
(Allen Wirfs-Brock, originally proposed by Doug Crockford?)
AWB: What is the value of MAX_INTEGER
WH; Whatever the largest finite double
DC: But there are two
WH: But I said "double"
DC: That's ambiguous
WH: No
MM: WH is not constraining to the contiguous range.
WH: If you want 2^53, call it something else
MM: Likewise with isInteger ...Propose:
Number.MAX_SAFE_INTEGER = 2^53-1
Number.isSafeInteger => n > -(2^53) && n < 2^53
AWB:
2^53-1, 2^53, 2^53+2
2^53+1 === 2^53
After 2^53, you can add 2
WH: Alternate proposal:
Number.MAX_CONTIGUOUS_INTEGER = 2^53
Number.isContiguousInteger = n => n >= -(2^53) && n <= (2^53);
MM: Gives history of "isSafeInteger"
Caja had a Nat test that tested that a number was a primitive integer within the range of contiguously representable non-negative integers. I used Nat in a small piece of security critical code, to ensure I was doing accurate integer addition and subtraction. Because I was using this definition, Nat admitted 2^53. This introduced a security hole, which escaped notice in a highly examined piece of code which has been published several times and has been the subject of several exercises to do machine checked proofs of some security properties. Despite all this attention and examination, no one caught the vulnerability caused by admitting 2^53. By excluding 2^53, we have the nice invariant that if
isSafeInteger(a), isSafeInteger(b), and isSafeInteger(a+b) are all true, then (a+b) is an accurate sum of a and b.
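The hazard MM describes is observable at the boundary: at 2^53, addition silently loses accuracy, which is why 2^53 itself is excluded from "safe". (Number.isSafeInteger shipped later with exactly this boundary.)

```javascript
const max = Math.pow(2, 53) - 1;  // the proposed Number.MAX_SAFE_INTEGER

max + 1 === max + 2;              // true — 2^53 + 1 rounds back down to 2^53
Number.isSafeInteger(max);        // true
Number.isSafeInteger(max + 1);    // false — 2^53 is not "safe"
```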
WH: OK
DC: Want to call this Integer
WH: Can't call this integer. 2^54 is an integer, just not inside of the contiguous range. Like the concept, but not ok to name it "isInteger", as 2^100 also happens to be an integer.
BE: Agrees with Mark's "Safe"
YK: Easy to explain that integers exist outside of the range
AWB: Current spec checks for mathematical integer
...toInteger makes use of internal ToInteger
MM: Makes sure there is no fractional part?
WH: Yes
WH: If we have toInteger, then we need isInteger or isSafeInteger
AWB:
isInteger, isSafeInteger
MM:
MAX_SAFE_INTEGER = (2^53)-1
isInteger
- Infinity => false
- NaN => false
- value !== truncated value => false
- -0 => true
isSafeInteger
- -0 => true
toInteger
- Does not guarantee a safe integer
ToInteger
- Does not guarantee a safe integer
WH: The only place where ToInteger is divergent is +/-Infinity
WH: We already have Math.trunc, which does the same thing as ToInteger would. Don't need Number.toInteger.
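The isInteger semantics bulleted above match what Number.isInteger eventually shipped as; for example:

```javascript
Number.isInteger(5);         // true
Number.isInteger(5.5);       // false — fractional part
Number.isInteger(-0);        // true
Number.isInteger(NaN);       // false
Number.isInteger(Infinity);  // false
Number.isInteger("5");       // false — no coercion, unlike the global isFinite
```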
5.8 Number.prototype.clz or Math.clz?
WH/AWB: Is an instance operation.
WH: If it's on Math.clz(), it will return the wrong answer if we have different value objects in the future
WH: In particular, this specifies that the value is 32 bits wide, which makes it inappropriate as something in Math. Consider what happens if we add a uint64 type. Then we'd want Uint64.clz to count starting from the 64th bit instead of from the 32nd bit. We can do that if it's Uint64.clz. We can't (without creating weirdness) if we use Math.clz for both.
AWB: Then it belongs on the instance side.
Any objections?
Consensus/Resolution
- Number.prototype.clz
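The operation under discussion, count leading zeros over a 32-bit value, can be sketched in plain JS; `clz32` here is an illustrative model, independent of where the committee puts it:

```javascript
// Count leading zero bits of the ToUint32 of x.
function clz32(x) {
  x = x >>> 0;  // ToUint32
  let n = 32;
  while (x !== 0) {
    n--;
    x >>>= 1;   // shift out one bit per iteration
  }
  return n;
}

clz32(1);           // 31
clz32(0x80000000);  // 0
clz32(0);           // 32
```

WH's point is visible in the hard-coded 32: a uint64 variant would need to start from bit 64 instead.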
AWB: What about the following:
Number.isInteger
Number.isSafeInteger
Number.isFinite
Number.isNaN
Number.toInteger
Consensus/Resolution
Remove Number.toInteger (already exists as Math.trunc)
(Reference: rwldrn/tc39-notes/blob/42cf4dd15b0760d87b35714fa2e417b589d76bdc/es6/2013-01/jan-29.md#conclusionresolution-1 )
5.13 Which existing built-in properties that are
read-only/non-configurable do we want to make read-only/configurable? (Allen Wirfs-Brock)
AWB: Previously, we've discussed setting data properties as {writable: false, configurable: true}
One of the built-in properties discussed is the length property of functions
MM: Points about function properties, eg. the prototype property
EA: Classes are a constructor and the prototype, can't use function for the argument to how classes behave
MM: Don't think this is a question that should be addressed for ES6, it's too late.
AWB: Not too late, we've discussed this
AWB: The "prototype" property of the class constructor object is configurable, non-writable
AWB: {writable: false, configurable: true} allows enough control
EA: We also discussed this for methods
YK: This is part of the refactoring hazard I mentioned earlier.
MM: Don't want to consider a change of that magnitude this late in the game
AWB: All of the existing properties from ES5, should we address the whole list?
When define a class:
(Foo.prototype) -C-> <-P- (Foo)
AWB: Foo.prototype.constructor property {writable: false, configurable: true}?
MM: This hazard:
function Bar() {}
Bar.prototype = Object.create(Foo.prototype);
Bar.prototype.constructor = Bar;
Code that exists like this, once Foo gets refactored to a class, if constructor is non-writable, the above breaks.
AWB: @@create
Array[@@create]
Recap:
@@create sets the [[Prototype]] of the new instance by referencing the prototype property of the constructor itself.
MM: With regard to function.name and function.length, making them "tamper resistant" is fine, but mucking around with the built-in prototype chain has unknown implications and could be addressed in ES7.
This change allows the actual Array.prototype to be changed.
WH: When does @@create get called?
AWB: when new is used.
Consensus/Resolution
{writable: false, configurable: true}?
- length property of functions: yes
- prototype property of functions: no
- new properties, ie. @@create: yes
TC39 + W3C
Discussion joint meeting with W3C at TPAC, Nov 11-15, in Shenzhen, China.
5.1 Symbol primitive value or object? One more time.
(Allen Wirfs-Brock)
EA: There is discontent that there isn't private state. Symbols don't cover this. Unique Strings solve the uniqueness case
Proposal: Postpone Symbols to ES7
BE: The reason we separated private and unique was exposure in Reflection modules
YK: You don't need unique symbols when you can just expose private symbols.
MM: The @@iterator symbol must be transitively immutable
In the relationships case, the WeakMap
BE: There are classes that outlive any instances
Why can't we just have (private) Symbols
MM: Two subsystems that aren't supposed to be able to communicate with each other should be able to share anything that is transitively immutable.
BE: Can we unwind the split between private and unique?
YK: (fill this in)
AWB: We deferred private symbols
Private state should not be mixed up with Private Symbols
Symbols are guaranteed uniqueness, wrong way to go for private state.
BE: We aren't going to resolve this now, need to take it to es-discuss
AWB: For the spec, how do I spec Symbols?
Strings won't guarantee uniqueness
MM/BE: (quick discussion about uuid strings)
WH: What you're saying is that we need a gensym?
AWB: Essentially, what we need is a gensym
BE: Andreas implemented Symbol
AWB: Dug in against wrapper objects for Symbols
- (did someone catch this one)?
- Unique objects, unforgeable, can't set or access properties. Are actually objects.
BE: ARB says that v8 internal architecture makes it hard to add new
Consensus/Resolution
- Leave the spec as it is now
- Postpone until next f2f
5.12 Should we remove [[Construct]] from the MOP and Proxy handler API?
(Allen Wirfs-Brock)
AWB: recapping @@create changes...
new C(...args);
Essentially breaks down to:
[[Construct]] =>
let obj = C[@@create]();
return C.[[Call]](obj, ...args);
YK: This means that [[Construct]] will always call [[Call]].
AWB: The way the built-ins work, they override @@create; eg. Date creates a private data slot for the time
function String(value) {
if (!(this is an uninitialized instance)) {
return "" + value;
}
this.value = "" + value
}
String[@@create] => { value: uninitialized instance }
WH: Disapproves having String when called as a function do different things based on this. This breaks the invariant that String(x) always returns a primitive string.
WH, MM: Also concerned about adding a new uninitialized String instance type as a specing helper but which becomes reified and user-visible. Someone could call String's @@create directly, obtain one of these values, and cause mayhem. Too much surface area of potential problems here, and this is unnecessary complexity.
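The invariant WH defends is observable in shipped engines today: String called as a function always yields a primitive, while String with new yields a wrapper object.

```javascript
// String(x) as a function: always returns a primitive string.
const asFunction = String(42);

// new String(x): returns a String wrapper object, never a primitive.
const asConstructor = new String(42);
```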
YK: Objects to removal [[Construct]]
AWB: A Proxy trap?
BE/YK: Keep
Consensus/Resolution
- [[Construct]] remains.
Anti-Pattern to call a constructor without new
(Allen Wirfs-Brock)
AWB: In ES6, with class, it will be an anti-pattern... Don't call without "new"
BE: This is style/convention
Promote the use of new with classes
MM: Might want a constructor to refuse to initialize an instance of that class if the call object is not the
EA: Three browsers have implemented Map, Set, WeakMap, WeakSet and all are allowed to be called without new, which breaks subclassing
General agreement that this is bad.
AWB/MM: Function.prototype.@@construct
MM: If it implies runtime overhead that is not easily optimized, that would be a perfectly valid argument against. Does it?
In general, wherever we can replace a [[Foo]] internal property with an @@foo unique symbol named property, without penalty, we should. Especially if proxies would otherwise need a special trap for [[Foo]].
YK: Need to be careful when we change the MOP since other specs refers to the mop methods.
Consensus/Resolution
- Giving up on the convenience of calling constructors without new, with any expectation
- Throw when Map, Set, WeakMap, WeakSet are called without new
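This resolution is what shipped in ES2015: the collection constructors throw when called without new.

```javascript
// Map() without new throws a TypeError per the resolution:
let threw = false;
try {
  Map();
} catch (e) {
  threw = e instanceof TypeError;
}

// With new it works as expected:
const m = new Map();
```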
JSON
Any objections to sending the JSON draft 7 version to the general assembly
DC: Made changes. Specified code point. Removed the summary of the grammar; it was redundant. Also addressed the whitespace issue.
JN: Send proposal to ???. If you don't reply to this thread then it is an implicit approval.
6.2 Interfacing ECMAScript & HTML/DOM Event Loops
(Rafael Weinstein)
RWS: (A single slide) How does ES integrate with the rest of the specified environment with regard to scheduling tasks?
- Enqueue A Task
  - The environment must run the task at some point in the future
  - The task must be run after all previously enqueued tasks
  - The task must be run on an empty stack
- Enqueue A Microtask
  - The environment must run the microtask at some point in the future
  - The microtask must be run before all previously enqueued tasks
  - The microtask must be run after all previously enqueued microtasks
  - The microtask must be run on an empty stack
WH: Note that this defines a total order.
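The ordering rules above can be modeled with a toy event loop. This is a simplified sketch of the semantics under discussion, not the HTML spec's algorithm.

```javascript
// Toy model: two FIFO queues, with microtasks drained before the next task.
const taskQueue = [];
const microtaskQueue = [];
const enqueueTask = (fn) => taskQueue.push(fn);
const enqueueMicrotask = (fn) => microtaskQueue.push(fn);

function runEventLoop() {
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    task(); // each task runs to completion on an empty stack
    // all pending microtasks run before the next task, in FIFO order:
    while (microtaskQueue.length > 0) microtaskQueue.shift()();
  }
}

const order = [];
enqueueTask(() => {
  order.push("task1");
  enqueueMicrotask(() => order.push("micro1"));
});
enqueueTask(() => order.push("task2"));
runEventLoop();
// order: ["task1", "micro1", "task2"]
```

The microtask enqueued during task1 runs before task2, which is the total order WH notes.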
MM: We need to decide how tasks or microtasks that originate from EcmaScript behave
MM: No nested event loop?
General agreement that the ES spec not support nested event loops. If the browser specs require them, i.e., for running JS code while a modal dialog is blocked, then the browser specs would need to state that this is an intended violation of the ES event loop model.
YK: Timing is another issue
MM: Promise scheduling, fifo
Discussion re: the host vs. w3c bug...
Consensus/Resolution
- Needs more offline discussion
Value Objects Update
(Brendan Eich) ValueObjects.pdf
BE:
Use Cases:
- Symbol
- int64, uint64 (53 bits not enough)
- Int32x4, Int32x8 (SIMD)
- float32
- Float32x4, Float32x8 (SIMD)
- bignum
- decimal
- rational
- complex
Overloadable Operators
- | ^ &
- ==
- < <=
- << >> >>>
-
-
-
- / %
- ~ boolean-test unary- unary+
Preserving Boolean Algebra
- != and ! are not overloadable to preserve identities including
- X ? A : B <=> !X ? B : A
... Too fast, request slides.
www.slideshare.net/BrendanEich/value-objects
"complex and rational cannot be composed to make ratplex"
AVK: Multiple globals will cause issues.
BE: That is not an issue with this proposal. It is an issue with multiple globals. ... we need literal syntax for readability. ... no solution for user defined literal suffixes.
BE: Some have requested mutable value objects in order to represent small tuples and be able to do updates on them in a loop.
WH: This no more requires value objects to be mutable than incrementing a loop counter requires integers to be mutable. It's the variable that holds the integer 3 that's mutable and can be changed to refer to a different integer; you can't change the integer 3 itself to be 5. If the value is a small tuple and the source and destination are the same, it's easy enough for a compiler to transform a functional-style tuple update into imperative code if it likes.
WH, MM: Don't want mutable number literals/objects. No new Float32x4(a, b, c, d). This would break === (which would then need to do identity matching instead of same-value matching).
Examples discussed:
typeof x == typeof y && x == y  <=>  x === y
0m === 0
0L == 0
0m == 0L
BE: typeof becomes advisory
AWB: You can register the typeof result once during registration. That way we can enforce that it does not change.
Consensus/Resolution
- NaN requires separately overloadable <= and < [Slide 5]
- Intersection means function identity matters, so multimethods can break cross-realm [Slide 9]
- Mark notes that I or i as a bignum suffix conflicts with complex [Slide 11].
- Always throw on new -- value objects are never mutable and should not appear to be so, even if aggregate [Slide 12]
- Need to work through any side channel hazard of the typeof registry [Slide 13] and the multimethod dispatch "registry"
6.5 Parallel JavaScript (River Trail)
(Rick Hudson) ...need slides
RH: We have to go parallel to keep up with other languages
YK: Don't want to fallback into sequential
Various: Debate about what happens when the parallel computations have side effects that introduce dependencies between them. Options are either devolving into sequential computation or throwing an exception.
RH: The code behaves the same way as sequential code but goes faster if there are no side effects.
WH: What happens if there are no side effects but some of the computations throw exceptions? Which exception do you get?
RH: Any of them. There are also other implementation options here.
WH: If it's any of them, then this is not like sequential code.
WH: What exactly is a side effect? How would a programmer know that some ECMAScript construct has an internal side effect?
WH: In particular, suppose that I want to use a parallel loop to fill a big matrix with random numbers. Is calling a random number generator considered to be a side effect or not? If the answer is yes (it is a side effect), then how would one fill a big matrix with random numbers in parallel, as that is something that one would reasonably want to be able to do?
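WH's question has teeth because a typical seeded RNG mutates shared state on every call, so calling it from parallel iterations is a data race. A minimal linear congruential generator (an illustration, not part of the proposal) makes the hidden mutation explicit:

```javascript
// A seeded LCG: each call mutates `state`, which is shared by all callers.
function makeLCG(seed) {
  let state = seed >>> 0; // shared mutable state: the side effect in question
  return function next() {
    state = (Math.imul(state, 1664525) + 1013904223) >>> 0;
    return state / 4294967296; // value in [0, 1)
  };
}

const rand = makeLCG(42);
// Sequential fill is fine; parallel iterations calling rand() would race on `state`.
const row = Array.from({ length: 4 }, () => rand());
```

One common answer is to give each parallel iteration its own seed/generator, which sidesteps the shared state entirely.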
Consensus/Resolution
- Throw instead of falling back to sequential.
- Focus on concurrency/scheduling in ES7. Make sure it fits with other concurrency constructs (promises/event queues)
- Discussion/Acceptance in ES7 process.
RWS Proposal For Specification Process (for ES7 process)
Consensus/Resolution
- Go forth
7 Internationalization
NL: Implementations of ECMAScript Internationalization API: - Microsoft has shipped it in Internet Explorer 11 beta - Opera has shipped it in Opera 15 (based on Chromium)
Future Meetings
Sept 17-19, Boston; Nov 19-21, San Jose
July 29 2014 Meeting Notes
Brian Terlson (BT), Dmitry Lomov (DL), Waldemar Horwat (WH), Allen Wirfs-Brock (AWB), John Neumann (JN), Rick Waldron (RW), Eric Ferraiuolo (EF), Jafar Husain (JH), Jeff Morrison (JM), Mark Honenberg (MH), Caridy Patino (CP), Sebastian Markbage (SM), Istvan Sebestyen (IS), Erik Arvidsson (EA), Brendan Eich (BE), Mark Miller (MM), Sam Tobin-Hochstadt (STH), Domenic Denicola (DD), Peter Jensen (PJ), John McCutchan (JMC), Paul Leathers (PL), Eric Toth (ET), Abhijith Chatra (AC), Jaswanth Sreeram (JS), Yehuda Katz (YK), Dave Herman (DH), Brendan Eich (BE), John-David Dalton (JDD)
Introduction
JN: (Welcome and host details)
Introductions.
Agenda: tc39/agendas/blob/master/2014/07.md
JN: Agenda approval?
Approved.
JN: Minutes from June 2014 approval?
Approved.
4.1 Review Latest Draft
(Allen Wirfs-Brock)
rwaldron/tc39-notes/blob/master/es6/2014-07/rev26-summary.pdf
AWB:
Slide 1
- "Task" => "Job"
- Generator object return method; for-of/in loops use the return method on generators
- GetMethod now treats null and undefined equivalently, as meaning no method available
- Eliminated the ability of Proxy handlers to extend the set of property descriptor attributes they expose via [[GetOwnProperty]]
- Added invariant checks for Proxy [[OwnPropertyNames]] internal method
- Added an informative generator function based definition for ordinary object [[Enumerate]]
- Another round of updates to 9.2.13 FunctionDeclarationInstantiation to fix various scoping bugs.
- Eliminated duplicate property name restrictions on object literals and class definitions
- Revisited @@unscopables support in Object Environment Records
RW: Clarification about #3
MM: Prevent the ability to lie about descriptor values; prevents a "time of check to time of use" (ToCToU) leak vulnerability
(re: #5) WH: Informative implementation may cause readers to incorrectly assume that anything that doesn't conform to that informative implementation is wrong. In particular, worried about users assuming that implementation behaviors that the informative implementation doesn't do can't happen.
(discussion re: normative vs informative prose in spec)
WH: "Note" sections?
AWB: Sections marked "Note" are informative, making them normative would be redundant
WH: Too many "Note" sections appear to be normative rephrasings of other normative text, which we labelled as informative only because we're afraid of redundancy. Then when we have one which describes something that behaves quite differently from the normative text, it can be misleading.
Slide 2
- For-of now throws if iterable value is null or undefined (also reverted comprehension to throwing for that case)
- Date.prototype.toString now uses NaN as its time value when applied to an object without a [[DateValue]]
- Function poison pill caller and arguments properties are now configurable
General concern about whether you could take the sloppy-mode behavior and add it to a strict-mode function via reflective APIs.
Turns out that the sloppy mode behavior is implemented in all current engines as a magic value property, so this will not be possible (phew!).
- await is a FutureReservedWord when parsing and the syntactic grammar goal symbol is Module
- Better integration of Object.prototype.toLocaleString and String.prototype.toLocaleString with ECMA-402
- Added name property for bound functions in Function.prototype.bind. Fixed bugs in generating the length property in Function.prototype.bind
- Tweaked Script GlobalDeclarationInstantiation to deal with error situations that could arise from misusing proxies for the global object.
- Changed handling of NaN from a sort comparefn to match web reality (ecmascript#2978)
MM: (re: #6) What is the bound name?
AWB: "bound ..."
(per previous resolution—find and link)
AWB: (re: #8, sort behavior when comparison function returns NaN) the change adds: "If v is NaN, then return +0. "
WH: Note that that bug contains other examples where sort would still be inconsistent. WH: The issue here is that +∞ - +∞ is NaN, which means that using a-b as a sort function allows you to compare +∞ with -∞, but +∞ is not equal to itself. WH: I wrote this wording in ES3 and I'm fine with this change. It will fix the behavior of sorting arrays containing ±∞ but not NaN's. With NaN's you'll still get an inconsistent sort order (because the ordering relation is not transitive) and hence implementation-defined behavior.
MM: Worried that implementation-defined behavior could do bad things such as violate memory safety. Should state that it doesn't.
MM: (filing bug to make language more specific to avoid memory safety violations)
AWB: There was never a concern about violating memory safety because we don't define memory safety
WH: I tried to formalize it in ES3 but it was too much trouble. I wanted to state that the sort always produces a permutation, but that doesn't apply to sorting oddball input objects containing things such as holes with prototype properties showing through, read-only properties, getters/setters, proxies, .... Can't write "sort doesn't violate memory safety" without formalizing what memory safety is.
MM: Will write concrete suggestion for handling and submit as part of bug
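The inconsistency WH describes is easy to demonstrate: with the common comparator (a, b) => a - b, Infinity compared against itself yields NaN, i.e. "equal", even though the comparator says Infinity is larger than everything else.

```javascript
const cmp = (a, b) => a - b;

// Infinity - Infinity is NaN, so the comparator effectively reports "equal":
const selfCompare = cmp(Infinity, Infinity);

// With the ES6 change (NaN comparefn result treated as +0), arrays containing
// ±Infinity but no NaN elements sort consistently:
const sorted = [Infinity, -Infinity, 0].sort(cmp);
```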
Slide 3
- Updated Symbol conversions:
  - aSym == "not a symbol" produces false
  - var s = Symbol(); s == Object(s) produces true
  - "foo" + aSymbol or aSymbol + "foo" throws TypeError
  - Symbol @@toPrimitive returns the wrapped symbol value
  - ToNumber(aSymbol) throws
- Spread now works on strings: var codeUnits = [..."this is a string"]
- yield* now works with strings: function* getchars(str) { yield* str }
- Annex B support for function declarations in IfStatementClauses
- Annex B (and 13.12) support for legacy labelled FunctionDeclarations
- Updated Annex C (strict mode summary) WRT ES6 changes and extensions
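These slide items all shipped in ES2015 and can be checked directly:

```javascript
// Spread over a string yields its code points:
const codeUnits = [..."hi"]; // ["h", "i"]

// yield* works on strings:
function* getchars(str) { yield* str; }
const chars = [...getchars("ab")]; // ["a", "b"]

// Symbol conversion rules:
const aSym = Symbol();
const eqString = (aSym == "not a symbol"); // false, no coercion applies
const eqWrapper = (aSym == Object(aSym));  // true, wrapper unwraps via @@toPrimitive
let concatThrew = false;
try {
  "foo" + aSym; // implicit string conversion of a symbol throws
} catch (e) {
  concatThrew = e instanceof TypeError;
}
```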
EA: (re: #2) Removed the Object check?
BE: Andreas didn't want values on the right of a destructuring (number, etc)
EA: Definitely want to spread strings
BE: Agreed, we should revisit.
(Added: tc39/agendas/commit/370e3029d01659620e0ca03bf370eb5beefca45e )
Re: #4, the resolution: rwaldron/tc39-notes/blob/master/es6/2014-06/jun-6.md#block-scoping-issues
BE:
(function f() {
console.log(g);
L: function g() {};
})();
DH: The grammar: inside a LabelledStatement, can't start with "function"
BE/AWB: Clarification of Statement and Declaration
DH: Suggest: we can deal with this post-ES6. It's a useless thing that happens to parse and not worth our immediate attention.
AWB: We need to decide if this is a function declaration, does it hoist?
DH: But won't affect the web, existing can't be relied on. The existing work has been done, but no additional work
WH: Treat this the same as function declarations inside of statements, ie. if (true) function f() {}. Do we allow while (true) function f() {}?
YK/BE: Let's take this offline.
WH: Let's keep it as it is in ES5
AWB: Whoever maintains web specs...?
BE: Not all browsers do the same:
(results of above code)
- SpiderMonkey: ReferenceError
- V8: function g() {}
- JSC: function g() {}
- Chakra: function g() {}
AWB: Spec is up to date, without the modules work.
4.6 Unscopables
(Erik Arvidsson)
rwaldron/tc39-notes/blob/master/es6/2014-07/es6-unscopables.pdf
Object instead of Array
Array.prototype[Symbol.unscopables] = {
...
};
(with null prototype)
Walk The [[Prototype]] Chain
For
- HasBinding
- GetBindingValue
But nor for:
- SetMutableBinding
AWB: essentially replicating the prototype lookup algorithm in two additional places. Realized a third.
EA: ...
Setter Issue
SetMutableBinding ignores @@unscopables so we can get a mismatch:
with (object) {
x = 1;
assert(x === 1); // can fail
}
YK: Needs to be written?
AWB: More that wasn't considered. Proxy issues in bug (above)
EA: The problems arise when your prototype has getter or setter or proxy. The result of HasBinding can return true for a property further down the prototype chain, but then Set got invoked using a setter that was black listed in HasBinding.
AWB: Proposal is, do what we've done and leave setting as is.
Only apply unscopable at the local level, don't walk the prototype chain
Any binding resolution operation, on a with env record:
- looks up unscopables, doesn't matter where in the prototype
- checks the name against unscopables (has, not hasOwn)
- if found, continue up to the next level
Only applies to "with environments"
STH: Should unscopables affect things in the prototype chain
WH: Does it apply to both reads and writes?
AWB: Yes
STH: Looks only at the object with unscopables as own property?
AWB: No, it's on the prototype, so not own
STH: Should this apply to all spec algorithms?
YK: Everyone agrees?
STH: AWB's proposal says no
AWB: Only object environment records for with
STH: it should apply to SetMutableBinding?
AWB: Yes.
EA: The reason for not doing [[Get]] was because you might have instance properties on the object
YK: Can link the unscopables
EA: Agreed. And don't do hasOwn
STH: can break existing programs that use instance properties
BT/AWB: Discussion about compatibility.
EA: What about Globals & unscopables? The global object is an ObjectEnvironment too. Do we plan on adding unscopables?
Generally, no.
YK: Are we sure there is no case to use unscopables on the global object
RW: Could we specify no unscopables on global now and relax it later?
EA: for ES7
AWB: If this only applies to with environments, then that's not part of the global object
MM: Unless you do: with(global object) {...}
Confirm.
AWB: Is this a function of with or the environment?
Conclusion/Resolution
- @@unscopables only works inside of with object environment records, not global object environment records.
- Revert to the previous algorithm:
  - looks up unscopables, doesn't matter where in the prototype
  - checks the name against unscopables (HasProperty, not HasOwnProperty)
  - if found, continue up to the next scope level
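As shipped, @@unscopables landed as a null-prototype object on Array.prototype whose true-valued keys are hidden from with-scope lookup; the shape is checkable even without using with:

```javascript
// The blacklist object lives on Array.prototype under Symbol.unscopables:
const unscopables = Array.prototype[Symbol.unscopables];

// `values` is flagged true, so inside `with (someArray) { ... }` the identifier
// `values` resolves to an outer binding, not Array.prototype.values:
const hidesValues = unscopables.values === true;

// The object is created with a null [[Prototype]]:
const nullProto = Object.getPrototypeOf(unscopables) === null;
```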
4.8 Consider if Object.assign should silently ignore null/undefined sources
(Sebastian Markbage)
(Request slides)
SM:
Object.assign({}, undefined);
This throws, but propose that it shouldn't.
SM/AWB: Object.keys was relaxed
Object.assign(undefined, {});
This should still throw
DH: undefined and null are probably treated the same by ==
Do we want to treat null and undefined the same? Probably not.
DD: The mental model should be for-of + default arguments, not for-in
MM: use case for tolerating the null is in JSON data. JSON has no way to represent undefined, except for null
JH: or omission
SM: Covered existing libraries to use Object.assign; feedback almost always included the undefined case.
JM: Did you distinguish null and undefined?
SM: No
YK: We should distinguish or we have two nulls
Conclusion/Resolution
- do not throw on undefined
- will throw on null
Short discussion about making generator.return() throw a special exception.
DH: Want to bring up Andy Wingo's preference (discussed on es-discuss) for modeling return() as an exception rather than a return.
General opposition.
Conclusion/resolution
- keep as is: return() method produces a return control flow, not an exception
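The resolution in action: return() produces a return completion, which runs finally blocks on the way out rather than surfacing as an exception.

```javascript
const log = [];
function* g() {
  try {
    yield 1;
  } finally {
    log.push("cleanup"); // runs on return(), just like a normal return would
  }
}

const it = g();
it.next();               // suspend at `yield 1`
const r = it.return(42); // return completion: finally runs, nothing is thrown
```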
AWB: In the process of for-of, if a throw occurred, does that turn into a throw to the iterator?
NO.
Yield *
AWB: Does an internal throw? When generator.throw() is called and the generator has a yield*, the spec currently calls throw in the yield* expression
DH: Call return() on the outer generator delegates calling return() to the delegated yield*
BE:
// (range is assumed to be a helper generator yielding 0 .. bound-1)
function* g() {
  yield* h();
  console.log(42);
}
function* h(bound) {
  try {
    for (let i of range(bound)) {
      yield i;
    }
  } finally {
    console.log("h returned");
  }
}
let it = g();
it.next(); // returns {value: 0, done: false}
it.throw(); //
AWB: If it.return(), we would send a return to the h instance.
Confirm.
AWB: If it.throw(), do we send a return to the h instance?
MM: we would do a throw?
AWB: we wouldn't. Think of the yield* as a for-of. h() doesn't know what its client is.
The problem: the resumption from the yield is throw, back in the yield*, what to call on h()
DH: Propagate, that's the point of yield*; it should behave as if the inner generator is inline and anything it does propagates.
AWB: My mental model of yield* is that it expands to a for-of { ... yield ... }
MM: You should not think of it that way.
DH: You should not base your mental model off of an expansion; you should base it off of what yield* is meant to be used for. The desugaring into for-of is not at all straightforward.
AWB: the desugaring in the algorithms in the spec is not actually that complex...
DH: the way to think about this is to directly inline the body of h into the body of g, not as a generator equivalent. This is the generator analogue of beta-equivalence.
AWB: why don't you do that for any function then?
DH: well, if we had TCP, we would have beta-equivalence.
YK: (Saw that one coming...)
DH: the important refactoring property to have is that you can extract out some generator logic into a helper function and then call it with yield*. Ben Newman was talking about a similar/related thing. It is very important that the throw to the outer generator get delegated as a throw through the inner generator.
MM: What is the model that the user has: who is the throw complaining to?
AWB: And who has control? Is the generator calling out and getting something back, or into another generator?
DH: for-of and yield* have different roles: for-of is a consumer and the generator stops there. yield* is consuming and producing by composition.
JH: Two models: consuming and emitting. yield* is a stream fusion,
STH: yield* compensates for the shallowness of yield
DH: It allows composition of generators
JS: You could expand these to for-of
YK: Those that work in C# find this to be a natural way to think of this, but others may not
JS: My concerns are speed
DD: Think about how you could desugar and see where it falls down
YK: disagreement
DH: Fall over in more cases
MM: Would it be plausible to have the throw propagate to the inner generator as well?
DH: check out PEP-380. The desugaring is mind-bogglingly complex, but the refactoring principle is very straightforward. Refactoring should not introduce corner cases where it behaves differently.
MM: consensus that yield* delegates throw?
JS: no objection
Conclusion/Resolution
- yield* delegates next(), return() and throw()
- for-of propagates abrupt completion outward, calls the iterator's return()
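The delegation resolution in action: throw() on the outer generator propagates through yield* to the suspended inner generator, whose catch runs first.

```javascript
const caught = [];
function* inner() {
  try {
    yield 1;
    yield 2;
  } catch (e) {
    caught.push(e); // the delegated throw lands here
  }
}
function* outer() {
  yield* inner();
  yield "after";
}

const it = outer();
it.next();                     // suspended inside inner at `yield 1`
const step = it.throw("boom"); // inner catches; outer resumes past the yield*
```

Because inner's catch swallows the exception and inner completes, the outer generator simply continues to its next yield.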
AWB: Another question... when we do the return, that may return a value and we're currently throwing that value away. There is no way to override the normal loop termination.
DH: If we do a throw that causes us to call return() on a generator, and that returns a value, the value is dropped on the floor, which is consistent.
AWB: If the return call on the iterator is an abrupt return (normally means an exception)...
MM: The call to return itself completes abruptly?
AWB: Yes
MM: it's "finally-like", that supersedes
AWB: In one sense, a dropped exception
BE: Let's have a smaller group with the champions look at the final specific details.
4.11 Consider adding "attribute event handlers" to ANNEX B
(Allen Wirfs-Brock)
AWB: Add to Annex B the semantics of defining an attribute handler so that the HTML spec can get out of the business of spec'ing a distinct form of ES function.
MM: An internal function for other specs?
AWB: If you're implementing a browser, you'd follow this specification.
YK: Isn't this just with?
DD: No, there is more there (see kangax.github.io/domlint/#5 for details)
EA: Why can't this be in the HTML living spec?
AWB: That's the problem, Hixie is using internal APIs, in some cases incorrectly.
MM: Is this for ES6?
BE: Not important enough for ES6
AWB: not a lot of work.
No support for ES6
DD: I don't think we should push for ES6
RW: Last meeting pushed back 6 months, this isn't that valuable.
Conclusion/Resolution
- Scheduled for ES7 Annex B
4.9 Arguments/caller poisoning on new syntactic forms - Arrows, Generators
(Brian Terlson)
BT: All function-like things agree on having arguments object
DH: What's wrong with having the poison properties?
BT: The motivation for having those properties may not apply to those new syntactic forms.
MM: Keep them and they are there and configurable, or mandate w/o caller and arguments properties
EA: Too much weight on edge cases
MM: Born without extra properties would be fine
AWB: Do we even need poison pills?
BT: Can we get rid of it?
AWB: Can't add properties called "caller" and "arguments" to strict mode functions
MM: New forms, the properties are absent.
AWB: Are these properties implemented as own properties or inherited?
MM: And we agreed that Function.prototype remains a function
Conclusion/Resolution
- Get rid of all poisoned caller and arguments, except for the poisoned caller and arguments on Function.prototype
- All functions born of non-legacy function syntactic forms do not have caller and arguments properties
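This resolution is observable in shipped engines: functions from new syntactic forms have no own caller/arguments properties, while Function.prototype still carries the poisoned accessors.

```javascript
const arrow = () => {};
function* gen() {}

// Neither an arrow nor a generator function carries own caller/arguments:
const hasOwnCaller =
  Object.getOwnPropertyNames(arrow).includes("caller") ||
  Object.getOwnPropertyNames(gen).includes("caller");

// The poisoned accessors live only on Function.prototype:
const onProto = Object.getOwnPropertyNames(Function.prototype).includes("caller");
```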
4.10 Signaling stability of existing features
(Domenic Denicola and Yehuda Katz)
YK: Problem: ES6 signaling is too fuzzy. ES6 is a monolithic thing. Three stages:
- Seeking the happy-path semantics
- Find the happy-path semantics
- Finalize edge cases, done.
Need to
AWB: What are we doing that sends the wrong message?
YK: eg. when we said we were pushing for a 6 month extension, people assume this means all features are unstable
RW: (relaying additional experience re: above)
DD: Proposed stages:
- Locked
- Stable
- Almost Stable
- Stabilizing
(need to fill in descriptions from proposal document)
AWB: The problem is that some things that are "locked" become "unstable"
JM: It's possible to be "unstable" until spec is published
STH: And publication isn't even the end either.
WH: Any time someone proposes something like this, I want to ask if this would've correctly predicted the results had we done it some time ago. For example, had we done this, say, in January then comprehensions would have been in the Locked stage, but then we took them out.
WH: Math functions are listed in the Locked stage in your proposal but at the same time we have important discussions at this meeting about their precision.
?: Math function precision could be a different feature.
WH: That's weasel wording — when you want to change some aspect of a feature, you just move the goalposts to make that aspect a separate feature.
DD: "we're" not good at the PR of specification churn
MM: Not sure what this proposal is really addressing. The community has a way exaggerated sense of instability, and over-reacts to any change. So what?
JM: Won't implement modules at FB because of churn.
AWB: Does this change the model for ES7?
DD/YK: No.
AWB: So this is for the next 5 months of ES6?
MM: Not enough community feedback because the feedback is limited to only those that are willing to accept churn?
YK: Yes
DH: Priorities: getting feedback for ES6 is low, because it's too late in the game. Focus feedback priority on ES7. Despite the inclusion of more practitioners in the TC, there are still broad misunderstandings about TC39 and ES6.
DD: The perception is that ES6 is the new ES4, except that we all know this isn't true.
AWB: Two things... concerns about how you're defining these stages. Who is going to do this work? I don't want to say 5 months from now that the spec is "unstable" in its entirety.
Mixed discussion about implementor opinion of feature.
AWB: We don't want uninformed feedback that we have to filter
DH: It's really bad to not talk to the community, because people think the worst.
YK: A vast majority of ES6 is stable
MM: How we should be messaging as individuals. TC39 should not be spending time
PL: This is all too hard to quantify and assess because change will happen.
DH: Stability chart is
Conclusion/Resolution
- Individual evangelism, feedback and outreach
Postpone Realm API to ES7
MM: Can we?
DH: I'm ok with this, but don't want to be in a situation where we're permanently postponed while waiting for a security review. Let's reach out to other security reviewers.
DD: I can implement a draft implementation in node for the purpose of review.
AWB: The modules spec depends on realms
DH: Only the ability to specify the Realm in user code needs to be removed.
MM: let's pull Realm from ES6, if there are issues we can address them.
AWB: The Realm API cleaned up how you go about eval'ing things.
DH: These cleanups can stay as-is, now ready for the reflection to come in ES7
AWB: Not going to have any way for user code to eval in Loader.
DH: w/o Realm no ability to virtualize eval. Doesn't affect utility of Loader. The specification is detailed and complete, should continue moving forward.
MM: And any issues that are encountered can be addressed.
DH/YK: Agreement that test implementation in node is ideal (vs. browser)
Discussion re: security issues created by implementations in browsers.
MM: The security implications and risks are greater for Realm because this is the sandbox api.
DH: Agree.
Conclusion/Resolution
- Realm postponed to ES7
Revisit Object.assign()
JDD: The issue currently: if we allow undefined then null is the only value not allowed. I don't see anything distinguishing. It's strange that null is singled out like this. When null is used correctly, it makes sense here.
DD: Then the argument is that it should also throw
JDD: No, it shouldn't throw.
YK: Should throw on numbers, booleans, etc.
JDD: Should affect Object.keys as well
YK: Doesn't have to
JDD: There shouldn't be special casing for null and undefined
DD: undefined triggers the default parameter, null doesn't.
YK: The mental model is: undefined is missing, null is not
AWB: Mentions the relaxation of rules for Object.keys
YK: We should enforce the difference between null and undefined
SB: (details about a study in FB code re: how null and undefined are being used)
DH: We need to decide whether there is a useful programming model for these cases: null and undefined
JDD: I think the boolean, number, string values are a side effect because they are just treated as empty. Propose to treat both null and undefined the same way.
JM: Sounds like a better argument against boolean, number, string.
AWB: (example of a number object to be extended)
SB: The difference is target vs. source; null and undefined throw for target.
Mixed Discussion
DH: To avoid rehashing, guiding principle:
- null represents the no-object object, just like NaN represents the no-number number
- undefined represents the no-value value
Conclusion/Resolution
- Overriding previous resolution: Object.assign does not throw on null or undefined
- Adhere to the guiding principle stated above
Test 262 Update
(Brian Terlson)
BT: CLA is now online, fully electronic. Lots of contributions, specifically awesome help from Sam Mikes.
- Improvements to the test harness
- Repeat contributors
- Converting ES5 to ES6
- Converting Promise test inbound
- Massive refactoring commit
Discussion about Promise testing
JN: Work with Istvan to write a press release for this?
BT: Yes.
DD: Node runner?
BT: MS has been using a node runner internally, I've pulled out the useful pieces and pushed to github: bterlson/test262-harness
Conclusion/Resolution
- announcement effort
July 30 2014 Meeting Notes
Brian Terlson (BT), Dmitry Lomov (DL), Waldemar Horwat (WH), Allen Wirfs-Brock (AWB), John Neumann (JN), Rick Waldron (RW), Eric Ferraiuolo (EF), Jafar Husain (JH), Jeff Morrison (JM), Mark Honenberg (MH), Caridy Patino (CP), Sebastian Markbage (SM), Istvan Sebestyen (IS), Erik Arvidsson (EA), Brendan Eich (BE), Mark Miller (MM), Sam Tobin-Hochstadt (STH), Domenic Denicola (DD), Peter Jensen (PJ), John McCutchan (JMC), Paul Leathers (PL), Eric Toth (ET), Abhijith Chatra (AC), Jaswanth Sreeram (JS), Yehuda Katz (YK), Dave Herman (DH)
RFTG Admin: ES6 Opt-out period.
(Allen Wirfs-Brock)
rwaldron/tc39-notes/blob/master/es6/2014-07/ecma-262-6-optout1.pdf
AWB: This is the opt-out period: Aug. 11, 2014 - Oct. 11, 2014
Final opt-out window: March 16, 2015 - May 18, 2015
Read the policy, speak with Istvan Sebestyen for further information.
The opt-out version of the spec is: ECMA-262 6th Edition, revision 26, document: tc39/2014/031
4.4 Instantiation Reform (Review @@create design rationale and possible alternatives)
(Mark Miller, Allen Wirfs-Brock, Dmitry Lomov, Tom Van Cutsem. Based on Claude Pache proposal )
rwaldron/tc39-notes/blob/master/es6/2014-07/instantiation-reform.pdf
AWB: Currently:
new Foo(arg)
Foo.[[Construct]](arg) ::=
let obj = Foo[@@create]()
Foo.call(obj, arg)
The actual constructor method typically does all of the initialization and setup on the instance object.
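The two-step [[Construct]] behavior described above can be sketched in plain JavaScript. The `create` symbol and `construct` helper below are hypothetical stand-ins for @@create and [[Construct]], not real APIs:

```javascript
// Hypothetical stand-in for the @@create symbol.
const create = Symbol("@@create");

// Sketch of [[Construct]]: allocate via @@create, then initialize by
// calling the constructor function on the fresh object.
function construct(F, args) {
  const obj = F[create]();            // allocation step
  const result = F.apply(obj, args);  // initialization step
  // As with [[Construct]], an object returned by the body wins over obj.
  return (result !== null && typeof result === "object") ? result : obj;
}

function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point[create] = () => Object.create(Point.prototype);

const p = construct(Point, [2, 3]);
p instanceof Point; // true
```

The decoupling the notes worry about is visible here: nothing stops someone from calling `Point[create]()` directly and obtaining an allocated-but-uninitialized instance.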
Issues:
(DOM centric)
- If instances aren't sufficiently initialized by @@create, then instance objects could leak (e.g. via a nefarious decoupling between the @@create allocation process and the constructor-function initialization process)
- @@create could be called directly
DH: Do we have concrete examples of this problem?
JM: Dealing with legacy code where you want to subclass from legacy constructors; need to set up state, but this is uninitialized
AWB: Gives Date example, where [[DateValue]] isn't set up until super()
DH: The initialization of the internal field cannot happen until the super constructor is called to create that field.
YK: This is caused by the @@create not accepting the constructor arguments.
AWB: Yes. Propose: Remove @@create, replace it with @@new, which will make the constructor arguments available to it
YK: When you subclass, it's normal to call super and pass the arguments.
AWB: @@create is a property of the constructor, not the instance.
Creating a subclass, you may want to do something with the arguments before
passing to super
JM: or adjust some state on the subclass instance before calling super
AWB: There is a complication that built-ins have: different behaviour when new'ed or call'ed. No internal distinguishing mechanism. No way to indicate that a constructor was called or newed.
YK: Don't think we need to be concerned with handling
AWB: A subclass will create a "call" up to the super
Explanation of current spec handling.
JM: Issues: were we called, or newed. One deals with intermediary state initialization.
AWB: The issue is coupling between state and the instance. Do we agree that there's a problem?
(no one says no)
JM: There are scenarios where a subclass wants to initialize state before calling super()
YK: It seems like a feature, not a bug
WH: What are you calling the intermediary state
YK: The fact that you can observe the creation. There has to be a brand check
DH: The simplest way of saying it is that they all need brand checks.
YK: Can do: foo.call(any) and there is obviously a check there.
AWB: Internal slots are initialized in an atomic unit.
YK:
DL: TypedArrays missing creation information
AWB: You can move all the logic [to @@create or @@new or something akin], but you've just created another constructor
WH: Why not do everything from @@create in the constructor
DL/AWB: Jason Orendorff's proposal.
DH: (not-quite-right summary of Jason's proposal)
AWB: One way: reify [[Construct]] to the user
DH: When new Foo(args), calls Foo[@@new](args) ... ?
DL: Just pass the args to @@create and change @@create to @@new
NM: But then the subtype must have same signature
AWB: 2 viable options for ES6
- Live with what we have, @@create is grungy, but not unsafe
- Alternative, originating with Claude Pache
Bring back to constructor with atomic invocation. I'm for this approach and it's reasonable for ES6
(Mark presenting...)
MM:
Goals
- Subclass exotics
- Avoid un- (or partially) initialized exotics
- ES5 compat (aside from "rcvr")
- ES6 class compat (aside from @@create)
- Reliable test for "am I called as a constructor?"
- Support base-creates-proxy scenario
class Derived extends Base {
constructor() {
// TDZ this, on "new Derived..." etc.
super(...otherArgs); // this = what super returns
// this is initialized.
}
}
//
function Base(...otherArgs) {
// implicit this = Object.create(mostDerived.prototype, {});
}
AWB: The super() call is calling the super class constructor as a constructor when new Derived() is evaluated; that's important.
WH: When constructor() is called as a function, super is called as a function too?
MM: Yes
WH: What causes the TDZ to appear? The statically visible presence of a super call in the body of the constructor?
AWB: Yes
WH: What if the super call is inside an arrow function?
BE: If Derived is called without new?
AWB: super() is called as a non-constructor.
WH: super cannot appear in a nested function?
AWB: they can appear, but... (trails off)
JM: A related use case is being able to set up stuff on the instance before calling super()
AWB: Show me code that does that, to make sure we don't break that.
BE: code that doesn't actually use super() won't break, and there is no such code yet
MM: Base is an example of a function that doesn't have a super call (because it can't). On entry, before user code, implicit init this of fresh newly created object. This is a difference from ES5. The "mostDerived" prototype ...?
AWB: this actually isn't a difference from ES5, because there is no super() in ES5
MM: you are correct
MM: how do people feel?
JM: It's not an issue with ES5 -> ES6 legacy, it's an issue with ES6 class
designs that evolve over time
YK: my concern is the pedagogy of this approach.
MM: the pedagogy is as shown in this slide.
DH: No! It cannot be taught this way.
BE: let's just let Mark present.
MM:
From Claude Pache
F.[[Construct]](args, rcvr)
- Distinguish functions-which-call-super
- Vanilla function at end of super-call-chain is base (instantiation postponed to base entry)
**Modifications to Claude's proposal**
F.[[Construct]](args, rcvr)
- mod: Only MOP signature change
- Distinguish functions-which-call-super
- mod: call-super-as-a-function: super(), but not super.foo()
- Vanilla function at end of super-call-chain is base (instantiation postponed to base entry)
- mod: instantiation postponed to base entry
YK: What about subclass constructors that don't include a call to super()?
AWB: Throw when new'ed
Agreement.
JM: I still have issues with state initialization
YK: Issues
BE: Concern about setting properties on the instance before super()
JM: Code patterns exist, they won't just go away.
AWB: Can get around it with super.constructor()
BE: Lose the static analysis of super( (right paren intentionally omitted)
MM:
[[Call]] Traps
F(...args) -> F.[[Call]](undefined, args)
Derive.[[Call]](const this, args)
super(...other) -> super.special_name(...other)
WH: What is the special name?
MM/AWB/DD: (to Waldemar) This is the ES6 spec
WH: Explain?
AWB: Methods that reference super are bound to the object where the super reference takes place; that binding is the current instance. There are two bound values: the object where lookup starts and the method name.
MM:
[[Construct]] Traps
new F(...args) -> F.[[Construct]](args, F)
Base.[[Construct]](rcvr, args)
entry -> const this = [[Create]](rcvr.prototype)
Derive.[[Construct]](args, rcvr)
entry -> TDZ this
super(...other) -> const this = super.[[Construct]](other, rcvr)
Remaining Requirements
Am I called as a constructor?
What is the original constructor's prototype?
How do I provide alternate instance to the subclasses?
Am I called as a constructor?
function F(...other) {
  let constructing = false;
  try { this; } catch (_) { constructing = true; }
  super(...other);
}
Base instantiates proxy scenario
function Base(...other) {
return new Proxy(... this.prototype ...);
}
Kill two birds with "new"
function Date() {
let now = $$GetSystemTime();
if (new*) {
let obj = Object.create(new*.prototype);
// obj@now = now; // private "now" state
return obj;
} else {
return ToTimeString(now);
}
}
MM: Proposing a new special form (shown as new* above) whose value is the most derived receiver, otherwise undefined.
The test being: reliably check if I am called as a constructor.
WH: Unless the most derived receiver is falsy. Is there a way to create such a thing?
AWB: Yes, you can invoke the reflection trap and specify a falsy value for the receiver.
MM: Modified the above example to:
if (new* !== void 0) ...
AWB: We could fix this by throwing if reflection is used to invoke a constructor with undefined as the receiver.
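The `new*` form never shipped as written; in the design TC39 eventually adopted, this role is played by `new.target`. A sketch of the Date pattern above using the shipped form (MyDate is an illustrative name, not the real built-in):

```javascript
// new.target (the shipped analogue of the proposed new*) distinguishes
// construction from a plain call.
function MyDate() {
  const now = Date.now();
  if (new.target !== undefined) {
    // constructed: return an object carrying the "private" state
    return { now };
  }
  // called as a function: return a string, like Date() does
  return new Date(now).toString();
}
```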
Reflection and Proxies
Reflect.construct(F, args, rcvr) (throw on undefined)
- construct trap: construct: function(target, args, rcvr)
YK: How does this work in ES5 classes?
AWB:
YK: Is conditional super a hazard?
MM: Yeah
AWB: New power, new complexity
YK: Exposing something that was implicit into the main path. Calling super in a constructor conditionally?
EA: Bug, can be fixed
AWB: (re: Date example) Where it shows Object.create...
DL/AWB: If you conditionally forgot to call super(), [[Construct]] will have to check at exit and throw.
YK: With @@create you had to know what you were doing. With this you could tread on weird cases without knowing it.
BE: Lets park that discussion for now.
DL: The signal that TypedArray is giving us is a sign of what user code might do as well, so they will have the same issue.
AWB: Better direction. Don't go another decade where implementations can have private slots, but user code cannot.
MM: The direction I've presented is what I prefer. What I'm actually proposing is that we allow Allen to be the champion and work out the details remaining. Objection?
None.
BE: No objection, but I want to make sure Allen works with YK, JM and Boris Zbarsky
Conclusion/Resolution
- Agreement to MM proposal: Allen to be the champion and work out the details remaining
(This did not gain final consensus, as follow up was necessary)
... On to JM objections
JM: Start with a class never meant to be subclassed. Later you want to re-use aspects of this class, but need a way to hook into the subclass this before super() is called.
DH: e.g. an initialize method that just sets up properties and state
AWB: If it's just state that it doesn't need to know about, it doesn't matter? If it's state that it does need to know about, what's the channel? Seems very tenuous at best
JM: An example, we want to re-write some of the dom before calling the parent constructor.
DL: How is dom related?
WH: Are you unable to munge parameters to constructor?
AWB: Consider a scenario where the DOM mutation is contained in a method of the super class that must be invoked, for side effect, with no dep on object instance state, but is an instance-side method. The way around is to access your prototype or original prototype and invoke the method on the instance
Discussion of legacy scenarios and validity.
AWB: More of a refactoring issue
YK/JM: Agreement that we need more real world cases.
MM: Need a very concrete example, showing: the code written that wasn't intended for subclassing and the newer code that's attempting to subclass.
YK: There are issues created by memoization
Discussion re: subclassing in general.
MM: Need to do the concrete example exercise, and before the end of this meeting.
AWB: The fallback is that we just keep what we have.
DD: Worried about @@create, that it won't be possible to subclass because there is negative feedback
MM: Break on this discussion until JM has adequate examples.
5.2 SIMD.JS
(Peter Jensen and John McCutchan)
rwaldron/tc39-notes/blob/master/es6/2014-07/simd-128-tc39.pdf
Other slides: peterjensen.github.io/html5-simd/html5-simd.html#
JMC: (introducing SIMD, Single Instruction Multiple Data)
Slide presentation
Proposing a Fixed 128-bit vector type as close to the metal while remaining portable
- SSE
- Neon
- Efficient scalar fallback possible
Scales with other forms of parallelism
WH: Why fixed 128, given that x86 SIMD is now up to 512-bit vectors?
DH: Plenty of real world use cases for this, video codecs, crypto, etc.
STH: Wider widths?
JMC: Yes.
AWB: Works with IBM PowerPC?
JMC: Yes, overlapping instruction sets.
Proposing, specifically:
- SIMD module
- New "value" types
- Composable operations
- Arithmetic
- Logical
- Comparisons
- Reordering
- Conversions
- Extension to Typed Data
- A new array type for each
- float32x4: 4 IEEE-754 32-bit floating point numbers
- int32x4: 4 32-bit signed integers
- float64x2: 2 IEEE-754 64-bit floating point numbers
- Float32x4Array: array of float32x4
- Int32x4Array: array of int32x4
- Float64x2Array: array of float64x2
Object Hierarchy
SIMD
- int32x4: add, sub, ...
- float32x4: add, sub, ...
- float64x2: add, sub, ...
DH: Introduce new value types, but does not depend on user created value types
JMC: Examples...
var a = SIMD.float32x4(1.0, 2.0, 3.0, 4.0);
var b = SIMD.float32x4.zero();
MM: Why is zero() a function instead of a constant?
JMC: It could be a constant.
... additional examples. See Slides.
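As a rough illustration of the semantics (not the proposed API itself), a float32x4 behaves like four single-precision lanes operated on element-wise; Math.fround models the 32-bit rounding. The names below are a scalar-fallback sketch:

```javascript
// Scalar sketch of float32x4 semantics: four single-precision lanes.
// Illustrative fallback only, not the SIMD.js API.
function float32x4(x, y, z, w) {
  return [x, y, z, w].map(Math.fround);
}

// Element-wise add, rounding each lane back to float32.
function addFloat32x4(a, b) {
  return a.map((lane, i) => Math.fround(lane + b[i]));
}

const va = float32x4(1.0, 2.0, 3.0, 4.0);
const vb = float32x4(10.0, 20.0, 30.0, 40.0);
addFloat32x4(va, vb); // [11, 22, 33, 44]
```

The point of the real proposal is that a JIT compiles such a lane-wise operation to a single vector instruction instead of four scalar ones.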
STH: How much of the difference is SIMD vs. single precision?
JMC: I don't have numbers, but would say SIMD
MM: Do SIMD instructions preserve denormals or flush?
JMC: ARM flushes to zero. On SSE you can select
Inner Loop
JMC: All high level JS can be stripped down in the JIT
Shuffling
(copy from slide)
JMC: Compiles down to a single instruction
WH: There are 256 of those constants defined?
JMC: Yes.
Branching
(copy from slide)
WH: What data type used to represent that?
JMC: int32x4
WH: Any kind of 4-bit data type for the mask?
Q about displayed data on slide
WH: Is select bitwise?
JMC: Yes
WH: Cool. It's data type agnostic and lets you slice into smaller bit slices.
WH: Also, those of us who do NaN-coding will need to beware, because this can both view and manufacture arbitrary NaN's.
How does VM optimize for SIMD
(copy from slide)
Firefox Implementation Status
(see slide)
Chrome/v8 implementation status
(see slide)
YK: is Chrome interested in these patches?
JMC/PJ: They want confirmation from TC39
DL: This is v8, not chrome. v8 team is fairly conservative.
Emscripten Implementation Status
(see slide)
JMC: Many of these operations are used in real world platforms written in C++
V8 SSE Benchmarks (Early 2014)
(see slide)
MM: How can you get faster than 4x faster with 4-way SIMD?
DH: Float32; the scalar baseline computes in double precision, so single-precision math adds speedup beyond the four lanes
SpiderMonkey SSE Benchmarks (Early 2014)
(see slide)
Dart VM NEON Benchmarks (Early 2014)
(see slide)
MM: Why are the relative speed-ups across the VMs so different?
JMC: Different output from different code
Why Fixed Width and not Variable Width Vectors
(see slides, 1 & 2)
STH: A problem bigger than variable width vectors: if we wanted 256-bit widths on 128-bit vector platforms, there would be observable differences.
JMC:
WH: Why is Intel building hardware with 128-bit vectors?
-- Dmitry Lomov (DL) will fill in details of discussion here.
JMC: this will expose hardware differences
JMC: no implicit conversions; 1 + <float32x4> will do string concatenation
MM: Why?
JMC: Too much magic.
DH & JMC: Overloading operators is ok; no lifting or implicit conversions
WH: It's bad that you can do -<float32x4> but not 2*<float32x4> and instead have to splat the 2 into its own vector first.
JMC: like asm.js, have to be clear about what types you're operating on.
YK: Don't have to make the ergonomics good
JMC: Don't have to, they never will be.
Planned Features 1
- SIMD and value objects/types
- float32x4 and friend will be value objects
- overloaded operators (+, -, ...) will be mapped to SIMD.<type>.<op> equivalents
- Additional data types (int8x16 and int16x8)
- Looking at VP9 encode/decode for justification
AWB: The top bullet has a lot of deps, but the bottom does not; are these near term?
JMC: Yes
WH: Why not int64x2?
JMC: Support is not universal
MM:
- Universal across processors
- something that has compelling algorithm
Unsigned integers don't fall in the second?
WH: Why not unsigned integers?
JMC: Not widely used
WH: uint32 perhaps, but uint16 and uint8 are heavily used in graphics to represent pixel values.
JMC: tried, issues encountered
- Extracting kernels and analysing the algorithms they're using and finding the instruction set overlap
- start with smaller scope, can expand later on. can add x8, x16 later. Surveyed internal teams
- 128 SIMD and expand from there.
MM: What's being saved, given the already exposed information?
JMC: time, complexity, etc.
AWB: How would you specify "slow", "fast", etc.
DH: Leave it undefined. "Highly recommended if supported", etc.
AWB: worried about gaming.
DH: same
Planned Features 2
(see slide)
DH: Risk:
- Some content is written such: if optimized, do this, if not, throw an error
- Browser doesn't want to be left out, will fake the optimized flag.
YK: The only reason to do the check is if you know you have a faster scalar implementation for systems without the SIMD feature; path of least resistance is to use polyfill and do no check at all. So maybe risk is not so great.
WH: Flip side also could be an issue: Web site has code for the optimized case which is not present on common platforms, somebody changes it and doesn't test it properly, it later breaks on optimized implementations, so browsers don't want to set the optimized flag.
JMC: (confirmed awareness of gaming)
BE: Some won't fake for fear of the performance cliff. See WebGL precedents.
Discussion re: risk, generally: some risks worth taking.
WH: instead of boolean, maybe a value that indicates speed level?
AWB: Application could do a mini benchmark as a test?
Stage 1 Ready?
(see slide)
AWB: Sounds like it is ready for stage 1. Can it be its own independent standard?
WH: It creates new primitive data types. Don't want specs outside creating new types
AWB: Do you expect every runtime to implement this?
JMC: Yes. They will run to implement this!
BE: Some embedded systems have trouble with regex and unicode; it's expected that there will be "code commerce" among distinct device classes' embedded runtimes.
MM: We need a general framework for new value types
AWB: Without the value types, it's fairly clear cut.
MM: Preserving reference identity makes it prohibitively expensive
DD: Per the ES7 model, the feature can progress without being in another spec.
Discussion of the spec process.
STH: Back to MM statement, what does typeof have to do with reference identity?
- Could be implemented by memoizing the identity, not that you'd implement that way
MM: (example of using a weakmap)
- Logically, if they're a reference type, we have to admit them to WeakMaps; if they are a value type we can reject them. I hadn't considered the memoization
AWB/DH: (clarification of coupling and timing issue)
DH: Needs to be the same semantics as value types, if we ship this sooner and learn that we made a wrong call, then we have to deal with deciding whether or not we apply the mistake or break with SIMD.
Conclusion/Resolution
- Moves to stage 1
4.3 Function parameter/let declaration name conflict rules
(Allen Wirfs-Brock)
rwaldron/tc39-notes/blob/master/es6/2014-07/parameter-scoping-7-14.pdf
Current spec, Controversial:
function(x) {
var x;
let x; // early error
}
function(x) {
let x; // early error <--------
}
try {
} catch(x) {
let x; // early error <--------
}
AWB: Andreas wants consistent handling
DH: The mental model is that let is block-bound,
DH: var is "I assert there is a binding in this scope, but that can be re-asserted as much as I want". let is "I have one unique declaration, I don't allow redeclaration".
YK: If you say var x = 42 half way down the function, you can use the original parameter x until that point. With TDZ, if you had let x = 42 half way down, you couldn't mean anything with x
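YK's point, as a minimal sketch (the let version, shown in a comment, is exactly the early error under discussion):

```javascript
// With var, the parameter stays visible before the redeclaration:
function withVar(x) {
  const before = x; // still the parameter; `var x` below is a no-op rebinding
  var x = 42;
  return [before, x];
}
withVar(1); // [1, 42]

// With let, the same shape is rejected up front under the current spec:
// function withLet(x) {
//   const before = x; // would be a TDZ read if it were allowed at all
//   let x = 42;       // early error: redeclaration of parameter x
// }
```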
DD: (points about let and const protecting from mistakes)
BE: (channeling Andreas) Worried that there will be errors when you want to shadow.
DH/YK: The shadowing is meaningless.
MM: I was indifferent, but side with Dave's points about refactoring
STH: Generating code via macros, introduces non-local restrictions that could break
DH: Just have a notion of parameter bindings and block bindings, distinct from the surface syntax, and the latter can't shadow the former; an easy workaround for code generators is to add an extra pair of { }.
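The extra-braces workaround DH mentions, sketched:

```javascript
// A code generator that wants a fresh `x` can emit an extra block so the
// `let` lives in its own block scope instead of clashing with the parameter.
function f(x) {
  {
    let x = "generated"; // fine: distinct block scope shadows the parameter
    // ...generated code uses this x...
  }
  return x; // back to the original parameter
}
f("param"); // "param"
```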
MM: (example of user blindly changing var to let)
STH: This isn't a language issue, it's a non-semantics preserving change.
DL: (on behalf of Andreas)
For Do-Expressions:
() => {} = () => do {}
DH: Doesn't hold b/c left is statement body, right is expression body, not equivalent.
AWB: (revisiting decisions about duplicate declarations in same contour)
AWB: Need to ensure that lexical declarations are disjoint sets; there are spec mechanics there.
STH: Proposing
RW: The refactoring hazard only exists for the one time the code is run after the change from var to let, and the refactorer is shown the early error and immediately knows to fix the bug. Removing these errors is unfortunate
YK: It's not clear what the program does when there is no early error.
RW: What is Sam's position?
STH: Why do we have these errors? What do we gain from them?
RW: Arguably, JavaScript could use some degree of "nannying" if it has positive results.
MM: No way to explain that function declaration initializes the parameter?
BE: It doesn't. Andreas just wants let to behave like var re: redeclaration
MM: Strict programming should be understandable in terms of lexical scope.
- Parameters and body are two scopes
- If explain as two scopes, can't unify.
- One scope
- Has to be an early error.
BE: Good argument, but not sure it depends on strict.
MM: In sloppy mode, functions are crap as well.
STH: He's just trying to explain the semantics of let, w/r/t block scope alone.
MM: A var-less strict program should be understandable in terms of lexical scope.
BE: var is a huge turd that should be recalled into some lesser demon's bowels.
- We want the error.
Conclusion/Resolution
- Status Quo
- DDWIDM: "Don't Do What I Didn't Mean"
4.7 Revisit Comprehension decision from last meeting.
(Allen Wirfs-Brock)
AWB: There are a lot of TC members and non-members concerned that this was not a wise decision and that we should revisit. Included link to Andy Wingo
RW: if I had been here at the last meeting I would've objected to the removal, but as I told Dave offline, I trust him and his plans for syntax unification. I just wanted to see progress in that regard.
BE: I want to say: I was the champion for years, but letting go. I want to see the comprehensions laziness addressed.
DH: I did this exercise, the sudoku solver in:
- pythonic
- linq style
- no comprehensions
JH: I'd like to know if there are objections still, to deferral
AWB: Objecting to the late removal of a complete feature.
RW: Same objection, reached independently, but again I trust Dave to see through the syntax unification.
DH: First, laziness is not a problem; you just need a way to construct a lazy sequence either from an eager value (array.lazy()) or from whole cloth (lazyRange(0, 1000)).
DH: Second, the fact that comprehensions only do a subset of operations you want means you end up mixing code styles (comprehensions + methods), and it gets syntactically awkward.
DH: When I did the exercise with three styles, I found the generalized comprehensions nicer but no comprehensions at all nicest.
BE: The affordance of generator expressions and comprehensions is that you don't have to write a call
DH: (Gives walk through of solver.linq.js, solver.pythonic.js)
- The exercise shows a need for new methods of iterators, flatMap, filter, etc.
DH: I said last time that we need an Iterator.prototype object and we agreed to defer since it probably wouldn't break code, but we forgot that hurts polyfills that want to patch the prototype with upcoming standard methods. So we should add the empty prototype object in ES6.
WH: In expressions such as foo.lazy().map(...function1...).every(...function2...), what shuts down (i.e. calls the return method of) the foo.lazy() generator?
DH: The call to every will shut down the generator if it reaches its decision early.
DD: The minimal Iterator.prototype is empty, but available. The long term is a constructor Iterator with blessed apis.
DH: Confirm
BE: An actual Iterator means duck typing isn't the preferred way; just create a real Iterator
MM: Using an actual functional style, the function names you're using are
BE: The oop style prevails, desired chaining API, adapter full of goodies.
Discussion of generators and observables
WH: How would you represent a 2-d comprehension like (for (x of xs) for (y of ys) if (x % y) x+y)?
xs.flatMap(x => ys.filter(y => x % y).map(y => x+y))
WH: OK. A bit less pretty than the comprehension in this case, but acceptable.
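The translation runs as-is with plain arrays today (Array.prototype.flatMap shipped later, in ES2019; at the time of these notes it was being proposed for iterators):

```javascript
// (for (x of xs) for (y of ys) if (x % y) x+y) as method chaining:
const xs = [4, 5];
const ys = [2, 3];
const pairsums = xs.flatMap(x => ys.filter(y => x % y).map(y => x + y));
pairsums; // [7, 7, 8]  (4+3, 5+2, 5+3; the x % y == 0 pairs are filtered out)
```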
MM: after seeing this code I will never use comprehensions
YK: raises arms in triumphant vindication
BE: who will own explaining to Andy Wingo and es-discuss?
DH: I will
BE: "You keep what you kill" - Richard P. Riddick
Conclusion/Resolution
- Add a prototype for iterators, but do not expose a global Iterator
constructor for ES6 (leave that for ES7)
- Between Object prototype and Generator prototype
- Initially empty, but accessible
- Comprehensions in general deferred to ES7
4.12 Revisit spread and destructuring of string
(Erik Arvidsson , Brendan Eich)
EA: We're using ToObject in spread and all other iterable forms. Should we do the same for destructuring?
- This would allow destructuring strings and other non-objects.
// Should allow:
let [first, ...rest] = "foo";
first; // "f"
rest; // ["o", "o"]
STH: ToObject breaks pattern matching because you couldn't match on a number.
YK: But we agreed to a future irrefutable matching, which would be the basis of pattern matching.
DH: Array vs. Object cannot have the same semantics here in what we want from pattern matching
- if I used an array
EA: Uses iterator
DH: Not even self-evident that pattern matching syntax will work in JS
YK: (to Sam) Do you think it should fail for strings to destructure?
More discussion of pattern matching.
DH, BE: match must mean a different pattern language, ships have sailed for destructuring and implicit ToObject
Conclusion/Resolution
- Destructuring does ToObject
4.5 Import-into-namespace syntax (Dave)
(Dave Herman)
request slides
DH: (recapping the last meeting and the findings of the breakout group; and the fall out)
DH:
Resolution
- Changed syntax for clarity
(need slides)
Module Context 1
-
Existing systems provide contextual metadata:
- module.id
- __filename
- __dirname
-
What is the dynamic analog of relative import?
import helper from "./helper";
Module Context 2
- no implicit namespace pollution, plz
- JS has a dedicated contextual variable:
this
- Solution: initial this binding is a context object
DD: How is this different from adding global variables, eg. Reflect
STH: The difference is that the value depends on where it is; unlike Reflect, which is the same thing.
DH: We should use this at the top level of a module
AWB: What does that mean? this at the top level of a module?
DH:
Module Context 2
- Relative import:
this.import("./helper").then(...);
- Space for host-specific contextual metadata:
this.filename
(This is where platforms can put its host properties and objects)
- Cross-talk about
eval
Reflect.global
BT: indirect eval?
DH: Will give you the global object
DD: Object to relying on this outside of a method
RW: Workers already do the above
MM: We can't even poison this for ES6
YK: if you say it's a module context, you have to say how it got that value
DH: No new scoping rules. This construct just implicitly binds something.
AWB:
import filename from this;
// which basically: import filename from here;
DD: Like this for relative
DH: Completely amenable to this
YK:
import * as me from here;
me.import; // `me` is the context object
Conclusion/Resolution
- the api is right direction
- each module gets its own version of that object
- need some sort of access to the module contextual object
- some sort of declarative form to get at
- static contextual information about the module
"Then, during the Third Reconciliation of the Last of the Meketrex Supplicants, they chose a new form for him, that of a giant Sloar! Many Shubs and Zulls knew what it was to be roasted in the depths of a Sloar that day, I can tell you!" ―Vinz Clortho[src]
July 31 2014 Meeting Notes
Brian Terlson (BT), Dmitry Lomov (DL), Waldemar Horwat (WH), Allen Wirfs-Brock (AWB), John Neumann (JN), Rick Waldron (RW), Eric Ferraiuolo (EF), Jafar Husain (JH), Jeff Morrison (JM), Mark Honenberg (MH), Caridy Patino (CP), Sebastian Markbage (SM), Istvan Sebestyen (IS), Erik Arvidsson (EA), Brendan Eich (BE), Mark Miller (MM), Sam Tobin-Hochstadt (STH), Domenic Denicola (DD), Peter Jensen (PJ), John McCutchan (JMC), Paul Leathers (PL), Eric Toth (ET), Abhijith Chatra (AC), Jaswanth Sreeram (JS), Yehuda Katz (YK), Dave Herman (DH), Ben Newman (BN)
Notes from secretariat
IS: ES6 delay accepted, but please don't delay this again.
- TC52 working in a similar way and process to TC39's ES7 approach: Frequent releases of incremental versions of standards. They also use the same kind of RF policy.
- TC52 is looking at how TC39 is proceeding
- TC52 are more polite
IETF and Internet Architecture Board liaison.
- JSON work and looking for liaison. We published Ecma-404 and they are publishing their standard and have asked for review/comment. Need to nominate someone as liaison.
ITU liaison.
- Using in JSON for communication standard
Meteor group has joined Ecma
JN: Recommend putting out a call for liaisons, including the information you have. List roles and expectations, we'll put it on the next meeting agenda to establish appointment.
AWB: There has been a notification for these roles.
JN: Is anyone here prepared to volunteer now? Or at the next meeting. Need someone to at least collect the communications out of those organizations.
Conclusion/Resolution
- John Neumann to stand in as liaison
9.1-8 Date and place of the next meeting(s)
JN: Need to fill in the venues.
DH: January 27-29, 2015 at Mozilla (Downtown SF, CA)
EF: March 24-26 2015 at Yahoo (Sunnyvale, CA)
JM: May 27-29 2015 at Facebook (Menlo Park, CA)
ET: November 17-19 at PayPal (San Jose, CA)
RW/YK: Will decide on Sept. 2015
Conclusion/Resolution
- John Neumann will update the agenda and schedule.
4.4 Follow up: Instantiation Reform (@@create)
JM: Found cases where we set up state before calling super. I'm convinced that there are sufficient workarounds (via calling <<SuperClass>>.call() in the legacy style).
gist.github.com/jeffmo/bf30e7154ab3c894b740 -- "#_Before.js" is an example of a pattern that exists now, "#_After.js" is an example of how one might fix this pattern
JM: (gives examples that amount to two step initialization)
WH: C++ doesn't allow this kind of bottom-up construction (can't initialize a subclass instance before the superclass instance is initialized), and use cases like this arise once in a while. The usual workaround is to pass through Options objects.
JM: (example of type ahead classes in FB code base -- see 2_Before.js and 3_Before.js in the above gist)
AWB: (draws model of two phase allocation)
YK: The problem is allocation vs. initialization; in your model it mutates before calling super.
- Need to make sure the allocation has happened before you get into the constructor.
- It looks like the only way to fix this is to have "two constructors", which we can't do
SB: We don't want to support this pattern, but there is nothing to stop user code from doing this.
AWB: "Fragile Base Class Problem"
- Start at derived class
- super'ed up to base class
- Base class invokes method that's defined on the subclass
- The problem is that the object isn't set up yet.
This is a bug.
AWB/YK: (Further discussion of how to avoid this pattern)
JM: Special case refactoring in subclasses isn't trivial.
- Both directions have downsides:
- TDZ approach negates certain cases
- Non-TDZ approach allows for decoupling of allocation/instantiation
YK: Lifting the TDZ doesn't solve the problem. It happens to work in this case because the base class doesn't allocate.
AWB: There is a way to do this in the new design: use the construct method
SM: Foresee a tooling solution (e.g. linting for properly placed calls to super())
AWB: You will always come to a place where a problem can't be solved with your existing inheritance model and you'll simply need to refactor. It's not that inheritance has failed, just that the class hierarchy needs to be refactored.
JM/SM: Refactoring is the correct approach, but it can be idealistic in some scenarios. Imagine a TypeaheadBase class that has been subclassed 100s of times. It's not until the 101st time that you realize you need to refactor the base class (and, thus, all pre-existing subclasses)
Discussion about subclasses that require two phase construction (with instance side initialization methods)
Mixed discussion about allocation phases.
MM: Do we have consensus on the instantiation reform we agreed to yesterday?
yes.
[JM agrees on the grounds that there are at least legacy-style workarounds for that 101st subclass and the rest of the patterns he found, a la <<SuperClass>>.call()]
YK: will not be ok with this solution if it switches on the instantiation of the constructor
...Need help covering this...
WH: I insisted on having a syntactic switch for function-vs-generator rather than switching on the presence of a yield. The reason was that functions and generators are radically different and it makes sense for a trivial generator to have no yields. In this case I'd mildly prefer to have a syntactic switch as well, but it's not as crucial because I haven't seen any good examples of where apparently problem-free code would unexpectedly go wrong. If you don't call the superclass constructor, you'll miss out on superclass initialization, which would be a problem even if the presence of a super call didn't statically switch modes, so the mode switch hasn't created a problem where there wasn't one. Inserting or deleting an "if (false) super()" does change things, but I don't see why one would be likely to do that. [I suppose that you could stylistically mark inheriting constructors whose super() calls are deeply buried with an "if (false) super()" at the top :).]
MM: I agree there is a smell with super; would I adopt syntactic marking? I want to see them first.
WH: On the fence, suppose we do this, what if you call super in a method not marked? What does that do?
DH: It's ok for class body to have more syntactic distinctions than outside functions acting as constructor
AWB: You could imagine that I have a splat that indicates "I do not want allocation for this class"
Conclusion/Resolution
- Agreement to MM proposal: Allen to be the champion and work out the details remaining
ES7 Items?
DH: We should focus on ES6 items, we have limited time in face to face.
DD: I don't think we should de-prioritize ES7, given the train model.
AWB: We have no choice but to prioritize ES6
5.7 Array.prototype.contains
(Domenic Denicola)
Follow up from RW's Stage 0 in
DD: Presents: domenic/Array.prototype.contains
DH/RW: Parity with String
RW: Most arguments were noise and Domenic's proposal addresses them all.
AWB: One of the previous objections: string.contains is looking for a substring, not an element.
EA: Same thing with indexOf
JM: Can you give an example where this is a problem?
MM: Consistently inconsistent parity with indexOf
Conclusion/Resolution
- Advance to Stage 1
DD: Would like to do Stage 2 and 3 asynchronously due to the simplicity of this proposal.
AWB: I'd like the process to be respected.
MM: If Stage 2, 3, and 4 are complete by next meeting, then we can advance to Stage 4.
Discussion re: the ES7 process.
AWB: Concerns about lack of review that results in more work later in the game.
MM: We don't have a mechanism to come to consensus outside of these meetings. These meetings are the time that we're able to work on these issues. This is my allocated time
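For context: this proposal eventually shipped, renamed Array.prototype.includes after `contains` proved not web-compatible. A sketch of the parity points discussed above, using the shipped name:

```javascript
// Array.prototype.includes uses SameValueZero, so unlike indexOf
// it can find NaN:
const hasNaNviaIndexOf = [1, 2, NaN].indexOf(NaN) !== -1;  // false
const hasNaNviaIncludes = [1, 2, NaN].includes(NaN);       // true

// String.prototype.includes searches for a *substring*, not an element --
// the "consistently inconsistent" parity AWB and MM note above:
const substringHit = "banana".includes("nan");             // true
```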
5.1 Math.TAU
(Brendan Eich, Rick Waldron)
gist.github.com/rwaldron/233fd8f5aa440c94e6e9
BE: Math.TAU = 2PI
WH: OK, but only if it's called Math.τ :-)
MM: Opposed: one letter shorter, not well known, not well taught. PI is known, taught and ubiquitous.
Conclusion/Resolution
- Rejected.
Exponentiation operator
(Rick Waldron)
gist.github.com/rwaldron/ebe0f4d2d267370be882
RW: All other languages have it. Why can't we?
RW: Needs higher precedence than multiplication
MM: Right associative?
RW: Yes, same as all other languages.
BE: **?
MM: Want to make sure that it does the same as the built-in %MathPow% and not any overload.
RW: Confirm
DH: Wants to point out that adding syntax does have a cost. But thinks it is fine and there is a lot of precedent.
Conclusion/Resolution
- Approved for Stage 0, in 6 minutes.
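The semantics agreed above (right associativity, binding tighter than multiplication, same result as Math.pow) are what eventually shipped in ES2016, and can be sketched:

```javascript
// ** is right-associative: 2 ** 3 ** 2 parses as 2 ** (3 ** 2)
const rightAssoc = 2 ** 3 ** 2;   // 512, not 64

// ** binds tighter than multiplication: 2 * 3 ** 2 parses as 2 * (3 ** 2)
const precedence = 2 * 3 ** 2;    // 18, not 36

// Same result as the built-in Math.pow, no operator overloading
const sameAsPow = (2 ** 10) === Math.pow(2, 10);  // true
```

One design note: the shipped grammar also makes an unparenthesized unary operand (e.g. `-2 ** 2`) a SyntaxError, forcing the author to write `(-2) ** 2` or `-(2 ** 2)`.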
Precision of Math trig functions
(Dave Herman)
DH: V8 made the result less exact in the name of performance. Are we going to have a "race to the bottom"?
DH: People are now starting to implement these algorithms in js since they cannot depend on the built ins.
DL: Java lies. Implementations do not follow the Java rules.
WH: Looked at the posted data and found it pretty compelling that the status quo free-for-all is not good. Some functions that should be monotonic aren't. Some results are significantly off.
WH: fdlibm should be the lower bar for precision. It almost always gets things exact or within 1 ulp.
DH: Talked to an expert (Dan Gohman) and he offered to come to this meeting; I told him he could wait and see how the conversation goes and maybe come in the future.
WH, DL: We need to invite experts to get the right answer to this.
MM: When doing the sputnik tests we just looked at the results and based the precision on what browsers did when the tests were written.
AWB: We need a champion.
DL: V8 is planning to fix this and make the results more accurate.
Conclusion/Resolution
- Need to bring in experts.
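The "within 1 ulp" criterion WH mentions can be made concrete with a small helper. (`ulpDistance` is a hypothetical name, not a proposed API; it only handles finite values of the same sign.)

```javascript
// Distance between two finite doubles of the same sign, in ulps (units in
// the last place), measured by reinterpreting their IEEE 754 bit patterns
// as 64-bit integers: adjacent doubles of the same sign differ by 1 there.
function ulpDistance(a, b) {
  const buf = new DataView(new ArrayBuffer(8));
  buf.setFloat64(0, a);
  const ia = buf.getBigUint64(0);
  buf.setFloat64(0, b);
  const ib = buf.getBigUint64(0);
  return ia > ib ? ia - ib : ib - ia;
}

// 1 and 1 + Number.EPSILON are adjacent doubles, so they are 1 ulp apart:
const oneUlp = ulpDistance(1, 1 + Number.EPSILON);  // 1n
```

An fdlibm-quality implementation would keep `ulpDistance(Math.sin(x), trueSin(x))` at 0 or 1 for all inputs; the complaint above is that some engines were much further off.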
On 5 Aug 2014, at 18:30, Rick Waldron <waldron.rick at gmail.com> wrote:
- Spread now works on strings
var codeUnits = [..."this is a string"]
The code example implies it results in an array of strings, one item for each UCS-2/UTF-16 code unit. Shouldn't this be symbols matching whole Unicode code points (matching StringIterator) instead, i.e. no separate items for each surrogate half?
On Wed, Aug 6, 2014 at 7:27 AM, Mathias Bynens <mathias at qiwi.be> wrote:
On 5 Aug 2014, at 18:30, Rick Waldron <waldron.rick at gmail.com> wrote:
- Spread now works on strings
var codeUnits = [..."this is a string"]
The code example implies it results in an array of strings, one item for each UCS-2/UTF-16 code unit. Shouldn't this be symbols matching whole Unicode code points (matching StringIterator) instead, i.e. no separate items for each surrogate half?
Spread is done using iterators so the StringIterator will be used.
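That answer (spread consumes the string's iterator, which yields whole code points) can be demonstrated:

```javascript
// The string iterator (String.prototype[Symbol.iterator]) yields whole
// code points, so spread never splits a surrogate pair.
const units = "a\u{1F601}".split("");  // code units: ["a", "\uD83D", "\uDE01"]
const points = [..."a\u{1F601}"];      // code points: ["a", "\u{1F601}"]
```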
I would like to register a gripe. I have been unhappy with my interactions with TC39 since things started happening more in F2F meetings and private mails than online. I wish that people would do things more on es-discuss. As an example:
On Tue 05 Aug 2014 18:30, Rick Waldron <waldron.rick at gmail.com> writes:
Short discussion about making generator.return() throw a special
exception.
DH: Want to bring up Andy Wingo's preference (discussed on es-discuss) for modeling return() as an exception rather than a return.
General opposition.
Conclusion/resolution
- keep as is: return() method produces a return control flow, not an exception
No reason, no response on the list to the salient points, no response from Dave Herman; too bad.
Yield *
AWB: Does an internal throw. When generator.throw() is called and the generator has a yield*, the spec currently calls throw() in the yield* expression.
DH: Call return() on the outer generator; it delegates calling return() to the delegated yield*.
This kind of proposal is particularly egregious as a way-past-the-last-minute semantic change that was proposed only at a F2F meeting without any other possible input, even from generators "champions" (if I am that any more). The discussion has people participating that never post on the list.
Note that in this particular case the semantics of a return() are fully specified with the old agreed semantics if return() is implemented as an exception.
Grumpily yours,
Andy
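For reference, the semantics the meeting settled on, and that ES6 shipped: return() produces a return completion at the paused yield (running finally blocks) rather than throwing an exception. A sketch:

```javascript
let cleanedUp = false;

function* gen() {
  try {
    yield 1;
    yield 2;
  } finally {
    // return() resumes the generator as if a `return` occurred at the
    // paused yield, so finally blocks run -- but nothing is thrown.
    cleanedUp = true;
  }
}

const g = gen();
g.next();                // { value: 1, done: false }
const r = g.return(42);  // { value: 42, done: true } -- a return, not a throw
```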
Andy I don't have answer for your second concern but please consider the work Rick is doing every meeting is unbelievably amazing and I think it might happen that he couldn't manage to put every single word down.
I am just saying that I'd rather not assume that consensus was reached without even thinking/discussing it but surely makes sense to ask more here about such consensus.
Best
On Fri 08 Aug 2014 15:00, Andrea Giammarchi <andrea.giammarchi at gmail.com> writes:
Andy I don't have answer for your second concern but please consider the work Rick is doing every meeting is unbelievably amazing and I think it might happen that he couldn't manage to put every single word down.
My gripe is not with the meeting notes, which are excellent and truly appreciated.
Andy
Hi,
I would like to register a gripe. I have been unhappy with my interactions with TC39 since things started happening more in F2F meetings and private mails than online. I wish that people would do things more on es-discuss.
I sympathize.
I have also noticed that solid technical arguments posted to es-discuss are often ignored by TC39 members and feature champions.
[off-list]
It's even worse for modules (my interest area). At the latest F2F, there was a breakout session on modules, and certain members decided it would be best to not even publish the notes from that session.
oops : )
I'm sorry for what looks to you like black-holing. I admit Rick's notes are terse but that shouldn't be taken for "brusque", and where they need supplementation, I try to post followups to es-discuss.
However, in this case I don't think the charge is fair to TC39 in full. I'm a long-time champion (probably erstwhile at this point) of generators, and I did reply to es-discuss about the history and rationale for forcing a return instead of using an exception.
I think Dave lacks time to keep up with es-discuss, but you can always reach out to him by email. Anyway, you don't need him here, you've heard from at least Allen and me on this list.
Brendan Eich wrote:
However, in this case I don't think the charge is fair to TC39 in full. I'm a long-time champion (probably erstwhile at this point) of generators, and I did reply to es-discuss about the history and rationale for forcing a return instead of using an exception.
My message was held up by Postini spam quarantine, but it made it through to the mailman archive and to esdiscuss.org -- here's a link:
I hope you got it via mailman, but it's possible it was spam-trapped at your end.
July 24 2012 Meeting Notes
Present: Yehuda Katz (YK), Luke Hoban (LH), Rick Waldron (RW), Alex Russell (AR), Tom Van Cutsem (TVC), Bill Ticehurst (BT), Brendan Eich (BE), Sam Tobin-Hochstadt (STH), Norbert Lindenberg (NL), Allen Wirfs-Brock (AWB), Doug Crockford (DC), John Neumann (JN), Oliver Hunt (OH), Erik Arvidsson (EA), Dave Herman (DH)
10:00-11:00am
Discussion of proposed agenda.
Determine participants required for specific subjects.
July agenda adopted
May minutes approved
4.1 AWB Presents changes resulting in latest Draft
- Draft-related bug filing: increased community participation, a good thing. Issue with numbers not matching duplicate filings, be aware.
- Quasi literals added to specification. Spec issues have arisen, will review.
- Initial work defining tail call semantics (still need to define tail positions in 13.7). What defines a "tail call" in ES? Existing call forms need to be specified in how they relate to tail positions (call, apply, etc.)
STH: Important that call and apply be treated as tail calls
YK: and accessors
STH: Agree.
…discussion of examples
AWB: Differences between accessor calls as they apply to proxy call traps, not definitively identifiable at syntax level. The function call operator and the call trap.
TVC: Proxy trap calls currently can never be in a tail position (except "apply" and "construct" traps)
STH: call should be in tail position. Clarification of known call site syntax, per spec.
Summary: Anything that could invoke user written code in a tail position to act as a tail call.
call, apply, accessors, quasi (interpolation), proxy calls
a start by DH on harmony:proper_tail_calls which uses an attribute grammar, but the current spec draft leaves this blank.
Filed: ecmascript#590
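The tail/non-tail distinction under discussion can be sketched as follows. (Note that proper tail calls ended up shipping in only some engines, so the constant-stack guarantee cannot be relied on portably; the functions below are illustrative.)

```javascript
// Non-tail position: the multiply happens after the recursive call
// returns, so each invocation needs its own stack frame.
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1);          // NOT a tail call
}

// Tail position: the recursive call's result is returned directly.
// Under proper tail calls (ES6 strict mode), this reuses the frame
// and runs in constant stack space.
function factorialAcc(n, acc = 1) {
  if (n <= 1) return acc;
  return factorialAcc(n - 1, n * acc);  // tail call
}
```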
4.5 RegEx "Web Reality"
(strawman:match_web_reality_spec)
Introduction to discussion by Luke Hoban
LH: Attempted to write a guide to make regex specification match current implementation wherein order of production matters. See 15.10.1 Patterns in above link.
…Gives specific examples from 15.10.1
Discussion between AWB and LH re: semantic annotations and redefinition.
YK: Do non-web implementations match current spec or web reality?
AR: Are there any non-web implementations?
YK: Rhino?
BE: matches reality because based on SpiderMonkey circa 1998
Test cases? Yes.
BT: Yes, cases exist in Chakra
LH: (Refers to examples)
NL: Do these affect unicode? We had agreement at previous meeting that web reality changes would not be applied in Unicode mode (/re/u).
LH: This is what regex is in reality… Waldemar did not want to specify because it's too hard to specify, but now the work is done
AWB: Too hard is not an excuse to not specify, good that the work is now done.
Discussion of "\u" in existing regex - \ug or \u{12} is interpreted, but differently than planned for Unicode mode
Trailing /u flag?
Makes grammar more complicated to have \u{...} only if /u flag used.
AWB: Three things to address: Web reality, Unicode support, new extensions
LH: /u the only way to opt-in to Unicode escapes with curlies, with Unicode extensions.
NL: need to reserve backslash with character for new escapes in the future, e.g. \p for Unicode character properties
OH: Fairly substantial regex in wild all created with RegExp constructor.
YK: Moving forward: Evangelize using Unicode and tacking "/u" onto all new regex?
BE, OH, AR: yes.
Decision: LH and NL to collaborate on integrated proposal
4.7 Adding forEach to Map and Set
harmony:simple_maps_and_sets
Deferred, got to it on third day
4.9 getClassNameOf
BE: Recap, last meeting there was discussion about getting a strawman from YK
YK: I began specifying, but existing questions prevented
BE: some want to solve not only the typeof null problem, but also "array"
YK: What is the usecase for Object.isObject
DC: Polymorphic interface
AWB: "has properties"
RW: Similar to isNaN: isNull that is only for null
OH:(Reiterates that we cannot change typeof)
AWB: what is it about host (exotic) objects that need to be differentiated from native (ordinary) objects?
YK: Reclarification about things that are not objects (in the [object Object] sense) that say they are.
AWB: If we go down this path, can anyone redefine the return value
YK: My question is: either always return object Object, or let anyone change to return anything
AWB: Rephrase as "extending toString()". Removing [[Class]] from spec, but now as [[NativeBrand]]. The default: exactly as they are today. in ES6, if this property is defined, then use it, if not, use default.
Mixed discussion of real world uses of: Object.prototype.toString.call(o)
BE: 1JS Killed typeof null
BE, OH: Like the idea of a configurable property to define explicit value of brand
YK: why is what "toString" returns so important?
AR: 2 things:
Summary
There is worry that changes to spec that affect the return of toString will have adverse impact on existing libraries and users when they encounter new runtime behaviours where the existing behaviour is expected.
Belief that we need a more flexible mechanism, whether it is AWB's configurable property that defaults when not explicitly set, or AR et al trait type test proposal.
BE, AWB: nominal type tests considered an anti-pattern per Smalltalk, but they happen in JS not only "because they can" -- sometimes because of built-ins you need to know
6 Internationalization Standard
Norbert Lindenberg: (Introduction and opening discussion)
Discussion, re: contributors
6.1 Last call for feedback before final draft
Function length values? Using ES5 section 15 rules would cause respecified functions like String.prototype.localeCompare to have larger length values; using ES6 rules would let them keep old values.
Leads into larger discussion about Function length property.
Decision: Apply ES6 rules to all functions in Internationalization API.
Numbering system, number formatting system. Would like to reference Unicode Technical Standard 35.
Outstanding issue:
If you have 4 different impls, 3 of them support a language that you want to support, how can you polyfill the 4th to support the language.
Can the constructor be re-declared?
Conclusion: There is no easy way currently, second version of Intl spec will address this.
Conformance tests being written for test262.
NL will have the final draft prepared for September meeting, but will produce drafts leading up to that meeting.
6.2 Microsoft and Google are implementing prototypes
Unicode support
AWB:
within curlies: any unicode code point value \u{nnn} so essentially three ways within string literal:
BT: Treating curlies as utf32 value?
AWB: Curlies contain a code point value, which you could call utf32.
DH: old-style escapes always are a single utf16 code unit, so always .length 1; new-style escapes always are a single Unicode code point, so may have .length 2
NL: "<<some stupid emoji>>" = "\u{1F601}" = "\uD83D\uDE01" = "\u{D83D}\u{DE01}"
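NL's equivalence for string literals can be checked directly:

```javascript
// In string literals, all three spellings denote the same
// two-code-unit string:
const a = "\u{1F601}";         // new-style escape: one code point
const b = "\uD83D\uDE01";      // old-style escapes: the two surrogates
const c = "\u{D83D}\u{DE01}";  // curly escapes of the individual surrogates
// a === b === c, and .length is 2 (code units) for all of them
```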
AWB: one point of controversy: what happens with utf16 escape sequences within identifiers
var <<wacky identifier>> = 12 -- is that a valid identifier?
var \u{<<wacky identifier code point>>} = 12 -- is that a valid identifier?
NL: and, for example, what if it goes in an eval?
DH: careful! difference between:
eval("var <<emoji>> = 6")
eval("var \uD83D\uDE01 = 6")
eval("var \\uD83D\\uDE01 = 6")
AWB: disallowed:
var \uD83D\uDE01 = 6
eval("var \\uD83D\\uDE01 = 6")
allowed:
var \u{1F601} = 6
eval("var \\u{1F601} = 6")
DH: any reason to allow those?
YK: sometimes tools taking Unicode identifiers from other languages and translating to JS
DC: we have an opportunity to do this right; \u{...} is the right way to think of things
DH: we have eval in the language, so the language thinks of strings as UTF16 and should have a correspondence in the concept of programs
LH: there's just no strong argument for this inconsistency
DH: there's no real practical value for disallowing; there is potential harm for the inconsistency in causing confusion in an already-complicated space
DC: the only real value here is for attackers; no normal code uses this
BE: and maybe code generators
LH: it's just removing an inconsistency that could be a gotcha
LH: there isn't a codePointLength -- is that intentional?
AWB: since strings are immutable could be precomputed
DH: which is why you want it to be provided by the engine, so it can optimize (precompute, cache, whatever)
DH: should it be a function, to signal to programmer that it has a potential cost?
AR: but no other length is a function
DH: fair enough, just spitballing
AWB: what about code point iteration from end to beginning? and also codePointIndexOf? don't have those yet
4.1 (cont) Processing full Unicode Source Code
String Value
Conversion of the input program to code point sequence outside of standard
Trad. \uxxxx escapes represent a single char, creates a single BMP character, 16bit element
Issue: in string values, ?? (Etherpad is broken) === \u{1F601} === \uD83D\uDE01 === \u{D83D}\u{DE01}. In identifiers, ?? === \u{1F601} !== \uD83D\uDE01 !== \u{D83D}\u{DE01}. Inconsistency that's hard to explain to developers.
DC: This feature is more likely to be used by hackers than developers.
AWB: Two APIs
String.fromCodePoint (build string from integer values)
String.prototype.codePointAt
What's here, valid surrogate pair?
DH: Mixing the API levels is problematic, should it be scrapped?
…The problem in naming is the "At"
…If we're going to build code point abstractions, we really need a new data type.
NL: ICU has iterators for grapheme clusters, words, sentences, lines – all based on UTF-16 indices. Abstractions don't require different indices.
Need more here.
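The level-mixing DH objects to in these two APIs can be seen directly: the methods speak code points, but indexing is still code-unit based.

```javascript
const s = String.fromCodePoint(0x1F601);  // one code point, two code units

s.length;          // 2 -- length counts UTF-16 code units
s.codePointAt(0);  // 0x1F601 -- reads the whole surrogate pair
s.codePointAt(1);  // 0xDE01 -- the index is still a code-unit index,
                   // so pointing into the pair yields the lone trail surrogate
s.charCodeAt(0);   // 0xD83D -- the lone lead surrogate, code-unit API
```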
4.13 Destructuring Issues
A. Patterns discussion on es-discuss
Issue: ToObject() on the RHS? This is currently specified and enables things like: let {concat, slice} = "";
This equivalence is desirable and maintained by the current spec: let { foo } = { bar: 42 }
let foo = { bar: 42 }.foo;
A syntax for pattern matching against objects match({ bar: 42 }) { case { foo } { console.log("foo") } default { console.log("no foo") } }
let { ?foo } = {}
let ?foo = {}.foo // wtf
DH: Pure WAT. Let's pick the most common case and address that. You cannot presume to cover everyone's pet case
What is the right thing to do.
DH: Future pattern matching
LH: Reiteration of correct matching vs intention
More discussion, defer until AR is present
let { toString: num2str } = 42;
let num2str = (42).toString;
Consensus without AR is to impute undefined for missing property when destructuring, and if we add pattern matching, use different rules for patterns compared to their destructuring meaning.
BE talked to AR at dinner on day 2, thinks he heard this and may have agreed (to avoid breaking consensus). Need to confirm.
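Both behaviors discussed in this section (ToObject on the RHS, and imputing undefined for missing properties) are what ES6 ultimately shipped, and can be sketched:

```javascript
// ToObject on the RHS: destructuring a primitive string picks up
// properties through its wrapper, just as property access would.
const { slice, length } = "hello";
// slice === String.prototype.slice, length === 5

// Missing properties impute undefined rather than throwing, preserving
// the equivalence with `let foo = ({ bar: 42 }).foo;`
const { foo } = { bar: 42 };  // foo === undefined
```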
B. Defaults
Explicit undefined value triggers use of default value initializer.
let foo = (x = 5) => x;
foo(undefined) // returns undefined by current draft
foo() // returns 5 by current draft
Issue: is this desirable? dherman and others think an explicit undefined should trigger use of the default value. Use case in support:
function setLevel(newLevel = 0) { light.intensity = newLevel; }
function setOptions(options) {
  setLevel(options.dimmerLevel); // missing prop returns undefined, should use default
  setMotorSpeed(options.speed);
  ...
}
Note same rules are used for both formal parameter default values and destructuring default values.
let foo = (…x) => x.length;
foo(undefined) // 1
foo() // 0
Need summary. Decision: change spec to make undefined trigger use of the default value.
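The decision above is the behavior ES6 shipped: an explicit undefined argument triggers the default, while other falsy values do not.

```javascript
// A minimal sketch of the agreed default-value semantics:
function setLevel(newLevel = 0) {
  return newLevel;
}

setLevel();           // 0 -- missing argument uses the default
setLevel(undefined);  // 0 -- explicit undefined also triggers the default
setLevel(null);       // null -- null does NOT trigger the default
```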
C. Unresolved issues related to iterator naming/access
spread works on array-likes; destructuring has a rest pattern
import iterator from "@iter"
function list(x) { return iterator in x ? [ y for y of x ] : x; }
[a, …] = list(jQuery(selector)); [a, …] = list([…]); [a, …] = list(function *() { … });
f.call(f, …args) same as f.apply(f, args);
Summary:
(DH) iterator is a unique name -- can't be public because iterable test not confined to for-of RHS
Destructuring and spread - no iterator protocol. (return to existing draft semantics of arraylike — [Cannot be both iterable and array-like])
Array.from:
Array.from should… (this is a change to current specification)
(Filed: ecmascript#588)
Continued...