Es4-discuss Digest, Vol 2, Issue 7
See earlier thread.
The problem is that allowing people to return a string (override toJSONString) will very quickly lead to invalid JSON. I thought it was agreed that it was better to rely on toJSON instead?
Let's get to the bottom of this quickly.
The current proposal is based on long-standing JS code from json.org.
It delegates toJSONString, just as JS has always delegated toString,
allowing objects to serialize themselves fully, erroneously, or
however they please.
The counter-proposal avoids delegating serialization at all,
consolidating it in native code for speed (modulo toJSON costs) and
reliability. If an object or class wants to specialize serialization
to filter certain enumerable properties, it has to implement toJSON
and create/hashcons an object with just the right enumerable properties.
The prototype-based delegation in JS for toString has not been a
source of bugs AFAIK, but toString results are not serialized to
protocol peers who deserialize -- so who knows? It's not a critical
issue. With JSON, it is. The inherently well-formed serialization of
the counter-proposal is its selling point. So let's argue about that.
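For concreteness, here is what specializing serialization looks like under the counter-proposal (an illustrative ES3-style sketch, not text from the proposal itself):

    function Point(x, y) {
        this.x = x;
        this.y = y;
        this.dist = Math.sqrt(x * x + y * y);  // derived; don't serialize
    }
    // Return a plain object carrying only the enumerable properties that
    // should appear in the output. The native serializer then walks that
    // result, so the emitted JSON is well-formed no matter what toJSON does.
    Point.prototype.toJSON = function () {
        return { x: this.x, y: this.y };
    };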
I for one would love to see the ability to set DontEnum on properties. This is a very logical thing to do with some properties and is very useful in code interoperability even when you are not trying to achieve a pure "Hash". It doesn't seem like a very big addition either (vs sharps).

Also, having functions/methods as slots that can be swapped with values seems like a fundamental concept of ES and its simplicity and consistency. It would seem bizarre to me to set toString to a primitive and expect anything but an error when coercion tried to convert the object to string. ES3 throws an error, and it seems totally logical that it should.

Also, it seems weird that we use the term "Hash" -- isn't Map or Dictionary more correct? Isn't a Hash just the most natural and efficient implementation of a Map or Dictionary? Maybe I am just used to Java terminology. Of course we know what each other is talking about, so it is not really a big deal.

Kris
On May 2, 2007, at 12:37 PM, Kris Zyp wrote:
I for one would love to see the ability to set DontEnum on
properties. This is a very logical thing to do with some properties
and is very useful in code interoperability even when you are not
trying to achieve a pure "Hash". It doesn't seem like a very big
addition either (vs sharps).
Everyone has read developer.mozilla.org/es4/proposals/enumerability.html by now, right?
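(For anyone who hasn't: the problem is easy to see in ES3, where script cannot set DontEnum, so every property it adds is enumerable. Illustrative:)

    // Extending Object.prototype for interoperability...
    Object.prototype.clone = function () { /* ... */ };
    var point = { x: 1, y: 2 };
    // ...pollutes every for-in loop in the page:
    for (var key in point) {
        print(key);  // "x", "y", and also "clone"
    }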
Also, having functions/methods as slots that can be swapped with
values seems like a fundamental concept of ES and its simplicity
and consistency. It would seem bizarre to me to set toString to a
primitive and expect anything but an error when coercion tried to
convert the object to string. ES3 throws an error, and it seems
totally logical that it should.
Agreed.
Also, it seems weird that we use the term "Hash", isn't Map or
Dictionary more correct? Isn't a Hash just the most natural and
efficient implementation of a Map or Dictionary? Maybe I am just
used to Java terminology. Of course we know what each other is
talking about, so it is not really a big deal.
Hash is just shorter, and conventional among Perl and (I think)
Python and probably Ruby folks. We wouldn't necessarily use it; the
late-breaking attempt to add a class with no helpful syntax (the
attempt that foundered in part on concern about weak reference
support being implied) was called Dictionary.
(Wow, I didn't know anyone read the digest.)
On May 2, 2007, at 2:37 PM, Kris Zyp wrote:
Also, having functions/methods as slots that can be swapped with
values seems like a fundamental concept of ES and its simplicity
and consistency. It would seem bizarre to me to set toString to a
primitive and expect anything but an error when coercion tried to
convert the object to string. ES3 throws an error, and it seems
totally logical that it should.
I agree with this. My example is willfully boneheaded for the sake of
brevity. Key safety is an issue when arbitrary keys are set -- for
instance, the situation described here [1].
Peter's point is illuminating, I think -- we're really just looking
for a better UI for Hash.set and Hash.get. ES3's object/hash duality
is convenient, but I'm willing to store my keys and values some other
way if it means that I can get key safety. Introducing a new
operator (like "#.") instead of Hash.set/Hash.get would minimize the
pain of giving up this duality, especially if said operator can be
overridden.
[1] www.andrewdupont.net/2006/05/18/javascript-associative-arrays-considered-harmful/#comment
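To spell out the key-safety hazard with the object/hash duality (an illustrative sketch; Hash.set/Hash.get above are shorthand, not a settled API):

    var cache = {};
    // Reads can hit inherited properties instead of stored entries:
    cache["hasOwnProperty"];  // a function from Object.prototype, not "missing"
    // Writes can shadow methods other code depends on:
    cache["hasOwnProperty"] = 1;  // now cache.hasOwnProperty(...) throws
    // Key-safe access under the duality takes an explicit dance:
    function get(table, key) {
        return Object.prototype.hasOwnProperty.call(table, key)
            ? table[key] : undefined;
    }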
Doug Crockford made some statements about TG1 members and motives.
The speculations about motives were made by others on this list, not by me.
In answer to a question from Brent Ashley, I said that the working group was not in consensus, a fact which would have been apparent to any keen observer at the San Francisco Ajax Experience when you and I were sitting side-by-side arguing about the wisdom of the proposal.
I pointed out that ecmascript.org and the white paper are not official ECMA publications. The working group did not vote on either. I recommended that everyone read the white paper, and if they liked it, they should say so. And if they didn't, they should say that too. My intention was to start a debate. My advice was that people read it before taking sides. I wish more people were taking that advice. The proposal is not a done deal. It has serious consequences that should be discussed.
Doug, I appreciated your comments at the end of the debate about people reading the white-paper. It was very courteous. That said, you did imply that the WG was split, by mentioning two members in support and two opposed.
Your comments to me after the presentation about back-compatibility were interesting, and I think any concerns about whether implementations can actually achieve back-compatibility (for instance, vis-à-vis the browsers' DOM implementations) are the most interesting part of the debate, and the part getting very little play. In effect, if ES4 implementations are successfully able to run ES3 code, the major concern that people have is fairly moot. So if there are factual reasons to suspect that ES4 implementations will fail to successfully support ES3, we should be talking about them, front and center.
Thanks again!
On 10/30/07, Douglas Crockford <douglas at crockford.com> wrote:
It has serious consequences that should be discussed.
Well, it looks like there are two years of meeting minutes on the wiki. Do the details of these consequences appear anywhere in the minutes? If not, now is the time to get very specific, or get out of the way.
There's a difference between meeting minutes of the WG and the discussions that need to occur now in the development community. If ES4 is to succeed, a chunk of the development community needs to get behind it (and at least not be actively opposed to it). The inside baseball that's been occurring for the last two years is not particularly useful to the development community (no, I am not going back to read two years' worth of minutes). It would be extremely helpful to get a coherent set of the concerns about ES4 (Doug's "consequences") so that interested parties outside of the WG can get a sense of what's going on technically. I would specifically like to hear a realistic technical scenario where the implementation of ES4 produces serious complications in the open web.
On Oct 30, 2007, at 3:09 PM, Douglas Crockford wrote:
Doug Crockford made some statements about TG1 members and motives.
The speculations about motives were made by others on this list,
not by me.
Yeah, you have said little on this list, but somewhat more at that
panel at TAE Boston. I wasn't there, but multiple people who were
(some on this list) attributed statements or words to you. Perhaps
they misquoted.
- The anonymous slashdot firehose post, which said "At The Ajax
Experience conference, it was announced that an ECMAScript4 white
paper had been released. The implication being that the white paper
was the upcoming spec, which is untrue. Not to mention this is not an
official ECMA site, but a site run by only some of the members from
the ECMAScript4 group. These facts were later revealed by another
concerned ECMAScript4 member."
John Resig confirmed that the "concerned ECMAScript4 member" was you.
- Kris Gray: "Crockford - Adobe, Mozilla, Yahoo and Microsoft. The first t[w]o support it, the later two oppose it."
- Ric Johnson: "However, Doug did strike a chord when he said 'the ghost of Netscape'".
Was Ric quoting you accurately, and if so, what did you mean?
- Ric Johnson again: "There are A LOT of accusations of backroom
deals being made"
Whatever your exact words, what you did actually say raised the
question of motive. People want to know the "why" when told the scary
"what". Is there some back-room deal? Is there misconduct according
to Ecma rules? And so on. Naming two (of four or six) companies
working on ES4 is bound to cause speculation.
But for the record, I have no reliable eyewitness quoting you as
saying "motive". The word may never have crossed your lips.
However, from the above, it seems clear that you did publicly call
out Adobe and Mozilla as potentially misrepresenting ES4's status in
Ecma, or the lack of consensus in TG1, or both. You apparently
mentioned Opera too, but somehow left the distinct impression with
multiple listeners that TG1 was split down the middle: Adobe and
Mozilla vs. Microsoft and Yahoo!. These impressions immediately beg
questions of motive. I think you can see that.
Thank you all for keeping this thread alive, despite all the flamz,
I still am missing something. If someone could somehow prove that ES4 is flawed, do we actually have a chance to fix it before it is burned over my childhood memories? How do we do that? Do I need to draw up a diagram comparing similar systems, show failure rates mathematically, and then mail it to Al Gore?
The problem with this debate is that I could argue both sides: I work as a .Net dev by day (mostly), and I promote Open Source in my free time. I use IE6, IE7, Firefox 2, Opera and Safari.
Doug is correct: as a product manager, it is bad to add more features: it increases our risks with reduced ROI
Brendan is also correct: if the working group voted and the current proposal won, it is better to have a stronger, more secure language. Sure, they can argue it is bloated, but SO WHAT?
I know! I know! What if I set up some sort of webpage (with Ajax!) that asks people to VOTE on the current spec? What would be the rules?
- Validating identity to prevent double votes
  a) Classify employer and prove they did not force your vote
- Do we need technical questions to verify understanding?
- Should there be a line item veto?
The idea is silly, but I wonder: who would honor it?
I believe I heard quite a few declarations from Mozilla/Adobe/Opera here. I would like to hear a statement from Yahoo and Microsoft.
Doug: If 99% of technical users said they wanted ALL of the features, would you accept it? What is your threshold? One more thing: Can you make a statement to the effect that Yahoo does not have any 'deal' with Microsoft to hinder this process? I am sorry to point to you, but I really do not expect an answer from MS, considering the past.
To put another face on it, it seems to me we are being unfair. If Opera put a new feature in their browser, I assume most people would just believe it was their business. BUT if Microsoft decides to put a direct interpreter for .NET in IE8 and CALL it JavaScript (or JScript), we would ALL cry foul. Is ES4 the new OOXML? In effect, some major players are getting together to force a standard that the others do not agree on. Then again, if Microsoft simply did not implement JavaScript2, would the other vendors be happy?
As a software engineer, I welcome most of the proposed changes. BUT, as a web developer, I feel I am forced to take Doug's view: even if there was only ONE change between browsers, I would have to have special code to handle ALL cases. And the sad thing is that it WILL happen. I am willing to bet up to $100,000 - any takers?
If you think the Ajax libraries are complex now, just wait. And if anyone is interested in the web, but does nothing about this now, we will all lose.
SHAMELESS PLUG: We are all talking about the 'Open Web'; I feel so strongly about this, I donated OpenAjax.Org to the Alliance in order to promote the largest collaboration in history. I was overjoyed when Microsoft AND Google joined. Now I realize they can not solve all the problems in the current context. If ANYONE can implement the OpenAjax hub with all libraries and a back end infrastructure to support developers, they have my permission to use OpenAjax.Com for FREE as part of OpenDomain. Any takers?
Ric Johnson
- Ric Johnson: "However, Doug did strike a chord when he said 'the ghost of Netscape'".
Was Ric quoting you accurately, and if so, what did you mean?
I stand by this one.
- Ric Johnson again: "There are A LOT of accusations of backroom deals being made"
However, it was NOT Doug who claimed any backroom deals. I have it written in my notes in several places, but do not know from whom. I was sufficiently disturbed by the claim that I highlighted it to make sure I posted it. Can anyone claim this quote?
I am not a professional reporter. I am just a knurd who blogs about my passions.
On Oct 30, 2007, at 3:59 PM, Yehuda Katz wrote:
I would specifically like to hear a realistic technical scenario
where the implementation of ES4 produces serious complications in
the open web.
Me too. Here's one analysis of how ES4 might break existing scripts:
- New keywords and syntax hanging off them. Already addressed by not
opening the ES4 namespace without opt-in version selection, e.g.
by <script type="application/ecmascript;version=4">. See the
versioning proposal at
- Intentional incompatibilities. Today we knocked down three proposed
such changes:
bugs.ecmascript.org/ticket/241, bugs.ecmascript.org/ticket/250, bugs.ecmascript.org/ticket/251
- Unintended incompatibilities: these have to be addressed by testing
the reference implementation, other implementations, and the spec language (run by one's brain). This is an auditing procedure, QA.
Often coverage is a problem, but we have had good experiences with
fuzz-testing JS and intend to fuzz the RI.
Hope this helps.
Brendan is also correct: if the working group voted and the current proposal won, it is better to have a stronger, more secure language. Sure, they can argue it is bloated, but SO WHAT?
The proposal is not a more secure language. It does nothing to address ECMAScript's biggest design flaw: the insecurity caused by its dependence on a global object. XSS attacks are a direct consequence of this flaw. By making the language more complex, this problem becomes even harder to reason about and fix.
I have been bringing this up since my first day in the working group. This is not a concern that is being sprung at the last minute.
The working group hasn't voted. The proposal has not won. We have agreed to disagree, developing two competing proposals in the same working group. I am pursuing with Microsoft a counter proposal for a simpler, reliable remedy to real problems. My position isn't that JavaScript doesn't need fixing. I think we need to be more selective in how we fix it. Bloat, in my view, is not good design.
Doug, What specifically would you do in ES3+ to improve this situation?
Douglas,
Thank you! I asked for a clear message, and you gave me one!
Don't we need the global object to be compatible?
Can you point to a suggestion of what use mere mortals may do to influence the proposals? (Other than this on this list)
We may all agree that 'bloat' is bad. We might not all say that ES4 is bloated. Do you want to use EcmaScript.Net as a place for your counter proposal? (Yet another OpenDomain - when I am passionate about an idea, I go all the way.)
Politics is the art of compromise: let us DIFF your proposal with ES4 and negotiate adding at least half the new features. Can you agree to that? Will Microsoft put up a bond they will deliver the same on a fixed date? What would be in that pool of features you WOULD accept? What would be the features Mozilla would accept dropping?
Thank you for your reply, but I see we missed one important question: Can you please make a statement to the effect that, to your knowledge Yahoo and Microsoft do not have a 'deal' to impede Javascript2 based on financial terms?
Thanks again Ric
On 10/31/2007, "Brendan Eich" <brendan at mozilla.org> wrote:
On Oct 30, 2007, at 3:59 PM, Yehuda Katz wrote:
I would specifically like to hear a realistic technical scenario where the implementation of ES4 produces serious complications in the open web.
Me too. Here's one analysis of how ES4 might break existing scripts:
- New keywords and syntax hanging off them. Already addressed by not opening the ES4 namespace without opt-in version selection, e.g. by <script type="application/ecmascript;version=4">. See the versioning proposal at
Brendan,
I know that a version string may help us avoid incompatibilities; however, I am also cognizant that this may serve the purpose of fragmenting the language. Look at what happened with VBScript. If we open the door to the version string, then Microsoft could add their own interpretation. I went through the pain of trying this with earlier JavaScript, and it never worked. It is now standard practice just to use the <script> tag 'plain'.
Ric
On Oct 30, 2007, at 5:03 PM, Douglas Crockford wrote:
Brendan is also correct: if the working group voted and the current proposal won, it is better to have a stronger, more secure language. Sure, they can argue it is bloated, but SO WHAT?
The proposal is not a more secure language. It does nothing to
address ECMAScript's biggest design flaw: the insecurity caused
by its dependence on a global object.
Yes, ES4 is backward-compatible, so it still has a mutable global
object. Mutable prototypes are a hazard too, but required by backward
compatibility.
Breaking the global object's mutability without opt-in from new code
would break the web, so I'm surprised you are complaining about it.
We've seen each other in TG1 meetings since July 2006, but no one
including you has come up with a proposal to solve this problem
compatibly, except by doing exactly what we have done: adding
features in ES4 that developers can use to control visibility and
mutation. The very features you oppose.
ES4 provides const, fixed typename bindings, lexical scope (let),
program units, and optional static type checking -- all of which do
make ES4 code easier to analyze and instrument to enforce security
properties.
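One small, illustrative example of the kind of reasoning this buys (runnable ES3; the let behavior is per the proposal as prototyped in Firefox):

    function f() {
        if (false) {
            var leaked = 1;  // never runs, but still declares the name
        }
        return typeof leaked;  // "undefined", not a ReferenceError:
                               // var hoists to function scope
    }
    print(f());
    // With 'let', the binding stays inside the block, so a human or a
    // static analysis can reason locally about where a name is visible.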
XSS attacks are a direct consequence of this flaw. By making the
language more complex, this problem becomes even harder to reason
about and fix.
No, ES4 with const, fixed typename bindings, lexical scope, program
units, etc. is easier to reason about than ES3, not harder.
ES4 is easier to reason about automatically. It's ironic that you
invoke reason. Reasoning about code should not be done by humans
only. Reason => logic => proof => proof system <=> type system, as I
wrote at ajaxian.com in a comment recently.
Your own words invoke the Curry-Howard correspondence, which requires
a type system that supports static analysis, as well as unambiguous
name binding. ES4 directly supports type checking and unambiguous
name binding. Yet you seem to oppose all of ES4, especially the type
system. That seems contradictory.
I have been bringing this up since my first day in the working
group. This is not a concern that is being sprung at the last minute.
I remember the 2006 July meeting well, and the minutes are posted here:
I don't think you suggested anything not recorded in some way there.
If you mean that the item under "Yahoo!s position" about security
promises motivating Yahoo! to entertain a new and incompatible
language, that claim led nowhere. Mainly because unlike Yahoo!,
browser vendors and web developers in general will not tolerate
incompatibilities for "Security" -- but also (more important) because
no one has a convincing solution for capital-S Security, other than
adding features to ES3 that users can choose to control mutability
and more conveniently and efficiently hide names.
The eval discussion recorded in those minutes did lead to a decision
by the group not to support a sandbox optional argument to eval. It
also apparently led to that es3.1 String.prototype.eval idea, which
you have not proposed for ES4. Too bad, but perhaps we'll be able to
include it anyway.
The working group hasn't voted. The proposal has not won. We have
agreed to disagree, developing two competing proposals in the same
working group. I am pursuing with Microsoft a counter proposal for
a simpler, reliable remedy to real problems. My position isn't that
JavaScript doesn't need fixing. I think we need to be more
selective in how we fix it. Bloat, in my view, is not good design.
Who could argue with that?
Since backward compatibility matters a lot to us, both to conserve
working code so some of it can be gradually migrated to ES4, and to
minimize code footprint for a common ES3/ES4 implementation on small
devices (you've heard both of these points since the July 2006
meeting), we have indeed made some additive changes -- 'let' in
addition to 'var', for example. We've even prototyped some of these
in Firefox, to prove they didn't break anything (with opt-in
keywords) and did not exact a big code footprint cost (they did not).
This approach shows serious concern for compatibility, but in spite
of it, we still get "break the web" FUD. Yet our "compatibility
through opt-in additions" approach also makes it all too easy for
someone arguing casually to accuse ES4 of being bloated. Catch-22.
Taking the high road here does nothing but pat yourself on the back,
and for work not visible to anyone except you and Microsoft (and some
Ajax developers who got advance copies? That "ES4 will break the web"
covert blog-spin campaign leaked, BTW). What good does it do to
posture against bloat when your alternative proposal is hidden? I'm
assuming there is an alternative proposal beyond some JScript documents.
Look, I can take a different high road and mimic your arguing style:
"Everyone familiar with the uses of closures in JS (for information
hiding), something you've championed, who has run into closure-
induced leaks and the larger object population costs of too many
closures, will say your proposal is the opposite of bloated: stunted,
insufficient, inefficient."
Doesn't feel very good to be on the receiving end, does it? I didn't
have to do any work to demonstrate my claims. I didn't even have to
state problems precisely.
It would be great if you could start, after over a year of
involvement in TG1, and a while in es4-discuss, to communicate
better. I'm glad to have as many words as you've sent today, even
with the bloat remark, given how few specific and actionable words
you've uttered to TG1. But the bloat accusations and superior-
sounding claims to be selective need to stop. Everyone working on TG1
has been selective by some criteria. No one wants bloat. For us to
get anywhere and avoid a fork, this kind of rhetoric has to stop.
On 31/10/2007, Ric Johnson <Cedric at opendomain.org> wrote:
I still am missing something. If someone could somehow prove that ES4 is flawed, do we actually have a chance to fix it before it is burned over my childhood memories? How do we do that? Do I need to draw up a diagram comparing similar systems, show failure rates mathematically, and then mail it to Al Gore?
There's features broken and there's language broken. I'd argue that the ES3 regex model is broken, that ES4 could and should fix it*, and that it won't hurt live code to do so but will help make regex in ES make sense. But on the other hand, ES3 has an unambiguous spec on this point, and Opera and Mozilla both seem to implement it as specified. Not fixing it right now would not really hurt the language; the language isn't broken by this feature being broken, except for making it harder to fix in the future (and possibly harder to add new features such as conditionals and lookbehinds). So the language is not broken, only less useful than it could be.
I've yet to see any single argument with technical merit that the language is broken - I've seen broken features, mostly broken since ES3 that can't be fixed without breaking backwards compatibility, but nothing that breaks the language. The argument about ES4 additions size > ES3 size is the only one that seems to have any merit
whatsoever, and that only in the way of being somewhat limiting on limited power devices such as mobile phones, where the compiler and vm size must be kept low.
To put another face on it, it seems to me we are being unfair. If Opera put a new feature in their browser, I assume most people would just believe it was their business. BUT if Microsoft decides to put a direct interpreter for .NET in IE8 and CALL it JavaScript (or JScript), we would ALL cry foul. Is ES4 the new OOXML? In effect, some major players are getting together to force a standard that the others do not agree on. Then again, if Microsoft simply did not implement JavaScript2, would the other vendors be happy?
I think there'd be considerably more happiness if Microsoft provided a .NET based ECMAScript 4 compliant JScript version in IE8/9, whether based on JScript.NET or as a DLR runtime, than if they didn't provide an ECMAScript 4 engine at all by IE9. Even a COM/IDispatchEx version would be welcomed over not having an ECMAScript 4 engine at all. Sure, we have an effort to get a Tamarin version to hook into IE by COM/IDispatchEx. Who will distribute it (pretty sure Microsoft will not), can it reach enough users to matter? Can the distributor be trusted in corporate environments? Can it be compatible enough with legacy JScript to be used as an all-out replacement?
Actually, I'd be more comfortable with an ECMAScript 4 engine based on .NET in IE than I would be with even a Microsoft backed Tamarin engine. I think the more interoperable implementations on different virtual machines the better, and ECMAScript 4 on .NET opens up for single language client and server side code, which I think many developers would like. (Most of them just don't want the single language to be JavaScript...)
<offtopic>
I've had the argument before; the idea is simply summed up as:
- Backreferences to capturing submatches that have not participated or that failed to match should fail. (In ES3 these succeed at matching the empty string.)
- A capturing submatch should be reset only the next time it participates in a match. (Instead of each time the containing group is repeated, as ES3 states.)
- Backreferences to repeated submatches must always reference the value of the submatch from the last time it participated and matched. (IE/JScript always uses the first match, even if there are later matches.)

One example trick that doesn't work in ES3 but works in other regex implementations is this pattern used for conditionals:

(?=(a)()|())\1?b(?:\2c|\3d)

However, this fails in the ES3 model since all of \1, \2 and \3 will always match the empty string, instead of either both \1 and \2 failing or \3 failing.
</offtopic>
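The ES3 behavior is simple to observe (illustrative; runs in any ES3 engine):

    // (a) does not participate when the input is "b", yet the
    // backreference \1 still succeeds, matching the empty string:
    print(/(a)?b\1/.test("b"));  // true under ES3 semantics
    // Under the proposed model \1 would fail here, which is what makes
    // conditional tricks like the pattern above expressible.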
On Oct 30, 2007, at 5:36 PM, Ric Johnson wrote:
On 10/31/2007, "Brendan Eich" <brendan at mozilla.org> wrote:
Brendan,
I know that a version string may help us avoid incompatibilities;
however, I am also cognizant that this may serve the purpose of fragmenting the language. Look at what happened with VBScript.
VBScript is full of lessons to avoid, but don't borrow trouble.
Opt-in versioning for ES4 is required only to use new keywords that
were not reserved by ES3, or that, even if listed among ES3's future
reserved keywords, were not reserved by IE.
If we open the door to the version string, then Microsoft could add their own interpretation.
The standard for ES4, as proposed at that versioning wiki page above,
mandates what implementations must do when interpreting certain
values of the version parameter named by RFC 4329.
We really can't stop anyone from adding interpretations allowed by
the RFC, unless we mandate that no other values are recognized unless
codified by future Editions of ECMAScript. Would that address your
concern? Even that kind of injunction is just words on paper.
Microsoft or anyone else could ignore it. I mean, it's not as if
JScript does not deviate from ES3 in obvious (and easily fixed) ways.
I went through the pain of trying this with earlier JavaScript, and it never worked. It is now standard practice just to use the <script> tag 'plain'.
Sure, but you have now argued in a circle. If the script tag handler,
upon seeing <script> (no type or version selected), invokes the
ECMAScript implementation so that it understands new ES4 keywords,
then the browser behaves incompatibly from today's browsers, and
you've just broken the web.
Douglas Crockford wrote:
The proposal is not a more secure language. It does nothing to address ECMAScript's biggest design flaw: the insecurity caused by its dependence on a global object. XSS attacks are a direct consequence of this flaw. By making the language more complex, this problem becomes even harder to reason about and fix.
I have been bringing this up since my first day in the working group. This is not a concern that is being sprung at the last minute.
It is true that you brought this up on the first day you came to the meetings. I said then, and I say again now, that I think we will actually be putting ES4 programmers in a better position wrt. safety than ES3 programmers.
I wonder if the following two basic facts about the ES4 environment are clear. I've brought them up in the past, but you usually dismiss them rather than saying why exactly they don't help:

- Fixtures in ES4 are early-bindable, non-overridable and (in cases of methods, types, classes, and other sorts of constants) non-replaceable. Intercepting control from a script that uses fixture names by mutating some part of the global object or prototypes is not possible in this design.

- 'use namespace intrinsic' at the top of an ES3 file, then publishing it under the ES4 MIME type, should be enough to accomplish this in many cases. All the multiname references that would previously hit the prototype chains and/or global.foo bindings become fixture references that XSS code cannot intercept. Even if they don't early-bind due to the absence of type annotations on slots, the multinames wait around until runtime and the fixtures mask the public names / prototype chains.

Do they not help with some of the XSS scenarios? I imagine that they would, but I have nowhere near the same depth of experience with XSS problems.
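For anyone without the ES4 background, the ES3 hazard being closed here looks like this (illustrative):

    // Script that ran earlier in the page can intercept later callers
    // by mutating prototypes or the global object:
    var stolen = [];
    Array.prototype.join = function (sep) {
        stolen.push(this);  // capture the data
        return "";
    };
    // An innocent script elsewhere on the page:
    var msg = ["user", "secret"].join(":");  // silently captured above
    // Fixture references (or 'use namespace intrinsic') cannot be
    // intercepted this way, because they never consult the mutated
    // prototype chain or global bindings.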
On 10/31/2007, "Brendan Eich" <brendan at mozilla.org> wrote:
Sure, but you have now argued in a circle. If the script tag handler, upon seeing <script> (no type or version selected), invokes the ECMAScript implementation so that it understands new ES4 keywords, then the browser behaves incompatibly from today's browsers, and you've just broken the web.
I did not say I had a solution - just that, to me, adding a version would accomplish Doug's goal of 'just name it another language'
This is what I KNOW:
- Mozilla/Opera/Adobe will get it right - or die.
- There will be bugs in the implementation.
- There will be different interpretations of the spec.
- Time is the killer: ES4 needs to be out NOW to help Ajax compete with richer solutions, but it has to be mature. Also, different vendors will have different schedules.
- Microsoft HAS contrived to obstruct similar proceedings in the past.
This is my opinion: Douglas is a nice guy, and really smart. I assume he is just passionate about JavaScript and is not participating in any conspiracy. I am a bit overwhelmed by the new spec, but I assume you know what you are doing. I do not work for Microsoft, but use their tools on a daily basis. It is in my interest to make a true 'Open Web' that includes them. We can not force them to implement any specific feature.
Here is my critical point: the technical discussion is MOOT. Microsoft plays this game way too well. The ONLY way to win is not to play: we need to make JavaScript2 so good that they have no choice but to copy it. We need EVERY other vendor on board to make it the same across the board, and we need to release it all together on the same day with several working implementations and tutorials. We need OpenAjax to solve the problems before ES4 gets here and to provide a migration path.
We need a master plan to achieve success against Microsoft's Stonewall defense: may I suggest a fianchetto of one of the bishops?
Thanks for reading yet another lengthy post! (No I do not get paid by the word!)
Ric
P.S. Allen Wirfs-Brock from Microsoft did make an important post on my blog: openajax.com/blog/2007/10/29/god-bless-scoble/#comments
On Oct 30, 2007, at 7:02 PM, Ric Johnson wrote:
On 10/31/2007, "Brendan Eich" <brendan at mozilla.org> wrote:
Sure, but you have now argued in a circle. If the script tag handler, upon seeing <script> (no type or version selected), invokes the ECMAScript implementation so that it understands new ES4 keywords, then the browser behaves incompatibly from today's browsers, and you've just broken the web.
I did not say I had a solution - just that, to me, adding a version
would accomplish Doug's goal of 'just name it another language'
No, I'm sorry -- "another language" and a "new version of an existing
language" are not the same thing. Or do you think ES3 is another
language from ES2, which came before it? ES3 added new syntax not in
ES2 (likewise ES2 to ES1).
The world has been through JS evolution before. We'll get through
another round.
On Oct 30, 2007, at 6:45 PM, Ric Johnson wrote:
Doug is correct: as a product manager, it is bad to add more
features: it increases our risks with reduced ROI
I sympathize with the concern behind this statement, but I would
argue its analysis is incomplete on the following grounds:
- Programming language design is very much different from consumer product design.
- The risks of adding features always need to be carefully balanced against the risk of not adding them.
Programming language design is historical in focus; stick to what has
worked in the past, avoid things that have not. Innovation in
language design is glacial compared to consumer product design.
Languages that experiment in features without historical precedent do
so at their own risk. By comparison, consumer product design seeks
to specifically do things that have not been done before, and to
avoid that which is well-known and "mundane".
Brendan has mentioned this, but it's worth repeating: there is
nothing "new" in the ES4 design. All of the features mentioned have
appeared previously in well-known languages. The features have been
fully researched, implemented and deployed in production systems, and
withstood the test of time.
I'm a product guy, not a PLT guy, so when I first started reading the
ES4 proposals, I was just as worried about scope and feature creep.
Having now spent a year or so absorbing the content and learning
this || much about PLT (picture two fingers held close together),
it's sunk in that the proposal is not as big as it looks.
(And this doesn't account for the fact that ES4 will be the most
precise specification of ECMAScript yet, with a reference
implementation no less!)
When concerned about "bloat", there are specific problems one worries about:

- backward compatibility: ES4 is backward compatible, choosing only to add to the language. If one is still scared by this, then one must consider: what number of features would be "safe"? If you have an API with 5 methods and you add 10 more, but you're worried about backward compatibility, what's the difference between 1 new method and 10 new methods? Why is the greater number of methods inherently a bigger risk to backward compatibility?

- size of implementation: while I consider this a valid fear, I must also point out that several of the TG1 members are incredibly experienced in building JS interpreters for embedded systems. And those members are in favor of ES4. I, JS hacker, cannot claim knowledge enough to undermine their domain expertise. If other experts in the field were concerned about this, that would be another story. But thus far I have not seen any such objectors.

- time to completion: again, I must defer to the expert language implementors of TG1.
The only remaining claim to featuritis is that developers will find
the new language unwieldy and hard to use. Valid concern, but how
can any developer claim this without actually having tried the
language? As I said above, you can find all these features in other
well-known languages, and they've worked well. Historical precedent
is on the side of ES4 on this one.
On a more personal note: I've spent the past 3 years building a JS
application from scratch; I've watched the application grow from
programming-in-the-small to programming-in-the-large, growing
alongside a startup whose product has grown successful and accumulated
users and features at a rapid pace. I used to scoff at strong
data-typing -- who needs it when you've got good tests? These days I'm
not laughing. Every feature in ES4 would be a great help in
scaling this application out. If asked to name a "least useful
feature," I wouldn't name any. The feature set feels right.
So is the large feature set of ES4 risky? I don't believe so, but
even more importantly, so what? We're drowning out here (though not
all of us realize it). Without these features, JS apps are doomed to
scale (of codebase) limitations that will fundamentally inhibit the
growth of web-based applications. The "play it safe" strategy is
penny-wise and pound-foolish. When I hear concerns that ES4 will
"break the web," I can't help but think of how many times I've heard
that the web is already broken! The risks of not adopting ES4
surely must factor into this calculus, too.
As far as I can see it, there are a couple of non-technical issues here that have to be resolved, but they all boil down to who is willing to implement what. Is there truly no way to compromise?
MS says that they think ES4 deserves to be a new language. Suppose that is true and ES4 is renamed, say ES++. Then would MS be willing to implement ES++ in their new browser? Or is the name truly a scapegoat for their misgivings about ES4 the language itself?
Or if you consider the feature set of ES4 to be too bloated, that ES4 is trying to tackle problems it was never meant to handle, what exactly do you dislike about ES4? What features would you want to cut out of the language? What features would the pro-ES4 people be willing to cut out? I know this will be hard, but let's ignore feature dependencies for the moment. Say, for example, that you dislike the class system but are fine with the type system (as nonsensical as that sounds). Perhaps you're fine with grafting a type system on top of the prototype system without classes. Or if you also dislike the type system, what do you propose instead? For encapsulation, you might propose sticking with closures. Now, how are you going to address the pitfalls of closures, namely memory usage and inefficiency? ES4 advocates may also be concerned that encapsulation via closures is plain ugly, while ES4 critics say that classes make the language inelegant. Perhaps there is another solution? Anyway, do you see where I'm going with this? There should be some room for compromise. I'd like to see some active discussion concerning these feature disagreements.
Lastly, all these accusations concerning intentions need to stop. Mozilla, this concerns you too - I find that saying you won't let big bully MS block ES4 is somewhat hypocritical, considering that the SVG 1.2 committee did something similar to you guys (make no mistake, I'm not accusing anyone, and I hope this is the last time SVG is mentioned on this mailing list). In any case, hypocritical or not, all these accusations are not constructive at all. Responding to accusations with more accusations is plain sophomoric in my book. Please, I get enough political bullshit from the news - I don't expect that here.
Thanks, Yuh-Ruey Chen
On Nov 2, 2007, at 2:40 PM, Yuh-Ruey Chen wrote:
Lastly, all these accusations concerning intentions need to stop.
In the absence of technical arguments, these are inevitable.
Mozilla, this concerns you too - I find that saying you won't let big bully MS block ES4 is somewhat hypocritical, considering that the SVG 1.2 committee did something similar to you guys (make no mistake, I'm not accusing anyone and I hope this is the last time SVG is
mentioned on this mailing list).
If you mean SVG Tiny 1.2 violating W3C process rules by skipping from
Working Draft to Candidate Recommendation without addressing all
received technical objections, there's no comparison. We working on
ES4 could use some specific technical objections from the Ecma
members who oppose ES4. I promise we wouldn't ignore any before
moving ES4 to the final stage of Ecma standardization (there is no CR
analogue in Ecma).
Hey, you brought this up -- no fair doing a drive-by about SVG while
I'm in earshot! :-)
In any case, hypocritical or not, all these accusations are not constructive at all. Responding to accusations
with more accusations is plain sophomoric in my book. Please, I get enough political bullshit from the news - I don't expect that here.
If these are inevitable because technical objections haven't been
made, that tells you something important. The politics are overriding
technical considerations. Act accordingly, but please, don't assume
there is a technical fix to a political problem. Forcing one by a
bartering process will only make a Franken-standard.
On Mon, 2008-02-25 at 10:15 +0000, Mike Cowlishaw wrote:
Currency calculations are not very interesting at all :-). But (outside HPC and specialized processors such as graphics cards) they are by far the most common.
Surely the Adobe fellows have something to say about this :). Flash tweening and 3D Flash libraries like Papervision3D make extensive use of floating point math. In an environment where tweaks like multiplying fixed points instead of dividing yields substantial performance increases, the "currency calculations" you speak of fade into insignificance.
That's exactly my point. Currency (and other human-related) calculations need to be done in decimal. Scientific calculations, graphics, etc., need to be done in the base that best serves the purpose -- which might be decimal if you need high precision and the binary hardware is low (double) precision, or it might be binary if you need performance above all else (in the short term).
For graphics/drawing calculations, where users are not exposed directly to the precise results, binary is just fine. But for scripting where the results of the calculations are seen as numbers, bar charts, etc., by users, decimal is necessary.
A case in point -- my PMGlobe app (www.cary.demon.co.uk/pmglobe) has to calculate several square roots for every pixel on every refresh.
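(For anyone who hasn't seen it, the standard demonstration of the user-visible problem, runnable in any ES3 engine:)

    print(0.1 + 0.2);          // 0.30000000000000004 with binary doubles
    print(0.1 + 0.2 === 0.3);  // false
    // Binary doubles cannot represent 0.1 exactly, so sums a person
    // would check by hand come out visibly "wrong"; a decimal type
    // yields the expected 0.3.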
On 05/03/2008, es4-discuss-request at mozilla.org < es4-discuss-request at mozilla.org> wrote:
Today's Topics:
- Re: ES4 draft: Vector (Jon Zeppieri)
- Feature positions (Waldemar Horwat)
- New Operator (TNO)
- ES4 draft: Object (Lars Hansen)
---------- Forwarded message ---------- From: "Jon Zeppieri" <jaz at bu.edu> To: es4-discuss at mozilla.org Date: Wed, 5 Mar 2008 16:04:29 -0500 Subject: Re: ES4 draft: Vector I just realized that my message from yesterday about the 'fixed' property (of the Vector class) was sent as a reply to the thread about the Map class... Sorry.
So, why a read/write, rather than a read-only, 'fixed' property?
-Jon
2008/3/3 Lars Hansen <lhansen at adobe.com>:
I enclose a slightly incomplete/rough draft for the Vector class. Please comment.
--lars
---------- Forwarded message ---------- From: Waldemar Horwat <waldemar at google.com> To: es4-discuss at mozilla.org Date: Wed, 05 Mar 2008 13:50:52 -0800 Subject: Feature positions The Google team sat down and over several long meetings reached consensus on our positions on the various ES4 features. I see that Mbedthis has done likewise.
spreadsheets.google.com/pub?key=pFIHldY_CkszsFxMkQOReAQ&gid=2
Waldemar
---------- Forwarded message ---------- From: TNO <thenewobjective at yahoo.com> To: es4-discuss at mozilla.org Date: Wed, 5 Mar 2008 14:13:04 -0800 (PST) Subject: New Operator Is it too late to propose an integer division operator for the spec? I do quite a bit of WSH programming in both VBScript and JScript, and sometimes it's a bit of an irritant during a translation. It would be nice to see this operator "\" available in the new ECMAScript instead of having to rely on more inefficient workarounds:
VBScript:

    Dim result
    result = 19 \ 4  '(result = 4)
Maybe a word operator like "div" would be better, since backslash as an operator would be ambiguous at the end of a physical line; many interpreters support bash/python-style line continuation.
JScript:
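A common workaround looks like this (illustrative):

    var result = Math.floor(19 / 4);  // 4
    // Note Math.floor only agrees with VBScript's \ for non-negative
    // operands: VBScript truncates toward zero, so -19 \ 4 is -4,
    // while Math.floor(-19 / 4) is -5.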
From: Jonathan Toland <jonahtoland at yahoo.com> To: es4-discuss at mozilla.org Date: Tue, 29 Apr 2008 18:34:18 -0700 (PDT) Subject: Re: Triple quoted strings i've been reading some past posts and wanted to add some supplemental input on this one and emphasize some key advantages to mirroring e4x templating syntax in triple quoted strings. first, having recently begun working in as3, i highly regard the progress the wg has made. es4 really has grown up into a very structured (yet still almost uniquely flexible, extensible, and dynamic) language in comparison to the tediousness of trying to scale up my js apps.
- i have often missed string interpolation working in js although i have
google-caja.googlecode.com/svn/changes/mikesamuel/string-interpolation-29-Jan-2008/trunk/src/NOT-FOR-TRUNK/interp/index.html outlines a way to bolt string interpolation onto existing interpreters while avoiding the XSS problems that usually accompany string interpolation.
It aliases eval, which is disallowed according to ES3, but that is fixable. I believe that this will break with the attenuated eval proposals, but if some form of string interpolation is a goal, then (1) it might provide a way to backport the feature onto legacy interpreters (2) the XSS-safety might be useful to language designers
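For readers who haven't followed the link, a minimal user-level approximation of interpolation (illustrative only; unlike the caja design it does no context-sensitive escaping, which is exactly the XSS-safety part):

    // Naive template substitution: replaces ${name} from a record.
    function interp(template, values) {
        return template.replace(/\$\{(\w+)\}/g, function (match, name) {
            return String(values[name]);
        });
    }
    print(interp("hello ${who}", { who: "world" }));  // "hello world"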
I want to know why ToNumber(undefined) is NaN while ToNumber(null) is 0.
So undefined == undefined is true and undefined == null is true, but undefined >= undefined, undefined <= undefined, undefined >= null and
undefined <= null are all false. All of this is caused by ToNumber(undefined) being NaN while ToNumber(null) is 0.
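Concretely (per ES3's equality and relational algorithms):

    undefined == null;        // true: a special case in ==, no ToNumber involved
    undefined >= undefined;   // false: NaN >= NaN
    null >= null;             // true: 0 >= 0
    undefined >= null;        // false: NaN >= 0
    // == has its own table of cases, but the relational operators fall
    // through to ToNumber, where undefined becomes NaN (every comparison
    // involving NaN is false) and null becomes 0.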
Precedent and developer conversations I've had strongly suggest that some code wants mutable primordials on the inside of a module that can be consumed without the mutations affecting the importer's primordials.
It would be really interesting to have module-local prototypes -- and not only the standard classes, but also for user-defined classes which originate from other modules.
Imagine, if you will, that modules add an extra layer in the scope chain which somehow intercepts and proxies prototypes, resolving prototype misses up to the "real" prototype for said class -- analogous to the way property misses are resolved in JavaScript.
moduleA.js:

    Array.prototype.a = "hello";
    exports.foo = [];

moduleB.js:

    Array.prototype.a = "world";
    exports.foo = [];

program.js:

    print(require("moduleA").foo.a, require("moduleB").foo.a);
    print(require("moduleA").foo instanceof Array, require("moduleB").foo instanceof Array);
    print(typeof Array.prototype.a);

output:

    hello world
    true true
    undefined
Is object identity the inverse of hash? So myObj == identity(hash(myObj))? Then weakly keyed or mapped tables are obviously possible, even though they'll have to be manually cleared of expired values.
Unfortunately, while this may support encoding weak mappings, I suspect a feature like that is used more often in frameworks than in application code, so there might be a big penalty. ActionScript 3 introduced a Dictionary object that performs weak mappings -- perhaps that can be used to find a concrete comparison point. I don't know how this is distinct from 'ephemeron' tables, so I might be missing something.
On Sat, Jul 3, 2010 at 10:27 PM, Leo Meyerovich <lmeyerov at gmail.com> wrote:
Is object identity the inverse of hash? So myObj == identity(hash(myObj))?
No. There is not and must not be an identity function which turns data into access. This would destroy the most important safety property of JavaScript, that object references are not forgeable.
Also, hashes can collide. So the most you get is the conventional:
- if (typeof x !== 'number' && x === y) then hash(x) === hash(y)
- if (hash(x) === hash(y)) then it is likely but not guaranteed that x === y.
Then weakly keyed or mapped tables are obviously possible, even though they'll have to be manually cleared of expired values.
Unfortunately, while this may support encoding weak mappings, I suspect a feature like that is used more often in frameworks than in application code, so there might be a big penalty. ActionScript 3 introduced a Dictionary object that performs weak mappings -- perhaps that can be used to find a concrete comparison point. I don't know how this is distinct from 'ephemeron' tables, so I might be missing something.
I don't know ActionScript or its Dictionaries.
On Jul 3, 2010, at 10:50 PM, Mark S. Miller wrote:
On Sat, Jul 3, 2010 at 10:27 PM, Leo Meyerovich <lmeyerov at gmail.com> wrote: Is object identity the inverse of hash? So myObj == identity(hash(myObj))?
No. There is not and must not be an identity function which turns data into access. This would destroy the most important safety property of JavaScript, that object references are not forgeable.
I'm with you there. Was surprised when I thought it was being proposed in that way!
Also, hashes can collide. So the most you get is the conventional:
if (typeof x !== 'number' && x === y) then hash(x) === hash(y)
if (hash(x) === hash(y)) then it is likely but not guaranteed that x === y.
"Object hashes" seems like a misnomer in this case, as it seems fine to require them not to collide. I don't know what happens in JS when you use more than 32 or 64 bits of references, but I'd expect a similar out-of-memory restriction for a for-free (hashless) implementation ;-) Anyway, that's a red herring, and I probably don't know enough about serialization, object persistence, etc. to discuss what a useful API would be here for the expected use cases in a helpful way.
Then weakly keyed or mapped tables are obviously possible, even though they'll have to be manually cleared of expired values.
Unfortunately, while this may support encoding weak mappings, I suspect a feature like that is used more often in frameworks than in application code, so there might be a big penalty. ActionScript 3 introduced a Dictionary object that performs weak mappings -- perhaps that can be used to find a concrete comparison point. I don't know how this is distinct from 'ephemeron' tables, so I might be missing something.
I don't know ActionScript or its Dictionaries.
www.adobe.com/livedocs/flash/9.0/ActionScriptLangRefV3/flash/utils/Dictionary.html
I no longer have asc installed, so I cannot directly check whether by 'weakKeys' they really meant weak keys or weak mappings (where both key & value are weak); the description reads both ways. It might be useful to analyze how it is used (or not) before suggesting a deviation from it. This is not to say weak dictionaries are good, but they are something that can be learned from.
FWIW, I believe ActionScript's use introduces non-determinism to the language: iterating over the dictionary can reveal an object that is only weakly referenced if the GC hasn't run yet. At this point, the JS semantics guys might want to chime in -- it simplifies requirements on GC implementation, but I don't know how it changes program analysis (e.g., an escape analysis would simply be conservative there). Just as introducing forgeable references breaks encapsulated reasoning, this might also interact oddly with some useful characterizations of JS. Unfortunately, the semantically clean alternative, forcing a reachability check, isn't decidable.
... and while it might be unclear from my tone, I've repeatedly hit pain points with weak references, so I'm for an encoding getting in. It's a big deal for proper frameworks & DSL design.
The dictionary approach was nice in that it supports weak references (a singleton collection) and provides object->value mapping in one small API, but from a semantics / design perspective, it's not very orthogonal. E.g., a weak reference option and an actual object->value dictionary can be used to create a weak dictionary with an equally (asymptotically, at least) sized heap.
On Jul 3, 2010, at 10:27 PM, Leo Meyerovich wrote:
Is object identity the inverse of hash?
No, merely the missing-from-the-standard-library egal function that you can write yourself. See
Mark answered on Object.hashcode already. Here's the old ES4 proposal:
It has the desired security properties, TC39 members agreed a couple of meetings ago. This helped break a logjam over adding unencapsulated Object.hashcode as a strawman (yet we still have no strawman page on it, just the ES4 page :-/).
A collection implementation will want an identity test to resolve collisions, but === isn't quite right. Wherefore egal, or let's call it Object.identity -- only if Object.hashcode is added too.
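For reference, egal is small enough to write inline (a sketch; it differs from === only for NaN and the two zeros):

    function egal(x, y) {
        if (x === y) {
            // +0 === -0, but they are not egal:
            // 1/+0 is Infinity while 1/-0 is -Infinity.
            return x !== 0 || 1 / x === 1 / y;
        }
        // NaN !== NaN, but NaN is egal to itself.
        return x !== x && y !== y;
    }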
On Jul 4, 2010, at 12:30 AM, Brendan Eich wrote:
On Jul 3, 2010, at 10:27 PM, Leo Meyerovich wrote:
Is object identity the inverse of hash?
No, merely the missing-from-the-standard-library egal function that you can write yourself. See
... then I still don't see how to do an encoding of weak references that isn't invasive (e.g., adding in a user-level GC that constructors are somehow guaranteed to go through). For a non-invasive approach, there might be probabilistic guarantees achieved by API restrictions (e.g., limiting the number of bad guesses of IDs for the otherwise forgeable ID->object function) or a creative use of lexically scoped regions, but I'm skeptical of anything complicated like that.
Did you have something in mind in how hashes (or some other feature) enable weak references / dictionaries? It didn't jump out at me and I want them :)
On Jul 4, 2010, at 12:46 AM, Leo Meyerovich wrote:
... then I still don't see how to do an encoding of weak references that isn't invasive (e.g., adding in a user-level GC that constructors are somehow guaranteed to go through). For a non-invasive approach, there might be probabilistic guarantees achieved by API restrictions (e.g., limiting the number of bad guesses of IDs for the otherwise forgeable ID->object function) or a creative use of lexically scoped regions, but I'm skeptical of anything complicated like that.
I don't see the connection. We're talking about Ephemeron (Tables) as primitives in the language. The GC definitely has to know about them. Likewise for hashcode: if the object's address is one-way hashed to the hashcode() result, but the GC moves the object, then the object will need to grow a field to store its invariant hashcode.
Did you have something in mind in how hashes (or some other feature) enable weak references / dictionaries? It didn't jump out at me and I want them :)
We're not looking to enable anyone to build weak structures in JS without help from a new primitive dedicated to that task, which also avoids leaking information about finalization, addresses, etc. Modulo naming, the Ephemeron Table proposal is that new and harmonious primitive.
We are considering adding Object.identity/hashcode to support user-created implementations of strong-key sets, maps, trees, etc. Those are useful too. Not every object-to-* use-case requires an EphemeronTable.
... then I still don't see how to do an encoding of weak references that isn't invasive (e.g., adding in a user-level GC that constructors are somehow guaranteed to go through). For a non-invasive approach, there might be probabilistic guarantees achieved by API restrictions (e.g., limiting the number of bad guesses of IDs for the otherwise forgeable ID->object function) or a creative use of lexically scoped regions, but I'm skeptical of anything complicated like that.
I don't see the connection. We're talking about Ephemeron (Tables) as primitives in the language. The GC definitely has to know about them. Likewise for hashcode: if the object's address is one-way hashed to the hashcode() result, but the GC moves the object, then the object will need to grow a field to store its invariant hashcode.
Ah, I was confused by the use of bootstrap in "Could we bootstrap Set, Map, and WeakMap and call it enough?"; I thought you meant a user-level encoding of these given language-level hashes etc.
Found the table proposal... Without iteration (and assuming no weak ptrs), one of my main use cases (data flow constructs) is still restricted. Given other proposals like object hashing, I'm not sure how much of a win these tables are. If the reason they're not there is non-determinism, weak ptrs would make this a non-issue. Furthermore, I suspect numerics already introduce non-determinism, and embeddings of JS push most programs towards it as well (browsers, servers, games), so purism would be a quixotic goal. Assuaging some concerns, iteration can be phrased as a capability (and thus disabled) and not made generic (preventing unintended encounters, such as keeping it from hitting generic for-in sugar).
On Jul 4, 2010, at 4:37 AM, Leo Meyerovich wrote:
Ah, I was confused by the use of bootstrap in "Could we bootstrap Set, Map, and WeakMap and call it enough?"; I thought you meant a user-level encoding of these given language-level hashes etc.
The strong maps could be efficiently built in "user code" with Object.hashcode/identity. The committee would have a relatively easy time getting those specified, and implementors would find them relatively easy to implement, compared to higher-level and more extensive APIs. Then library authors could start using them.
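A hedged sketch of the kind of strong-key map meant here, written against the then-proposed Object.hashcode/Object.identity (hypothetical primitives; neither is specified under these names):

var IdentityMap = function () {
  this.buckets = {}; // hash -> array of {key, value} pairs
};
IdentityMap.prototype.set = function (key, value) {
  var h = Object.hashcode(key); // hypothetical: stable per-object integer
  var bucket = this.buckets[h] || (this.buckets[h] = []);
  for (var i = 0; i < bucket.length; i++) {
    if (Object.identity(bucket[i].key, key)) { // hypothetical: egal-style identity
      bucket[i].value = value;
      return;
    }
  }
  bucket.push({ key: key, value: value }); // collision: chain within the bucket
};
IdentityMap.prototype.get = function (key) {
  var bucket = this.buckets[Object.hashcode(key)];
  if (bucket) {
    for (var i = 0; i < bucket.length; i++) {
      if (Object.identity(bucket[i].key, key)) return bucket[i].value;
    }
  }
  return undefined;
};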
The point is that TC39 does not want to invent library code if we can defer to the ecosystem and then de-jure-standardize what emerges as a good de-facto standard API.
But then JS users always (for many years, anyway) have to download some code. This is inevitable, although no excuse for not standardizing promptly: downrev browsers will require a download to patch in or around missing APIs; uprev could provide a built-in module. (Details of the module system -- simple modules in the wiki -- aim to address this issue.)
So when I wrote "bootstrap" I meant that TC39 might take a bit more risk de-jure-specifying Set and Map along with WeakMap, instead of just providing Object.{hashcode,identity} and waiting for evolution to do the rest.
There's a delicate dance between library authors and the core language maintainers. You can see this with bind in several libraries (sometimes called something else; API details vary) vs. ES5's Function.prototype.bind. We're still learning, by doing as much as studying what is out there.
Found the table proposal... Without iteration (and assuming no weak ptrs), one of my main use cases (data flow constructs) is still restricted. Given other proposals like object hashing, I'm not sure how much of a win these tables are. If the reason they're not there is non-determinism, weak ptrs would make this a non-issue.
Weak pointers have a leak hazard fixed by Ephemerons when you tie a cyclic knot among keys and values in a table (strawman:weak_references discusses this).
Weak pointers may also leak GC schedule and liveness -- very much "determinism", not ND -- by hooking post-mortem finalization. Thus the interest in Ephemerons.
Furthermore, I suspect numerics already introduce non-determinism, and embeddings of JS push most programs towards it as well (browsers, servers, games), so purism would be a quixotic goal.
I happen to agree at the limit, but the argument is not as absolutist as you might think. Mark argues that even with the ND legacies in the language, we should strive to avoid adding more. This is not a bad guideline as far as it goes.
Assuaging some concerns, iteration can be phrased as a capability (and thus disabled) and not made generic (preventing unintended encounters, such as keeping it from hitting generic for-in sugar).
Iteration based on harmony:proxies (strawman:iterators) is looking good, but ETs should not be enumerable at all.
First, we should agree on the memory bloat/leak hazard, along with the post-mortem finalization "leak", of weak ptrs being sufficient cause to want ETs instead.
On 2010-07-04, at 04:09, Brendan Eich wrote:
Likewise for hashcode: if the object's address is one-way hashed to the hashcode() result, but the GC moves the object, then the object will need to grow a field to store its invariant hashcode.
FWIW Dylan (and its ancestors) gets around this by having object-hash return 2 values: a hash code and a hash state. You can only compare hash codes associated with the 'same' state. The hash state is an opaque object with operations for comparing and accumulating (e.g., to compute the hash state of a table of objects).
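An illustrative sketch of that protocol (not a real API; __addressHash and __gcEpoch stand in for hypothetical engine primitives):

function objectHash(obj) {
  // A hash code is only meaningful together with the state it was taken under.
  return { code: __addressHash(obj), state: __gcEpoch() };
}
function sameHash(h1, h2) {
  if (h1.state !== h2.state) {
    throw new Error("hash codes from different hash states are not comparable");
  }
  return h1.code === h2.code;
}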
The argument against introducing non-determinism (or GC-dependent determinism) from maybe a month ago seems much stronger than that of covert channels. A claim that a reasonable implementation of a production language is free of covert channels is rather suspect. E.g., preventing covert communication through cache hierarchy properties (warm vs cold, full or not, ...) is no fun.
Mark's example is strong in that it (arguably) attacks realistic benign code to steal a secret; it doesn't just achieve communication between two pieces of malicious code. However, the existence of GC -- irrespective of weak maps -- already allows a similar attack pretty quickly when even rather restricted forms of mutable state are shared. This suggests that such channels are feasibly prevented only by extreme experts or in heavily restricted scenarios similar to message passing between isolated processes (frames, workers, ...), at which point skipping a typical and useful feature because of this particular use case loses significant steam. As weak maps seem to be a capability that can be disabled, this desired notion of language-level security can be encapsulated as a library, further weakening the argument. This might actually be a useful perspective for future controversial proposals, given the variety of desirable security models, analyses, etc. I could be wrong on the impact here and would welcome reasoning on that!
That said, going back to the beginning: deterministic GC-independent semantics are a Good Thing. Whether this matters seems to be a crucial discussion. Is there a concern for basic correctness for more mundane code? Are those use cases similar to the above where a 'strictness' library suffices? Again, I'd point to the ActionScript experience for a reference: can anybody attack or break a Flash program with weak maps, or is this FUD? I'm more concerned about the dangers of misuse in terms of memory leaks and event-oriented logic glitches between code running on different ES implementations (and I suspect the answer has a reasonable chance of being 'yes' for these).
On Sep 2, 2010, at 12:08 AM, Leo Meyerovich wrote:
That said, going back to the beginning: deterministic GC-independent semantics are a Good Thing. Whether this matters seems to be a crucial discussion. Is there a concern for basic correctness for more mundane code? Are those use cases similar to the above where a 'strictness' library suffices? Again, I'd point to the ActionScript experience for a reference: can anybody attack or break a Flash program with weak maps, or is this FUD? I'm more concerned about the dangers of misuse in terms of memory leaks and event-oriented logic glitches between code running on different ES implementations (and I suspect the answer has a reasonable chance of being 'yes' for these).
I tend to agree that the answer is 'yes', assuming the question was "Whether [deterministic GC-independent semantics] matters" ;-).
It's hard to make an airtight case for "security" even on a good day, but removing observability from the weak maps proposal makes it strictly simpler and less likely to cause any number of problems. I don't believe that lack of enumerability and .length cripples the API for the intended use cases of soft fields, membrane caches, and the like.
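For example, a membrane wrapper cache needs only identity-keyed get/set -- no enumeration -- per the proposal's API (makeWrapper below is a hypothetical factory standing in for the real membrane logic):

var wrapperCache = new WeakMap();
function wrap(target) {
  var w = wrapperCache.get(target);
  if (!w) {
    w = makeWrapper(target); // hypothetical: builds the wrapper for target
    wrapperCache.set(target, w); // entry dies with target; nothing to enumerate
  }
  return w;
}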
On Sep 2, 2010, at 12:29 AM, Brendan Eich wrote:
On Sep 2, 2010, at 12:08 AM, Leo Meyerovich wrote:
That said, going back to the beginning: deterministic GC-independent semantics are a Good Thing. Whether this matters seems to be a crucial discussion. Is there a concern for basic correctness for more mundane code? Are those use cases similar to the above where a 'strictness' library suffices? Again, I'd point to the ActionScript experience for a reference: can anybody attack or break a Flash program with weak maps, or is this FUD? I'm more concerned about the dangers of misuse in terms of memory leaks and event-oriented logic glitches between code running on different ES implementations (and I suspect the answer has a reasonable chance of being 'yes' for these).
I tend to agree that the answer is 'yes', assuming the question was "Whether [deterministic GC-independent semantics] matters" ;-).
Leaving how in question -- e.g., achievable with a simple library toggle -- and whether it always matters (such as for those who would want the enumeration ability and could arguably contain the non-determinism).
It's hard to make an airtight case for "security" even on a good day, but removing observability from the weak maps proposal makes it strictly simpler and less likely to cause any number of problems. I don't believe that lack of enumerability and .length cripples the API for the intended use cases of soft fields, membrane caches, and the like.
/be
There are use cases for enumeration like proper native GC in data flow abstractions, and it seems others have piped up with their own uses. Obviously, different people on this list have presented different use cases for weak maps, and it seems those two exact scenarios don't exhaust them. I'd be curious about further ones for enumeration (which I suspect surface in proper implementations of various event-oriented abstractions) and weak maps in general -- it may be we're the only ones who care and only those two scenarios are important to the committee members ;-) For example, the ability to trigger a purge might matter for reliability in some uses (which I'd suspect for certain transient resource pools), even if it's a no-op wrt JS semantics.
2010/9/2 Leo Meyerovich <lmeyerov at gmail.com>:
On Sep 2, 2010, at 12:29 AM, Brendan Eich wrote:
On Sep 2, 2010, at 12:08 AM, Leo Meyerovich wrote:
That said, going back to the beginning: deterministic GC-independent semantics are a Good Thing. Whether this matters seems to be a crucial discussion. Is there a concern for basic correctness for more mundane code? Are those use cases similar to the above where a 'strictness' library suffices? Again, I'd point to the ActionScript experience for a reference: can anybody attack or break a Flash program with weak maps, or is this FUD? I'm more concerned about the dangers of misuse in terms of memory leaks and event-oriented logic glitches between code running on different ES implementations (and I suspect the answer has a reasonable chance of being 'yes' for these).
I tend to agree that the answer is 'yes', assuming the question was "Whether [deterministic GC-independent semantics] matters" ;-).
Leaving how in question -- e.g., achievable with a simple library toggle -- and whether it always matters (such as for those who would want the enumeration ability and could arguably contain the non-determinism).
It's hard to make an airtight case for "security" even on a good day, but removing observability from the weak maps proposal makes it strictly simpler and less likely to cause any number of problems. I don't believe that lack of enumerability and .length cripples the API for the intended use cases of soft fields, membrane caches, and the like.
/be
There are use cases for enumeration like proper native GC in data flow abstractions, and it seems others have piped up with their own uses. Obviously, different people on this list have presented different use cases for weak maps, and it seems those two exact scenarios don't exhaust them. I'd be curious about further ones for enumeration (which I suspect surface in proper implementations of various event-oriented abstractions) and weak maps in general -- it may be we're the only ones who care and only those two scenarios are important to the committee members ;-) For example, the ability to trigger a purge might matter for reliability in some uses (which I'd suspect for certain transient resource pools), even if it's a no-op wrt JS semantics.
I would strongly oppose any way to trigger a full garbage collection from JavaScript. Experience from the Java world shows that it is inevitably abused with very serious performance consequences.
Note the mealy-mouthed description here, with words like "suggests" and "best effort": download.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#gc()
There is even a command line option on some VMs to disable this call and turn it into a no-op.
So please no.
I would strongly oppose any way to trigger a full garbage collection from JavaScript. Experience from the Java world shows that it is inevitably abused with very serious performance consequences.
Red herring, though JVM deployments seem like good examples of systems using advanced controls on memory behavior.
My point was that weak maps have been examined in terms of a few, seemingly haphazardly drawn use cases. There's an accepted desire for more advanced use of GC functionality, so I'm interested in a fuller description of the intended ends (e.g., use cases beyond soft fields, wrappers/membranes, and data flow / callback management). Pushing it through in terms of these few use cases is surprising for such a semantically significant feature -- I am not questioning the mechanics (which have been admirably done wrt safety) but the scope.
If nobody is clear about what else they're hoping to use it for, I'd rather not waste anyone's time on the expressive abilities here (and I see no obvious problems with the mechanics of the current proposal).
2010/9/2 Leo Meyerovich <lmeyerov at gmail.com>:
I would strongly oppose any way to trigger a full garbage collection from JavaScript. Experience from the Java world shows that it is inevitably abused with very serious performance consequences.
Red herring,
Perhaps I misunderstood. You wrote "For example, the ability to trigger a purge might matter". You can't do a purge of a weak map without a GC.
2010/9/2 Leo Meyerovich <lmeyerov at gmail.com>:
There are use cases for enumeration like proper native GC in data flow abstractions
Perhaps you could elaborate on this. I wasn't able to understand what you meant by this.
Btw. In case it isn't obvious I would rather avoid changes to the WeakMaps proposal that make the effects of GC visible to the programmer (beyond the obvious one of the program not crashing with an OOM error). This is bound to lead to incompatibilities and/or unwanted restrictions on the implementation of GC. Making the keys enumerable would be one such change.
On Sep 2, 2010, at 4:08 AM, Erik Corry wrote:
2010/9/2 Leo Meyerovich <lmeyerov at gmail.com>:
There are use cases for enumeration like proper native GC in data flow abstractions
Perhaps you could elaborate on this. I wasn't able to understand what you meant by this.
I mean a natural embedding of data flow constructs, as in www.cs.brown.edu/~greg/thesis.pdf, that eliminates many instances of spaghetti-like, non-composable callback graphs and replaces them with direct/linear/composable/self-managing code while co-existing with the rest of the language.
There's a lot of debatable language philosophy in that thesis, but distractions aside, it's essentially an expressive continuation of the data binding we're already seeing in many modern systems (e.g., Flex, LINQ, and Max/MSP have weaker forms). Even if you don't directly use such a language or library, you can view them as forming a basic pattern. This pattern comes up repeatedly, especially in the AJAX community: while I've worked on one such JS system, I've seen it repeatedly rediscovered/reinvented by others without any prior knowledge of it. This pattern is also used outside of the web community.
Btw. In case it isn't obvious I would rather avoid changes to the WeakMaps proposal that make the effects of GC visible to the programmer (beyond the obvious one of the program not crashing with an OOM error). This is bound to lead to incompatibilities and/or unwanted restrictions on the implementation of GC. Making the keys enumerable would be one such change.
The challenge is that the pattern, wrt proper GC, is to enumerate over a weak collection (e.g., map) of callbacks, firing them and then repeating the process for anything registered to those callbacks. If the end of a callback chain is used, the chain won't be collected. If the end of a chain of such propagation isn't used, the chain (back to the last used link) can be collected. Not doing so means a (significant) memory leak and likely performance loss.
These collection semantics can be encoded... but at that point we're likely 1) reimplementing garbage collection within JS and 2) forcing manual memory management on the programmer. A similar argument can be made about not really needing weak maps for membranes -- manual management could work, but it's a headache (and I suspect breaks modularity in the same sense as not having GC in general would).
Hope that made sense and I'm looking forward to any other use cases.
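To make the pattern concrete, here is a hedged sketch assuming a hypothetical EnumerableWeakMap with a forEach method -- precisely the capability the WeakMap proposal omits, which is the point under debate:

var signals = new EnumerableWeakMap(); // hypothetical: weak keys, but iterable

function propagateAll() {
  signals.forEach(function (source, callbacks) {
    for (var i = 0; i < callbacks.length; i++) {
      callbacks[i](source); // firing may feed anything registered downstream
    }
  });
  // Because keys are weak, a chain whose downstream end is no longer
  // referenced can be reclaimed by the engine. Without enumeration (or with
  // manual registries), user code must track and unregister chains itself --
  // reimplementing GC by hand, as described above.
}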
Although I illustrated the non-overt channel here as a covert channel in a possibly failed attempt at clarity, my real point and my real worry is about its use as a side channel. As a side channel, this ability to sense the collection of individual objects is much more informative than anything I know of in current JS. If anyone knows of any side channels that seem as bad, perhaps I've overlooked them, so please post. If they also seem unpluggable, then I would agree that my position here becomes pointless and we'd need to give up on resisting side channels as well.
Off the top of my head:
- Does ES5 fully freeze the global / root protos / etc. or allow a way to call a function without them? Taint can show up through here.
- Even with frozen APIs, do we acknowledge timing attacks? Out-of-memory, non-termination, and other platform exception tricks?
- Sniffing is already ubiquitous in terms of language / host API variants
Stepping back, I'm not even sure precisely why gc status classifies as a side channel. First, when passing references, defensive coders must already consider non-local properties to protect against memory leaks: gc state should, arguably, already be carefully thought about when dealing with components. Second, this seems somewhat close to a transient resource reification pattern where its use as explicit communication seems fine: you check against what the system has reclaimed, etc.**
Lest anyone misunderstand, I actually do agree that it's suspicious if you do not consider coding wrt gc as being natural. Making the intended threat model more precise would help me, at least. For example, it could answer why the ability to check whether an object has been collected is not an overt channel that is part of getting a reference: if you don't want somebody to check, don't give them the object.
** Having overridable object destructors is probably a cleaner alternative in most cases for this second use.
On Fri, Sep 3, 2010 at 6:04 PM, Leo Meyerovich <lmeyerov at gmail.com> wrote:
Although I illustrated the non-overt channel here as a covert channel in a possibly failed attempt at clarity, my real point and my real worry is about its use as a side channel. As a side channel, this ability to sense the collection of individual objects is much more informative than anything I know of in current JS. If anyone knows of any side channels that seem as bad, perhaps I've overlooked them, so please post. If they also seem unpluggable, then I would agree that my position here becomes pointless and we'd need to give up on resisting side channels as well.
Off the top of my head:
- Does ES5 fully freeze the global / root protos / etc. or allow a way to call a function without them? Taint can show up through here.
Of course not. But it does allow these to be frozen. SES makes use of that to freeze them (after cleaning them up) as the last step of its initialization: code.google.com/p/es-lab/source/browse/trunk/src/ses/initSES.js#557
- Even with frozen APIs, do we acknowledge timing attacks? Out-of-memory, non-termination, and other platform exception tricks?
Acknowledged.
- Sniffing is already ubiquitous in terms of language / host API variants
Sure. What issue does that raise?
More generally, ES mostly has what one might call sources of "static nondeterminism" <code.google.com/p/google-caja/wiki/SourcesOfNonDeterminism>. By "static" I mean ways implementations of the spec may differ from each other. But these only open a non-overt channel if they may differ dynamically among runs on the same platform in a way that can be influenced.
Stepping back, I'm not even sure precisely why gc status classifies as a side channel. First, when passing references, defensive coders must already consider non-local properties to protect against memory leaks: gc state should, arguably, already be carefully thought about when dealing with components. Second, this seems somewhat close to a transient resource reification pattern where its use as explicit communication seems fine: you check against what the system has reclaimed, etc.**
Lest anyone misunderstand, I actually do agree that it's suspicious if you do not consider coding wrt gc as being natural. Making the intended threat model more precise would help me, at least. For example, it could answer why the ability to check whether an object has been collected is not an overt channel that is part of getting a reference: if you don't want somebody to check, don't give them the object.
Nice. Excellent questions. The boundary between overt and non-overt is relative to the semantics of a platform. For the same concrete platform implementation, there can be multiple semantic accounts, and thereby multiple notions of where the boundary is between overt and non-overt.
Similarly, we normally say that a program is correct (in a formal correctness sense) if it would behave correctly on any platform that obeys the semantics. If a program happens to operate correctly on a given platform by relying on unspecified features of that platform, but would operate incorrectly on a different possible implementation of that platform's semantics, we would normally say the program is incorrect. Of course, given a different semantic account of the platform, different programs for that platform will be considered correct.
Formal models for reasoning about confidentiality that ignore non-overt channels are generally unsound. Deterministic platforms aside, "correctly" preserving confidentiality involves reasoning about implementations and about multiple levels of abstraction.
A correct program must only rely on the communications provided by overt channels. This brings us to an important realization from Fred Spiessens's thesis (www.evoluware.eu/fsp_thesis.pdf): Formal models for reasoning about integrity that ignore non-overt channels remain sound. The correctness of a program at preserving integrity is within conventional correctness, which we can reason about soundly in a modular manner.
Now to your questions. My earlier statements are with regard to using a conventional programming language semantics, in which memory allocation mechanics are mostly ignored. At <strawman:gc_semantics> I captured some notes and references to papers that explore how to extend such conventional semantics to take account of some of these issues. At the opposite extreme, we could reason about the semantics of the machine instruction set, where the "program" we are reasoning about is the entire tower of software leading up to the same program. At this level of abstraction, there is finite memory and no memory allocation. Under this perspective, we would get very different answers about what is correct or incorrect, and correspondingly different answers about what is overt and non-overt.
Of course, the artifacts this committee produces and this list discusses are semantics for next standard versions of JavaScript. These are programming language semantics in the conventional sense.
To make my previous too-abstract answer more concrete, let's examine the specific example you raise:
On Fri, Sep 3, 2010 at 6:04 PM, Leo Meyerovich <lmeyerov at gmail.com> wrote:
For example, it could answer why the ability to check whether an object has been collected is not an overt channel that is part of getting a reference: if you don't want somebody to check, don't give them the object.
Ok, let's suppose that we consider a semantics in which a shared object reference is an overt communications channel. Overt implies that correct programs may rely on this communications channel. This semantics introduces a non-modularity, in that providing an abstraction to two clients enables those clients to legitimately interact in ways out of control of their shared abstraction. Going back to Parnas, modularity is supposed to enable us to make a greater range of semantically-incremental changes in a syntactically local manner. If these clients were later given separate instances, as separate facets of the same shared abstraction, that would cause surprising breakage.
How would we program modularly in such a system? We would need to avoid ever sharing references except when we intend to enable communication. Every operation that today would cause reference sharing would instead have to mint and hand out a new facet to each new client. We would no longer be able to share data structures, as the shared references within these structures would be overt but unintended communications channels. We would effectively have to partition the heap into disjoint subheaps, where all communication between them copies all objects being passed.
In short, the language we'd get would resemble ES3, in which modular programming was impossible because all shared user-defined objects were mutable. The only protection mechanism we'd have would be like postMessage between iframes (without the previously suggested optimization), which is fine for some problems but not a practical way to compose libraries. By introducing Object.freeze, we fixed all this.
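A tiny illustration of that fix, using standard ES5 behavior:

var shared = Object.freeze({ version: 1 });
// A client can no longer repurpose the shared reference as a mutable channel:
shared.covert = "ping"; // silently ignored (a TypeError in strict mode)
print("covert" in shared); // false -- nothing was communicated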
Snipping together several emails:
Of course not. But it does allow these to be frozen. SES makes use of that to freeze them (after cleaning them up) as the last step of its initialization code.google.com/p/es-lab/source/browse/trunk/src/ses/initSES.js#557.
So why not allow the ability to disable weak map enumeration? Perhaps default on vs. off is a distinction more important to SES than strict mode.
- Sniffing is already ubiquitous in terms of language / host API variants
Sure. What issue does that raise?
It's an example of (static) data leakage we've already accepted, and, yep, I won't lose sleep over it anytime soon!
More generally, ES mostly has what one might call sources of "static nondeterminism" code.google.com/p/google-caja/wiki/SourcesOfNonDeterminism. By "static" I mean ways implementations of the spec may differ from each other. But these only open a non-overt channel if they may differ dynamically among runs on the same platform in a way that can be influenced.
That page suggests we agree that combining a language implementation with a predictable cost model (e.g., time iterating a short list vs a long one is observable) with "Date()" is enough to form a reliable covert channel. In the motivating mashup scenario for our discussion, I suspect we're just as likely (or more likely!) to get actual side channels from APIs with exploitable complexities as from enumerable weakmaps. I couldn't tell from the wiki page if you thought we should remove "Date()" in general, in Cajita in particular, or should just be generally aware of it.
Inclusion of "Date()" suggests a strict regimen well beyond ocap-style ES5 if you want object (e.g., API) sharing mashups without side channels.
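A sketch of the sort of channel being discussed, assuming nothing beyond Date and a predictable cost model (THRESHOLD and doSharedWork are illustrative, not from any proposal):

var THRESHOLD = 5; // ms; illustrative tuning constant
function receiveBit(doSharedWork) {
  var t0 = new Date().getTime();
  doSharedWork(); // the sender modulates how expensive this call is
  var dt = new Date().getTime() - t0;
  return dt > THRESHOLD ? 1 : 0; // the timing difference carries the bit
}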
Nice. Excellent questions. The boundary between overt and non-overt is relative to the semantics of a platform. For the same concrete platform implementation, there can be multiple semantic accounts, and thereby multiple notions of where the boundary is between overt and non-overt.
I've thought about the meaning of covert vs. overt in terms of granularities of semantics for the past couple of years. What is visible in a model is overt; what is only visible in a finer model is covert in the coarse model. PL papers typically hide cost models and GC, though I view that more as a way to avoid distractions, and developers do use some sort of high-level model of them when writing programs. Thus, aspects of them are indeed overt.
I don't know about side-channels; it seems too widely used a term for me to want to pick one clean meaning at the language level. In the case of your taxonomy breaker, it might even be orthogonal to overt vs. covert. If there's any sort of bug communicating what wasn't intended, irrespective of (c)overtness, that might be a side channel. E.g., maybe you forgot to disable the global readable debug log.
Formal models for reasoning about confidentiality that ignore non-overt channels are generally unsound. Deterministic platforms aside, "correctly" preserving confidentiality involves reasoning about implementations and about multiple levels of abstraction.
As we can always make a model more fine-grained, by my definition of covert vs overt, any confidentiality result is unsound (which is not surprising: you better not be sniffing my electrons!). So perhaps you mean that we only admit attacks within a restricted range of semantic models. The question then is what is the finest model considered and where do we draw the line between overt and covert ones. This starts to be very involved...
A correct program must only rely on the communications provided by overt channels. This brings us to an important realization from Fred Spiessens's thesis (www.evoluware.eu/fsp_thesis.pdf): Formal models for reasoning about integrity that ignore non-overt channels remain sound. The correctness of a program at preserving integrity is within conventional correctness, which we can reason about soundly in a modular manner.
I didn't see which part of the thesis you're referring to; perhaps you can point to a section?
Now to your questions. My earlier statements are with regard to using a conventional programming language semantics, in which memory allocation mechanics are mostly ignored. At strawman:gc_semantics I captured some notes and references to papers that explore how to extend such conventional semantics to take account of some of these issues. At the opposite extreme, we could reason about the semantics of the machine instruction set, where the "program" we are reasoning about is the entire tower of software leading up to the same program. At this level of abstraction, there is finite memory and no memory allocation. Under this perspective, we would get very different answers about what is correct or incorrect, and correspondingly different answers about what is overt and non-overt.
Of course, the artifacts this committee produces and this list discusses are semantics for next standard versions of JavaScript. These are programming language semantics in the conventional sense.
- "For code that can overtly sense what has been collected, such as by using weak references, such observation opens an overt information leakage channel" -- it seems here you are accepting weak references as an overt peephole into references they attach to. The more I think about it, the more I think about this as being an (acceptable) ocap explanation of weak references that isn't far from, say, (open) destructors.
I agree with your later comment about non-local reasoning. I've been trying to think of a way to make an enforceable decision at the allocation or sharing point -- e.g., declaring an object cannot be reachable by a weak reference -- but haven't thought of anything satisfying beyond being able to disable weak enumerability at the global level as a MOP capability.
Most selective controls are going to still enable implicit control dependencies similar to your example. I don't see ES5/Cajita/... providing a reliable way to do both general object sharing and information flow control against these without adding some major restrictions and machinery that I suspect could also obviate the need for such controls.
- If we want to be side-channel free to the extent of weak map enumeration mattering, more stringent GC progress guarantees seem essential. I haven't followed the literature here, but this sounds kind of neat at the research level (which also might be a concern for those whose address doesn't end in ".edu"!).
Ok, let's suppose that we consider a semantics in which a shared object reference is an overt communications channel. Overt implies that correct programs may rely on this communications channel. This semantics introduces a non-modularity, in that providing an abstraction to two clients enables those clients to legitimately interact in ways out of control of their shared abstraction.
I totally agree! However, I'd point my finger at garbage collection: even without weak references, it breaks modularity. GC, for high-level semantic models, automates the memory allocation protocol, seemingly restoring modularity to programs. I don't recall the best paper describing this perspective; it definitely surfaced in part in Dan Grossman's "The TM / GC Analogy" and maybe in Wilson's GC survey, but I think there was a more focused writeup elsewhere. However, the automation isn't perfect: memory leaks etc. still happen, and we still use non-local reasoning when passing references (for memory use, not confidentiality). Perhaps a more historically fair way of phrasing this is that "garbage collection didn't fully solve modularity wrt memory use."
How would we program modularly in such a system? We would need to avoid ever sharing references except when we intend to enable communication.
Even without timing attacks, weak references, etc., I think ES5 is still unreliable for information flow properties. SES sounds like a step closer, but even there, I wouldn't trust 20,000+ lines of it without something like an information flow analysis. Writing information-safe ES seems like it would be much less like ES5 than anything like SES, AdSafe, etc.
Furthermore, for most analyses I've read, I think the modularity (for confidentiality) isn't so negatively impacted. Yes, you now have to check that exposed tainted references outlive untrusted widgets (or finer ideas like collection not being impacted by high data), but I suspect most such analyses would disallow giving out the items to begin with! I don't recall any papers about such leaks; I think the timing attacks (even without traditional GC) are a bigger concern.
Rereading this email, I'm not sure the covert/overt/side channel perspective is adding much to this discussion. Our willingness to address timing attacks such as predictable GC, (concrete) acceptable levels of information leakage, and 'toggleability' seems more important. The others are interesting, but maybe not for here (say, the ocap list).
I will reveal that what you propose is more or less possible with plain vanilla ECMAScript 3. Adapting your syntax proposal, it would look something like this:
$(document).X(1,2).Y(3,4).Z("foo");
This call returns an object with a call and an apply method (possibly a function object?) that, when called, starts a chain of execution that is suspended at the finish of each operand. Most of the time this suspension would be caused by a simple call to setTimeout(next, 0). With other functions that are obviously asynchronous, such as XHR calls, the suspension is resumed at the point when the callback would normally be called.
This would be made possible by a library, which I will call MND for the purposes of this discussion. You start by passing MND an object with a set of related methods:
var MyMonad = MND.make(MyObject);
The MyMonad object has methods with all the same names as the object that you passed in, but when you call them, the names and arguments are simply placed on an internal stack, and an object with call/apply methods is returned.
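A hedged, ES3-only sketch of the hypothetical MND library described above (real async operations would resume in their own callbacks rather than via setTimeout):

var MND = {
  make: function (obj) {
    var queue = [], proxy = {}, name;
    for (name in obj) {
      if (typeof obj[name] === "function") {
        (function (n) {
          proxy[n] = function () {
            queue.push({ name: n, args: arguments }); // record, don't run
            return proxy; // enable chaining
          };
        })(name);
      }
    }
    proxy.call = proxy.apply = function () {
      (function next() {
        if (!queue.length) return;
        var step = queue.shift();
        obj[step.name].apply(obj, step.args);
        setTimeout(next, 0); // suspend at the finish of each operand
      })();
    };
    return proxy;
  }
};
// usage: var MyMonad = MND.make(MyObject); MyMonad.X(1,2).Y(3,4).Z("foo").call();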
What you've described is more or less a generalized version of what my script loader LABjs does.
$LAB.script(...).script(...).wait(...).script(...).wait(...);
The internal code runs through the entire chain on the first pass and stores all the calls, params, etc. It also (different from your general Monad behavior) fires off a "preloading" of each parameter to a .script(). But then, during the second, asynchronous pass through the chain, it hops through the queue/stack of calls one at a time, in the appropriate order, waiting for each step to finish before proceeding.
Not that this discussion thread has anything at all to do with LABjs (or script loading), but I just wanted to share that such a concept is in fact valid and workable, and has been in LABjs for well over a year.
I also have experimented with a similar concept in the promise/defer area, with a chainable "Promise" implementation I did. It roughly looks like this:
X(1,2)
  .then(Y) // Y called with no params
  .then(function(){ Z("foo"); })
  ...
While this works (and I currently use it in some of my projects), it creates a lot more awkwardness around "message passing", and even more complexity in expressing/handling the "conditional" case where you have alternation between a "success" and a "failure" case.
For instance, I experimented with:
X(1,2)
  .then(Y)
  .else(Z);
When there's only one "then" condition and only one "else" condition, it's not too bad. But, a key concept in the chainability of promises is something like:
X(1,2)
  .then(Y)
  .then(Z)
  .else(W)
  .then(A)
  .else(B)
  .else(C)
  ...
With the use of a ternary-style operator, and even ( ) grouping, it can become clear where the then/else pairings are, but with a linear chain like this, it's much harder to decipher. Also, when I complicated my "Promise" lib with "else" functionality, it got so much more complex that it had race conditions I never could solve.
I eventually sort of decided the effort wasn't worth it. And shortly thereafter, in that frustration, my initial ideas for needing native syntax to negotiate all this were born. It just took me awhile to publicly formalize my proposal. :)
I was very interested when I heard about private names. It seems like a very good idea. However, the specification that the strawman gives is harder to follow than I thought it would be.
I am unsure I fully understand this private names proposal.
For starters, what is the "scope" of private names? E.g.:
private name;
var obj = {};
obj.name = 5; // We want obj to have a Private Name value.
// ... Later on ... (here, we should do something so that the following lines work).
var foo = {};
foo.name = {name:'foo'};
// Here, we would like foo to have a property "name" whose value is an object
// whose name property holds the value "foo".
// How to do so? Is foo['name'] = {'name':'foo'}; unavoidable; does it even work?
// Is {name:'foo'} going to contain a Private Name, or rather a property (which is what we want)?
Secondly, I am trying to understand what this code does, too:

a = {};
k = {a: a};
a['k'] = k;
function aa(o) {
  private a;
  k.a = o; // Does it "override" the fact that a is a property?
  a.a = a.k.a;
  return #.a;
}
var a1 = aa(a);
var a2 = aa(a); // Does it return the same thing?
print(a2);
print(a1 === a2);
print(typeof a2);
print(a[a2] === a.k.a);
On Dec 21, 2010, at 2:52 PM, thaddee yann tyl wrote:
I was very interested when I heard about private names. It seems like a very good idea. However, the specification that the strawman gives is harder than I thought it would be.
I am unsure I fully understand this private names proposal.
For starters, what is the "scope" of private names?
This is spec'ed at strawman:private_names#private_declaration_scoping to some level of precision:
"private declarations are lexically scoped, like all declarations in Harmony. Inner private declarations shadow access to like-named private declarations in outer scopes. Within a block, the scoping rules for private declarations are the same as for const declarations. "
E.g.:
private name;
var obj = {};
obj.name = 5; // We want obj to have a Private Name value.
// ... Later on ... (here, we should do something so that the following lines work).
var foo = {};
foo.name = {name:'foo'};
// Here, we would like foo to have a property "name" whose value is an object
// whose name property holds the value "foo".
// How to do so? Is foo['name'] = {'name':'foo'}; unavoidable; does it even work?
// Is {name:'foo'} going to contain a Private Name, or rather a property (which is what we want)?
Yes, if you don't close the block scope (even if implicit, top-level script or function body scope) then |private name| is still in effect.
If you want {name: 'foo'} to mean {"name": 'foo'}, then either close the block scope (open a fresh one first if necessary), or quote the identifier. Same for foo.name (foo["name"]).
Secondly, I am trying to understand what this code does, too:

a = {};
k = {a: a};
a['k'] = k;
function aa(o) {
  private a;
  k.a = o; // Does it "override" the fact that a is a property?
I don't know what you mean by "override". The private_names strawman is clear enough about what happens here. k has a new property named by the private name lexically bound to a in function aa, and not named by the string "a", created by assignment and set to have as its value the value of |o|.
  a.a = a.k.a;
  return #.a;
}
var a1 = aa(a);
var a2 = aa(a); // Does it return the same thing?
No, you get a fresh one per evaluation of the private declaration. See
function Thing() {
  private key; // each invocation will use a new private key value
and related examples from strawman:private_names#using_private_identifiers.
As one might hope, you can nest closures to make "class private" vs "instance private" names. There's no need for extra keywords or special cases. A private declaration is evaluated by generating a new, unique name.
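A rough desugaring of that evaluation rule using unique strings as stand-in names (illustrative only: real private names are unforgeable and hidden from reflection, which strings are not):

var gensym = (function () {
  var counter = 0;
  return function (hint) { return "__private$" + hint + "$" + counter++; };
})();

function Thing() {
  var key = gensym("key"); // |private key;| mints a fresh name per invocation
  this[key] = "instance-private value";
}
// Hoisting |var key| out of Thing would share one name across all instances --
// the "class private" variant made by nesting closures, as noted above.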
print(a2);
print(a1 === a2);
false
print(typeof a2);
The strawman answers this, but includes words about an alternative design wherein private names are built-in objects.
print(a[a2] === a.k.a);
false
Note that (a[a2] === a.k[a2] && a[a2] === a).
This is all kind of plug-and-chug from my reading of the strawman. Was it unclear in a way where you could suggest a fix? Or maybe it just didn't start with a minimal statement of the rules.
The private names proposal has a lot of good ideas, but their use is not obvious. The reasons I see for that are that the "private a;" declaration:
- changes meaning of all obj.a in scope
- looks like a scoped variable, not a private element of an object
- is not generative-looking ... which makes it harder to understand and use.
I find that David Herman's proposal fixes those issues:
But your idea suggests yet another alternative worth adding to our growing pantheon. We could allow for the scoping of private names, but always require them to be prefixed by the sigil. This way there's no possibility of mixing up public and private names. So to use an earlier example from this thread (originally suggested to me by Allen):
function Point(x, y) {
  private #x, #y;
  this.#x = x;
  this.#y = y;
}
I understand that the number sign gets really heavy and annoying after some time. As a result, I suggest a simpler syntax, "private .secret;":
a = {};
k = {a: a};
a['k'] = k;
function aa(o) {
  private .a;
  k..a = o; // or: private c.a; c.a = o;
  a.a = a.k.a; // or: a['a'] = a['k']['a'];
  a.a = k..a; // here, on the other hand, k.a is the private stuff.
  return .a;
}
let a2 = aa(a);
print( a[a2] === a ); // true
Would this syntax be less Perlish?
ps: my understanding is that, using the current specification, print(a[a2] === a.k.a) from my earlier email yields true, not false, since (a.k.a === a). Am I wrong?
On Dec 23, 2010, at 8:36 AM, thaddee yann tyl wrote:
I understand that the number sign gets really heavy and annoying after some time. As a result, I suggest a simpler syntax, "private .secret;":
a = {};
k = {a: a};
a['k'] = k;
function aa(o) {
  private .a;
  k..a = o; // or: private c.a; c.a = o;
  a.a = a.k.a; // or: a['a'] = a['k']['a'];
  a.a = k..a; // here, on the other hand, k.a is the private stuff.
  return .a;
}
let a2 = aa(a);
print( a[a2] === a ); // true
Would this syntax be less Perlish?
The .. is wanted by other extensions, and already used by ECMA-357 (E4X), FWIW.
But perhaps your idea of
private .x;
is good by itself?
ps: my understanding is that, using the current specification, print(a[a2] === a.k.a) from my earlier email yields true, not false, since (a.k.a === a). Am I wrong?
Oh, you're right -- I forgot you set k = {a:a} up front.
On 23.12.2010 22:39, Brendan Eich wrote:
On Dec 23, 2010, at 8:36 AM, thaddee yann tyl wrote:
I understand that the number sign gets really heavy and annoying after some time. As a result, I suggest a simpler syntax, "private .secret;":
a = {};
k = {a: a};
a['k'] = k;
function aa(o) {
  private .a;
  k..a = o; // or: private c.a; c.a = o;
  a.a = a.k.a; // or: a['a'] = a['k']['a'];
  a.a = k..a; // here, on the other hand, k.a is the private stuff.
  return .a;
}
let a2 = aa(a);
print( a[a2] === a ); // true
Would this syntax be less Perlish?

The .. is wanted by other extensions, and already used by ECMA-357 (E4X), FWIW.
But perhaps your idea of
private .x;
is good by itself?
I remember April's discussion of D. Herman's Names, where I assumed several syntax "sugars" (?) for those private symbols
On 23.12.2010 22:39, Brendan Eich wrote:
The .. is wanted by other extensions, and already used by ECMA-357 (E4X), FWIW.
JFTR: and also in ECMA-262:
1..toString()
Dmitry.
On 23.12.2010 22:55, Dmitry A. Soshnikov wrote:
Will it be a big impudence to ask someone ... to summarize the current state of the discussion
Hm, sorry, it seems I mixed up threads. This one ("...Issue 22") seems to already be a summary (or not?) of the threads I meant -- "New private names proposal" (and probably "Name syntax"). I don't know which to start reading from.
Dmitry.
On Dec 23, 2010, at 11:59 AM, Dmitry A. Soshnikov wrote:
On 23.12.2010 22:39, Brendan Eich wrote:
The .. is wanted by other extensions, and already used by ECMA-357 (E4X), FWIW.
JFTR: and also in ECMA-262:
1..toString()
Yes, although if we added any .. as in Ruby or CoffeeScript it would, by the maximal munch principle, be tokenized as a single token instead of two dots. This is not a backward-compatible change, as you note, but it's not a huge break either. It's rare to call methods on integer literals and one can always parenthesize.
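Concretely, all of these are valid today:

1..toString(); // "1": the first '.' is munched into the number literal 1.
(1).toString(); // "1": parenthesizing would keep working if '..' became a token
1 .toString(); // "1": so would separating the dot with a space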
On Dec 23, 2010, at 8:36 AM, thaddee yann tyl wrote:
The private names proposal has a lot of good ideas, but their use is not obvious. The reasons I see for that are that the "private a;" declaration:
- changes meaning of all obj.a in scope
- looks like a scoped variable, not a private element of an object
- is not generative-looking ... which makes it harder to understand and use.
I find that David Herman's proposal fixes those issues:
But your idea suggests yet another alternative worth adding to our growing pantheon. We could allow for the scoping of private names, but always require them to be prefixed by the sigil. This way there's no possibility of mixing up public and private names. So to use an earlier example from this thread (originally suggested to me by Allen):
function Point(x, y) {
  private #x, #y;
  this.#x = x;
  this.#y = y;
}
I understand that the number sign gets really heavy and annoying after some time. As a result, I suggest a simpler syntax, "private .secret;": [snip]
There is another issue relating to the syntax chosen for accessing private names: the refactoring impact of sigils. This has not been widely discussed on this thread, although it was implicit in this:
On Dec 22, 2010, at 4:40 PM, Brendan Eich wrote:
Now I want to make a private member of my objects, not for security but just to save my code from myself, who tends to use .x as a property name too much, and my colleague MonkeyBob, who likes to monkey-patch.
With private names, I just need to add the *** line to my constructor:
function MyConstructor(x, ...) {
  private x; // ***
  this.x = x;
  ... // closures defined that use x
}
and I'm better off. Even if ES5's Object.getOwnPropertyNames is used by the inspector run by my other colleague ReliableFred, who needs to see x even though it is private. Fred won't do anything wrong with the value of x, but if he can't see it, he can't debug his code, which uses my code (the bug could be anywhere, or multi-factorial).
In my experience, information hiding decisions are often changed, both during the initial development of an object abstraction and as the abstraction evolves over the life of a system. Any information hiding design that requires that references to hidden items be syntactically distinguished will complicate the refactoring transformations necessary to change the hidden status of such items. That is why a sigil or new access syntax was not incorporated into my Private Names design.
On Dec 23, 2010, at 8:36 AM, thaddee yann tyl wrote:
The private names proposal has a lot of good ideas, but their use is not obvious. The reasons I see for that are that the "private a;" declaration:
- changes meaning of all obj.a in scope
- looks like a scoped variable, not a private element of an object
- is not generative-looking ... which makes it harder to understand and use.
I find that David Herman's proposal fixes those issues:
But your idea suggests yet another alternative worth adding to our growing pantheon. We could allow for the scoping of private names, but always require them to be prefixed by the sigil. This way there's no possibility of mixing up public and private names. So to use an earlier example from this thread (originally suggested to me by Allen):
function Point(x, y) {
  private #x, #y;
  this.#x = x;
  this.#y = y;
}
I understand that the number sign gets really heavy and annoying after some time. As a result, I suggest a simpler syntax, "private .secret;": [snip]
There is another issue relating to the syntax chosen for accessing private names: the refactoring impact of sigils. This has not been widely discussed on this thread, although it was implicit in this:
On Dec 22, 2010, at 4:40 PM, Brendan Eich wrote:
Now I want to make a private member of my objects, not for security but just to save my code from myself, who tends to use .x as a property name too much, and my colleague MonkeyBob, who likes to monkey-patch.
With private names, I just need to add the *** line to my constructor:
function MyConstructor(x, ...) {
  private x; // ***
  this.x = x;
  ... // closures defined that use x
}
and I'm better off. Even if ES5's Object.getOwnPropertyNames is used by the inspector run by my other colleague ReliableFred, who needs to see x even though it is private. Fred won't do anything wrong with the value of x, but if he can't see it, he can't debug his code, which uses my code (the bug could be anywhere, or multi-factorial).
In my experience, information hiding decisions are often changed, both during the initial development of an object abstraction and as the abstraction evolves over the life of a system. Any information hiding design that requires that references to hidden items be syntactically distinguished will complicate the refactoring transformations necessary to change the hidden status of such items. That is why a sigil or new access syntax was not incorporated into my Private Names design.
I understand your point, but I am deeply disturbed by the fact that this syntax modifies all |obj.secret| and |obj[secret]| and |{secret:value}| within the scope of |private secret;|, including ones we may not be thinking about.
The following "private obj.secret;" syntax complies with your non-sigil request, and it addresses all the issues I have met:
function Point(x, y) {
  private this.x, this.y; // This private statement only has an influence upon "this".
  this.x = x;
  this.y = y;
  // do stuff...
  return [#.x, #.y];
}
It defines precisely which object has a private name up front, thus avoiding accidents such as:

var letters = {};
function Point(x, y) {
  private x, y; // We want this.x and this.y to be private.
  this.x = x;
  this.y = y;
  // ... later that evening...
  letters.x = 'X';
  letters.y = 'Y'; // Oops, these weren't meant to be private names!
}
ps. Merry Christmas.
I believe that David Bruant has a good point. We need a shorter syntax because we advocate the use of map/reduce, etc., which require simple anonymous functions.
As to why we should choose # rather than CoffeeScript's ->, there are two points:
- The # syntax is more readable. -> is Perlish: it looks like two much-used operators, "minus" and "greater than", without any related meaning. Python, which has an enormous emphasis on readability, is ready to add the ':' at the end of the line before each block to make it readable. Here, greater readability is on par with a decrease in the number of characters needed.
- I talked to Alex Russell about this, and his answer was: "Arrow requires recursive descent, fat arrow just passes the buck on bind. Not settled." This seems like an interesting issue that makes the # syntax even more agreeable.
As a comment on the issue Kyle Simpson raised, it is true that this channel lacks web developers' intervention. I am a simple web developer myself. I must admit most developers don't sign up to this mailing list because it can be complex to handle (a lot of stuff to read) and there is no tl;dr. As such, the "voting poll" idea isn't absurd. If done well, on the ECMAScript webpage, and with ads on Mozilla's MDN, it could give an interesting estimate of how popular a proposal would be... On the other hand, it cannot be safe. People would vote without digging deep enough into why such a proposal is nice to have. As a result, we can't promise that the result of the poll will be chosen by the committee.
On Sat, May 7, 2011 at 9:16 AM, Thaddee Tyl <thaddee.tyl at gmail.com> wrote:
I believe that David Bruant has a good point. We need a shorter syntax because we advocate the use of map/reduce, etc., which require simple anonymous functions.
No. We don't "need" syntactic sugar. The current function syntax is working, and we are talking about a few characters' difference, not hundreds.
Map/reduce don't "require" syntactic sugar.
You may "want" shorter syntax but we've been getting by well without it. I think framing it as a "need" or a "requirement" is dishonest to the discussion.
As to why we should choose # rather than CoffeeScript's ->, there are two points:
[long discussion ensues based on taste]
There are so many more important issues to address.
Peter
From: Allen Wirfs-Brock <allen at wirfs-brock.com> Date: June 15, 2011 7:08:19 GMT+02:00 To: es-discuss <es-discuss at mozilla.org> Subject: Prototypes as the new class declaration
In esdiscuss/2011-June/015090 I described how self uses an explicit copy method on prototypes to construct instance objects and summarized the difference between self and JavaScript in this regard as:
So, there is pretty much a direct correspondence between a self copy method and a JavaScript constructor function. However, there is definitely a difference of emphasis seen here. In self, it is the prototype object that is the focus of attention. In traditional JavaScript it is the constructor object. (emphasis added)
In response to a reply from Axel I further said:
A prototype P, as used in self, does seem to frequently subsume the role of a class. A constructor generally corresponds to the copy method of such a prototype.
Having thought about this a bit more, I'm beginning to think that perhaps we could get by without the proposed class declarations.
Cool! That’s what I meant when I wrote the following:
On Jun 14, 2011, at 11:25 PM, Axel Rauschmayer wrote:
Cool! That’s what I meant when I wrote the following:
From: Axel Rauschmayer <axel at rauschma.de> Date: June 13, 2011 10:55:56 GMT+02:00 To: Allen Wirfs-Brock <allen at wirfs-brock.com>, es-discuss <es-discuss at mozilla.org> Subject: Re: Classes: suggestions for improvement
The correspondence is not quite that straightforward. A prototype P, as used in self, does seem to frequently subsume the role of a class. A constructor generally corresponds to the copy method of such a prototype.
Isn’t that the illusion that class literals aim to create? That a class is actually a single construct, an object C with a method construct() and that you can apply the new operator to that object? new C(x,y) would perform the following steps:
- Create a new instance o whose prototype is C.
- Invoke o.construct(x,y)
Maybe "initialize" would be a better name than "construct", then.
Except we are still trying to sugar the prototypal pattern (but not name the constructor; rather, name the prototype). And the constructor property of the prototype is named 'constructor'.
Inventing a new protocol doesn't seem as good. We can't migrate constructor-function-with-.prototype property as directly; we have two protocols not one.
Yes, the "outer" object that's denoted by the "class name" is now the prototype, not the constructor. Even though we extend new and instanceof to avoid requiring '.constructor' suffixing, that dot and name may have to be manually suffixed otherwise -- to call super.constructor, to get at "class" methods and properties.
The loss of the manifest name for the abstraction denoting the constructor seems like the thing to discuss.
Terminology (created by me, but I think it explains well what is going on):
- |this| points to the object where property lookup starts. It always points to the beginning of the prototype chain.
- |here| points to the object where a property was found. |here| can point to any object in the prototype chain.
|super| starts property lookup in the prototype of |here|, but does not change |this|. That is, a method invoked via |super| still has the same |this| and property lookup via |this| is unchanged. If the super-method again uses |super|, then property lookup will begin in the prototype of |here| (= where the super-method has been found). Etc.
From: Mariusz Nowak <medikoo+mozilla.org at medikoo.com> Date: June 20, 2011 13:49:20 GMT+02:00 Subject: Re: Making "super" work outside a literal?
It's the most sane proposal, I think. However, a few things are not obvious to me. Will the following evaluate as I assume?
var A = { one: function () { return 'A.foo'; } };
var B = Object.create(A);
var C = Object.create(B);
C.one = function () { return super.one(); };
var c1 = Object.create(C);
obj.one(); // 'A.foo'
That would be c1.one(), right?
|here| === C and thus the search for super.one starts in B and finds A.one.
B.two = function () { this.three(); };
B.three = function () { return 'B.three'; };
C.two = function () { super.two(); };
C.three = function () { return 'C.three'; };
var c2 = Object.create(C);
c2.two(); // C.three
|here| === C and thus super.two === B.two
B.two() uses the original and unchanged |this| which is still c2. => this.three === C.three
SUMMARY: I agree with your findings.
On Mon, Jun 20, 2011 at 1:27 PM, Axel Rauschmayer <axel at rauschma.de> wrote:
Terminology (created by me, but I think it explains well what is going on):
- |this| points to the object where property lookup starts. It always points to the beginning of the prototype chain.
- |here| points to the object where a property was found. |here| can point to any object in the prototype chain. |super| starts property lookup in the prototype of |here|, but does not change |this|. That is, a method invoked via |super| still has the same |this| and property lookup via |this| is unchanged. If the super-method again uses |super|, then property lookup will begin in the prototype of |here| (= where the super-method has been found). Etc.
It doesn't seem quite right that an upward call like
Object.getPrototypeOf(here).foo.call(this)
has sugar
super.foo()
but sideways calls like
here.foo.call(this)
don't have any sugar.
By the way, I like this idea that "super" is available all the time (not just in an initializer) like "this" is always available; however, adding another implicit variable "here" which is dynamic like "this" is disconcerting as "this" has been quite a wild beast in JavaScript to say the least.
Peter
On Jun 20, 2011, at 2:51 PM, Peter Michaux wrote:
By the way, I like this idea that "super" is available all the time (not just in an initializer) like "this" is always available; however, adding another implicit variable "here" which is dynamic like "this" is disconcerting as "this" has been quite a wild beast in JavaScript to say the least.
'super' is not dynamic as proposed in harmony:object_initialiser_super. Even in Allen's recent post about Object.defineMethod, 'super' is not bound dynamically per call site, as 'this' is. It's a hidden property of the function object.
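A rough sketch of that idea (not the actual spec mechanism): each method carries a reference to the object it was defined on, a stand-in for what the spec tracks internally, and super lookup starts in that object's prototype while this stays unchanged. defineMethod and the home property are hypothetical names.

// Hypothetical defineMethod: records the object a method was defined on.
function defineMethod(obj, name, fn) {
  fn.home = obj; // stand-in for the spec's hidden per-function binding
  obj[name] = fn;
}

var A = { one: function () { return 'A.one'; } };
var B = Object.create(A);
defineMethod(B, 'one', function one() {
  // super.one() desugars to: start lookup in the prototype of the
  // method's home object, and call with the original, unchanged 'this'.
  return Object.getPrototypeOf(one.home).one.call(this) + ' via B';
});

var b = Object.create(B);
console.log(b.one()); // 'A.one via B', the same for any receiver

Because home is fixed when the method is defined, the super lookup does not vary per call site the way this does.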
rauschma wrote:
var c1 = Object.create(C); obj.one(); // 'A.foo'
That would be c1.one(), right?
Yes, sorry edited post twice.
rauschma wrote:
|here| === C and thus the search for super.one starts in B and finds A.one.
B.two = function () { this.three(); };
B.three = function () { return 'B.three'; };
C.two = function () { super.two(); };
C.three = function () { return 'C.three'; };
var c2 = Object.create(C);
c2.two(); // C.three
|here| === C and thus super.two === B.two
B.two() uses the original and unchanged |this| which is still c2. => this.three === C.three
SUMMARY: I agree with your findings.
From my perspective, this proposal looks perfect. I wouldn't look further :)
Mariusz Nowak
Plus, the prototype is in some ways secondary. It's the less directly used object when one calls a constructor often, after populating the prototype. And if class methods come into the picture, the prototype is even more "backstage", an implementation detail.
That seems to be a matter of taste: To me prototypes are the core of JavaScript inheritance. The single construct that is used to handle both instance-of and subclass-of. If you draw an object diagram, it is constructors that move into the background and prototypes that remain.
What do you do with constructors-as-classes to check the following: o instanceof C? You look for C.prototype in the prototype chain of o.
Axel
On Jun 28, 2011, at 4:22 PM, Axel Rauschmayer wrote:
Plus, the prototype is in some ways secondary. It's the less directly used object when one calls a constructor often, after populating the prototype. And if class methods come into the picture, the prototype is even more "backstage", an implementation detail.
That seems to be a matter of taste: To me prototypes are the core of JavaScript inheritance. The single construct that is used to handle both instance-of and subclass-of. If you draw an object diagram, it is constructors that move into the background and prototypes that remain.
What do you do with constructors-as-classes to check the following: o instanceof C? You look for C.prototype in the prototype chain of o.
No, the JS engine does that!
I, or random classy-dynamic-language-experienced users, just do "o instanceof C", i.e., they ask "is o an instance of [constructed by] C"?
Not everyone mentally deconstructs operators into their more primitive semantics. Especially if there's no prototype-chain hacking, just shallow classical inheritance via a solid library.
From: Russell Leggett <russell.leggett at gmail.com> Subject: Minor extension to Set Literal Prototype operator as minimal classes Date: October 1, 2011 19:27:27 GMT+02:00 To: es-discuss <es-discuss at mozilla.org>
I'm probably not the first to have thought of this, but it seemed like the most minimal class support I could think of, and it sounds like that's where things are headed from the lack of agreement. Through the additions made by the Set Literal Prototype operator proposal, I think you nailed 90% of the pain point for most developers. Nicer method syntax, simple extension functionality, and the addition of super seals the deal. The only thing missing is how to translate that set of functionality to work with "classes" instead of just objects. It seems so close, in fact, that the idea of doing a class hierarchy in a different way seems almost unintuitive.
Note that in JavaScript terminology, a class is a function (which in turn is an object). So there really are no classes. Just functions that produce instances when used with the "new" operator (if a function is used this way, it is often called a constructor). Then the difficulty is how to assemble it all:
- The super-constructor SuperClass must be the prototype of the sub-constructor ClassName, if class properties are to be inherited.
- The super-prototype SuperClass.prototype must be the the prototype of ClassName.prototype.
- (Instance) methods must be added to ClassName.prototype.
- Class properties must be properties of ClassName (which is a function!).
- Instance properties must be set up in the constructor.
If you look at Allen’s class pattern then the proto operator <| takes care of #1 + #2, .prototype.{} does #3, .constructor.{} does #4. You have to make sure that the compound expression ends with something that returns the constructor, or else the assignment at the beginning makes no sense.
If you want to read up on this – a while ago, I’ve written a post that might be helpful: www.2ality.com/2011/06/prototypes-as-classes.html
On Sat, Oct 1, 2011 at 3:42 PM, Axel Rauschmayer <axel at rauschma.de> wrote:
Note that in JavaScript terminology, a class is a function (which in turn is an object). So there really are no classes. Just functions that produce instances when used with the "new" operator (if a function is used this way, it is often called a constructor).
I'm fully aware of this - that's why I tried to remember to always use "class" in parens, because it's not really classes.
Then the difficulty is how to assemble it all:
- The super-constructor SuperClass must be the prototype of the sub-constructor ClassName, if class properties are to be inherited.
- The super-prototype SuperClass.prototype must be the the prototype of ClassName.prototype.
- (Instance) methods must be added to ClassName.prototype.
- Class properties must be properties of ClassName (which is a function!).
- Instance properties must be set up in the constructor.
If you look at Allen’s class pattern then the proto operator <| takes care of #1 + #2, .prototype.{} does #3, .constructor.{} does #4. You have to make sure that the compound expression ends with something that returns the constructor, or else the assignment at the beginning makes no sense.
If you want to read up on this – a while ago, I’ve written a post that might be helpful: www.2ality.com/2011/06/prototypes-as-classes.html
Yes, I included the pattern in the email I sent to show it as a reference. When I supplied my version of the pattern, I was not trying to say that the pattern would work as is, but rather that it should be possible to make the operator capable of also working to create a nice class pattern. This is why I said it would be an additional overload to the operator.
According to the original pattern:
const className = superClass <| function(/*constructor parameters */) {
//constructor body
super.constructor(/*arguments to super constructor */);
this.{
//per instance property definitions
};
}.prototype.{
//instance properties defined on prototype
}.constructor.{
//class (ie, constructor) properties
};
We take superClass (a constructor function) and we use it as the prototype of the new constructor function on the RHS, then we use chaining to update the prototype, then update constructor. As you say, you have to be careful to return the constructor (even if not modified) to make this chaining work. While I think this is already an improvement, it's still complicated enough that we're working on more sugar for it. What I was suggesting was that if the LHS was a constructor function, and the RHS was an object literal, then:
const ClassName = SuperClass <| {
constructor(/*constructor parameters */) {
//constructor body
super.constructor(/*arguments to super constructor */);
this.{
//per instance property definitions
}
}
method1(){ return super.method1(); }
method2(){}
prop1:"Properties unlikely, but allowed"
}.{
//class properties
staticMethod(){}
};
- If a "constructor" method is supplied as part of the object literal, it will be used as the new constructor, and take the LHS constructor as its prototype. If not, an new empty constructor will be created instead and take the LHS constructor as its prototype.
- The rest of the object literal will be used as the new constructors prototype, and it will use the LHS's prototype as it's prototype.
- The result of the <| operand in my example would be the new class/constructor function, so it can be assigned to "const ClassName" or as in my example, the .{ operator can be used to add class methods and still return the correct result.
From: John J Barton <johnjbarton at johnjbarton.com> Subject: Re: Object.extends (was Re: traits feedback) Date: October 7, 2011 16:49:50 GMT+02:00 To: Quildreen Motta <quildreen at gmail.com> Cc: John-David Dalton <john.david.dalton at gmail.com>, es-discuss <es-discuss at mozilla.org>
Several people advocate Object.extend() that copies only own properties. I don't understand why; I'll make the case for copying all properties.
At a call site I use, say
foo.bar();
Ordinarily I should not be concerned about details of bar()'s implementation. In particular I should not be concerned if bar() is own or not. Now I decide to create a new object from |foo|, by adding properties with Object.extend():
var fuz = Object.extend(foo, {paper:'in', shoes:'my'});
later I write
fuz.bar();
If Object.extend() only uses own properties, the success or failure of that statement depends upon the location of bar() in foo. While one might argue that use of extend() is one case where devs should be concerned with details of bar()'s implementation, how does it help in this case? If bar() is in a prototype then I am back to manual property manipulation to get the |fuz| object I want.
If Object.extend() copies all properties (into prototype or not is a different issue) then the resulting object acts as an extension of the one I want to extend.
On the other side I don't see any advantage to copying only own properties. Again I will suggest Object.getOwnProperties() as a way of providing the answer some would like to see. Again I will point out that Object.extend() with plain objects is not affected by these issues.
If you do something like var fuz = Object.extend(foo, {paper:'in', shoes:'my'});
Then fuz will get all properties of Object.prototype, again, as duplicates. In the above, you are clearly most interested in what you see in the literal and those are the own properties.
Also note that JavaScript only changes properties in the first object in a prototype chain:
var proto = { foo: 3 };
var obj = Object.create(proto);
obj.foo = 5;
console.log(proto.foo); // 3
And this kind of "extend" enables poor man’s cloning as follows: var orig = { foo: "abc" }; var clone = Object.extend(Object.create(Object.getPrototypeOf(orig)), orig);
I would prefer the name Object.copyOwnPropertiesTo(source, target) or Object.copyOwnTo(source, target) to the name “extend” (which, to me, suggests inheritance).
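For concreteness, a minimal sketch of the two variants under discussion; Object.extend is hypothetical (no such standard API), so both versions are written as plain functions here:

// Copies only own enumerable properties of source onto target.
function copyOwn(target, source) {
  Object.keys(source).forEach(function (key) {
    target[key] = source[key];
  });
  return target;
}

// Copies all enumerable properties, including inherited ones:
// for-in walks the whole prototype chain.
function copyAll(target, source) {
  for (var key in source) {
    target[key] = source[key];
  }
  return target;
}

var base = { bar: function () { return 'bar'; } };
var foo = Object.create(base);

copyOwn({}, foo).bar;   // undefined: bar lives on foo's prototype
copyAll({}, foo).bar(); // 'bar': the inherited method was copied

The example makes both positions concrete: copyAll keeps fuz.bar() working regardless of where bar lives, while copyOwn keeps the result free of inherited clutter.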
On Fri, Oct 7, 2011 at 8:34 AM, Axel Rauschmayer <axel at rauschma.de> wrote:
From: John J Barton <johnjbarton at johnjbarton.com> Subject: Re: Object.extends (was Re: traits feedback) Date: October 7, 2011 16:49:50 GMT+02:00 To: Quildreen Motta <quildreen at gmail.com> Cc: John-David Dalton <john.david.dalton at gmail.com>, es-discuss <es-discuss at mozilla.org>
Several people advocate Object.extend() that copies only own properties. I don't understand why; I'll make the case for copying all properties.
At a call site I use, say foo.bar(); Ordinarily I should not be concerned about details of bar()'s implementation. In particular I should not be concerned if bar() is own or not.
Now I decide to create a new object from |foo|, by adding properties with Object.extend(): var fuz = Object.extend(foo, {paper:'in', shoes:'my'}); later I write fuz.bar();
If Object.extend() only uses own properties, the success or failure of that statement depends upon the location of bar() in foo. While one might argue that use of extend() is one case where devs should be concerned with details of bar()'s implementation, how does it help in this case? If bar() is in a prototype then I am back to manual property manipulation to get the |fuz| object I want.
If Object.extend() copies all properties (into prototype or not is a different issue) then the resulting object acts as an extension of the one I want to extend.
On the other side I don't see any advantage to copying only own properties. Again I will suggest Object.getOwnProperties() as a way of providing the answer some would like to see. Again I will point out that Object.extend() with plain objects is not affected by these issues.
If you do something like var fuz = Object.extend(foo, {paper:'in', shoes:'my'});
Then fuz will get all properties of Object.prototype, again, as duplicates. In the above, you are clearly most interested in what you see in the literal and those are the own properties.
I don't understand how you can get properties as duplicates in JS. I disagree, since in my example I specifically point out that fuz.bar() should work.
Also note that JavaScript only changes properties in the first object in a prototype chain: var proto = { foo: 3 }; var obj = Object.create(proto); obj.foo = 5; console.log(proto.foo); // 3
Fine, but not germane as far as I can tell.
And this kind of "extend" enables poor man’s cloning as follows: var orig = { foo: "abc" }; var clone = Object.extend(Object.create(Object.getPrototypeOf(orig)), orig);
Sadly this code will fail since -- surprise! -- Object.create() does not take an object as the second argument.
I would prefer the name Object.copyOwnPropertiesTo(source, target) or Object.copyOwnTo(source, target) to the name “extend” (which, to me, suggests inheritance).
Since we live in a right to left world (a = b();) we need the target on the left. Then multiple RHS also works easily. However, note that if Object.create() were fixed:
var clone = Object.create(Object.getPrototypeOf(orig), Object.getOwnProperties(orig));
jjb
2011/10/7 John J Barton <johnjbarton at johnjbarton.com>
On Fri, Oct 7, 2011 at 8:34 AM, Axel Rauschmayer <axel at rauschma.de> wrote:
If you do something like var fuz = Object.extend(foo, {paper:'in', shoes:'my'});
Then fuz will get all properties of Object.prototype, again, as duplicates. In the above, you are clearly most interested in what you see in the literal and those are the own properties.
I don't understand how you can get properties as duplicates in JS. I disagree, since in my example I specifically point out that fuz.bar() should work.
You can't get duplicate properties, because keys are unique. However, you'll still get loads of clutter in the target object. Also, again, I'm not sure the majority of the use-cases would be concerned with copying the properties in the whole prototype chain -- that's expensive! -- and you could achieve the latter easily by recursively extending the object.
And this kind of "extend" enables poor man’s cloning as follows: var orig = { foo: "abc" }; var clone = Object.extend(Object.create(Object.getPrototypeOf(orig)), orig);
Sadly this code will fail since -- surprise! -- Object.create() does not take an object as the second argument.
Object.create here is only taking a single parameter though. `orig' is being passed to Object.extend. These are the kinds of problems that arise when you have overly verbose qualified names: you can't even keep track of what's where.
I would prefer the name Object.copyOwnPropertiesTo(source, target) or Object.copyOwnTo(source, target) to the name “extend” (which, to me, suggests inheritance).
See previous point.
Just to be sure I understood that thing right.
When this is the "stack" of the example by Jussi Kalliokoski (other colors mean other queue entry)
- console.log("1");
- console.log("2");
- wait;
- console.log("5");
- console.log("3");
- console.log("4");
The wait keyword made it execute this way:
- console.log("1");
- console.log("2");
- console.log("3");
- console.log("4");
- console.log("5");
Instead of this way:
- console.log("1");
- console.log("2");
- console.log("5");
- console.log("3");
- console.log("4");
?
Ok, that's not how I suggested it to work. But to be honest, I think this approach is even better! You wouldn't have to implement stack pausing/resuming at all! In fact, the "wait" or "delay" keyword would just create a new queue, move all entries from the main queue to that sub-queue and then run the queue, blocking, to its end. It would all just look like a normal function call and wouldn't be that hard to implement at all.
You say that's a bug, and that is probably right, but what about adding such functionality to the spec? I think the "too hard to implement" argument would be gone this way, right?
2012/7/6 <es-discuss-request at mozilla.org>
I think you understood it right... But while the hard to implement argument is now invalid at least for SpiderMonkey, it's a really bad idea to add this to the language.
First of all, you read the blog post, right? I explained pretty much how it makes everything unpredictable and even volatile. Having something like this would be a showstopper for having any serious logic done in JavaScript. You can't do anything that makes a critical state unstable without taking serious precautions, and even then you might end up losing the whole state. This is even worse than putting and enforcing locks and mutexes in JS.
Second, it doesn't add any real value. If you take any real example where something like a wait keyword would make the code easier to read, let's say node.js only had an async readFile function:
var content
fs.readFile('blah.txt', function (e, data) {
  if (e) throw e
  content = data
}, 'utf8')
wait
// Do something with content
Here the wait will do nothing, because the callback isn't immediate, except in very rare edge cases maybe, and the content will be undefined when you start trying to process it.
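For contrast, a sketch of the standard non-blocking version, where the work happens inside the callback; note that the real fs.readFile takes the encoding before the callback, unlike the snippet above:

var fs = require('fs');

fs.readFile('blah.txt', 'utf8', function (e, content) {
  if (e) throw e;
  // Do something with content: it is guaranteed to be set here,
  // because the callback runs only after the read completes.
  console.log(content.length);
});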
So, while I still think it might be a prolific choice for browser vendors to do this in the main thread for blocking operations, as it will make everything go smoother regardless of a few bad eggs, and hardly anything breaks, I don't think it should be provided elsewhere, and especially not as a language feature. And this is just my $0.02 (re: prolific choice); I think browser vendors will disagree with my view there. But what they'll most certainly agree with me on is that it's problematic to allow any kind of blocking operations in the main thread (especially timing-critical ones), and that having alert(), confirm() and sync XHR in the first place was not a very good choice in hindsight. That's why I didn't believe it was a bug in the first place: it seemed too designed.
2012/09/29 14:30 <es-discuss-request at mozilla.org>:
From the 39-year-old curmudgeons-before-their-time camp, I'd like to point out that comments would be bad for two reasons:
- They're a sure sign somebody's gone and constructed something to fit implementation/consumption of data rather than describe it, which ultimately makes modifying any system handling and constructing layered/nested JSON about as much fun as rebuilding a house of cards on a pile of broken glass.
- The very next version of IE would start doing something awful with them, because Microsoft engineers simply cannot resist the temptation to do neat-o things with comments.
But seriously though, let's KISS on the JSON. It ain't broke.
Practical usefulness:
- ES5: JSON stringification of objects: null means null, and undefined means "no value, don't include the corresponding key".
- ES6: default values in function arguments (and maybe destructuring assignment?): null means null, and undefined means "no value, take the default".
This is all after-the-fact justification. I understand the use made of this feature now that we have the 2 values, but I wonder whether, given the choice to start over, a new JS would be designed with 2 such values.
Even if the two particular cases I've mentioned were not foreseen when designing JavaScript, that does not diminish the value of the feature. When designing JS, some decisions were taken whose usefulness or badness has been understood only after experience.
That we've found uses for these types doesn't mean that they're the best we could have done. In fact, we could have achieved the same without having a Null type (let alone two!)
As for JSON serialisation, Null could be a special type for JSON alone, one that could not be mixed with other types. And if accessing object properties that are not there threw an Error instead, we wouldn't have to worry about these things.
In this case, foo.bar would always yield a useful value if foo is an object, because bar is guaranteed to be present in foo and to be an actual value (not undefined/null). It can't be mixed with other things, and we have no problems with iterating over foo.
If we are not sure bar exists in foo, a function would be defined in foo's prototype that would return a Maybe<T> type. So instead of blindly doing foo.bar when you're not sure bar exists, you'd use foo.get('bar') // => Maybe<T>, or foo.getOrElse('bar', 'defaultValue') // => T | defaultValue.
The benefits of this is that we get early errors where it matters, and people are less likely to forget to deal with possible errors in their applications, because dealing with such is made explicit — this still plays well with the delegation semantics, since errors are thrown after we've searched all the prototype chain.
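To make the protocol concrete, a minimal sketch under the assumption that get/getOrElse live on a shared prototype; Maybe and both method names are hypothetical, not existing APIs:

// A Maybe<T> is either a value or an explicit "nothing".
function Maybe(hasValue, value) {
  this.hasValue = hasValue;
  this.value = value;
}
Maybe.just = function (v) { return new Maybe(true, v); };
Maybe.nothing = function () { return new Maybe(false); };

// Shared prototype providing the explicit lookup protocol. The 'in'
// check searches the whole prototype chain, matching the delegation
// semantics mentioned above.
var protocol = {
  get: function (key) {
    return key in this ? Maybe.just(this[key]) : Maybe.nothing();
  },
  getOrElse: function (key, defaultValue) {
    return key in this ? this[key] : defaultValue;
  }
};

var foo = Object.create(protocol);
foo.bar = 1;
foo.get('bar');                  // Maybe { hasValue: true, value: 1 }
foo.getOrElse('baz', 'default'); // 'default'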
Destructuring assignments could also have an explicit "I know this might not exist, but I want a value anyways" semantics. ES6 Modules have this because people want early errors (and for static linking I suppose?).
I would add my two cents here.
Where the precise type of the data stream is known (e.g. Unicode big-endian or Unicode little-endian), the BOM should not be used.
And there is something in RFC 4627 that tells me JSON is not BOM-aware:
JSON text SHALL be encoded in Unicode. The default encoding is UTF-8.
Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets.
00 00 00 xx UTF-32BE
00 xx 00 xx UTF-16BE
xx 00 00 00 UTF-32LE
xx 00 xx 00 UTF-16LE
xx xx xx xx UTF-8
These patterns are not a BOM, otherwise they would be something like this:
00 00 FE FF UTF-32BE
FE FF xx xx UTF-16BE
FF FE 00 00 UTF-32LE
FF FE xx xx UTF-16LE
EF BB BF xx UTF-8
It is kind of unfortunate that "the precise type of the data stream" is not determined, and BOM is not accepted.
But a mechanism to decide the encoding is specified in the RFC, and it does not include a BOM, in fact it prevents the use of BOM (00 00 FE FF does not match the 00 00 00 xx pattern, for instance)
So, "by the RFC", BOM is not expected / understood.
Although I am afraid that the RFC has a problem: I think "日本語" (U+0022 U+65E5 U+672C U+8A9E U+0022) is valid JSON (same as "foo").
The first four bytes are:
00 00 00 22 UTF-32BE
00 22 E5 65 UTF-16BE
22 00 00 00 UTF-32LE
22 00 65 E5 UTF-16LE
22 E6 97 A5 UTF-8
The UTF-16 bytes don't match the patterns in RFC, so UTF-16 streams would (wrongly) be detected as UTF-8, if one strictly follows the RFC.
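To make the detection rules concrete, here is a sketch in JavaScript; detectJsonEncoding is a hypothetical helper, not part of any library:

// Apply the RFC 4627 section 3 null-byte patterns to the first four
// octets of a stream.
function detectJsonEncoding(bytes) {
  var z = bytes.map(function (b) { return b === 0; });
  if (z[0] && z[1] && z[2] && !z[3]) return 'UTF-32BE';   // 00 00 00 xx
  if (z[0] && !z[1] && z[2] && !z[3]) return 'UTF-16BE';  // 00 xx 00 xx
  if (!z[0] && z[1] && z[2] && z[3]) return 'UTF-32LE';   // xx 00 00 00
  if (!z[0] && z[1] && !z[2] && z[3]) return 'UTF-16LE';  // xx 00 xx 00
  return 'UTF-8';                                         // xx xx xx xx
}

// The problem described above: "日本語" in UTF-16BE starts with
// 00 22 E5 65, which matches none of the multi-byte patterns.
detectJsonEncoding([0x00, 0x22, 0xE5, 0x65]); // 'UTF-8' (wrong)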
Mihai
From: Bjoern Hoehrmann <derhoermi at gmx.net> To: ht at inf.ed.ac.uk (Henry S. Thompson) Cc: IETF Discussion <ietf at ietf.org>, JSON WG <json at ietf.org>, "Martin J. Dürst" <duerst at it.aoyama.ac.jp>, www-tag at w3.org, es-discuss <es-discuss at mozilla.org> Date: Mon, 18 Nov 2013 14:48:19 +0100 Subject: Re: BOMs
Henry S. Thompson wrote:
I'm curious to know what level you're invoking the parser at. As implied by my previous post about the Python 'requests' package, it handles application/json resources by stripping any initial BOM it finds -- you can try this with
import requests
r = requests.get("www.ltg.ed.ac.uk/ov-test/b16le.json")
r.json()
The Perl code was
perl -MJSON -MEncode -e "my $s = encode_utf8(chr 0xFEFF) . '[]'; JSON->new->decode($s)"
The Python code was
import json
json.loads(u"\uFEFF[]".encode('utf-8'))
The Go code was
package main

import "encoding/json"
import "fmt"

func main() {
    r := "\uFEFF[]"
    var f interface{}
    err := json.Unmarshal([]byte(r), &f)
    fmt.Println(err)
}
In other words, always passing a UTF-8 encoded byte string to the byte string parsing part of the JSON implementation. RFC 4627 is the only specification for the application/json on-the-wire format and it does not mention anything about Unicode signatures. Looking for certain byte sequences at the beginning and treating them as a Unicode signature is the same as looking for /* ... */ and treating it as a comment.
mnita at google.com wrote:
The first four bytes are:
00 00 00 22 UTF-32BE
00 22 E5 65 UTF-16BE
22 00 00 00 UTF-32LE
22 00 65 E5 UTF-16LE
22 E6 97 A5 UTF-8
The UTF-16 bytes don't match the patterns in RFC, so UTF-16 streams would (wrongly) be detected as UTF-8, if one strictly follows the RFC.
RFC 4627 does not allow string literals at the top level.
Sorry, I only took in the first sentence here (the second one added to the confusion rather than clarifying it, but this is probably just me):
A JSON text is a sequence of tokens. The set of tokens includes six structural characters, strings, numbers, and three literal names. A JSON text is a serialized object or array.
Anyway, this is good. It means that the RFC has no problem, it's just me :-)
But the conclusion that the RFC does not allow BOM is independent, and I think it stands.
Mihai
@Alex Kocharin:
I wouldn't call the functions like time. It was only a very simple example. Often my functions would be called from outside, so I wouldn't be able to pass the values. Also, the main reason I want this is for optimizations, so, as you pointed out, I can't use the ugly with.
---------- Forwarded message ---------- From: Brendan Eich <brendan at mozilla.com> To: raul mihaila <raul.mihaila at gmail.com> Cc: es-discuss <es-discuss at mozilla.org> Date: Wed, 25 Dec 2013 13:44:21 -0500 Subject: Re: Fwd: local variables with inherited values
That's unusual -- why would it take on the outer same-named variable's value? Note it won't work for 'const', only 'let'. Is there any difference for 'let' between your proposal and A2, lexical window? Let's see, by simplifying the example to make the inner 'const x' into a 'let x':
I misinterpreted Waldemar Horwat's cases. There's no difference between my proposal and A2 + B2, besides a possible way to make it more readable.
In any case, we aren't adding another keyword, and your approach does not work for 'const' at all. For 'let' or any new keyword, it hoists and implicitly initializes, which is too implicit and magical, even for JS.
You're right, it wouldn't work for const.
It's far less readable, though, because of the implicit magic initialization from outer shadowed x. What if there is no outer x? And 'const' does not work at all. We can't have two values for a given 'const' binding.
I agree, if there's no way to make it more readable, it will be too magical and bad. (BTW, throw if there's no outer x.) I am beginning to like the syntax from ES4. I suppose it would work with multiple variables as well. let (x = x, y = y, z = x + y /* maybe would work? */) { .... }
raul mihaila wrote:
I am beginning to like the syntax from ES4. I suppose it would work with multiple variables as well. let (x = x, y = y, z = x + y /* maybe would work? */) { .... }
Yes, ES4 let expr/block works with multiple declarators in a comma-separated list in the head. Perhaps it will be re-championed for ES7, but we need to get basic let/const/class and function-in-block fully baked first.
Since we add more and more constructs to aid functional programming, it would be nice to see something like this
Array.prototype.head
Array.prototype.tail
Array.prototype.last
like in Haskell. Haskell lists are good at operations at the front of the list, so head and tail are fast there. But in JavaScript we don't have to iterate the entire Array to get the last item, I guess.
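For illustration, a minimal sketch of the suggested helpers as plain functions (they are not standard Array.prototype methods):

function head(arr) { return arr[0]; }
function tail(arr) { return arr.slice(1); }
// Unlike Haskell's last, this is O(1): arrays are directly indexable.
function last(arr) { return arr[arr.length - 1]; }

head([1, 2, 3]); // 1
tail([1, 2, 3]); // [2, 3]
last([1, 2, 3]); // 3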
Thanks and regards,
Sakthipriyan
That search (/(["']?)enumerable\1\s:\strue/
) would miss quite a few
intentional cases as well, because IIRC Object.create and friends default
to false for each of the descriptors, enumerable, configurable, and
writable. Maybe, that could be better run with this ast-types visitor or
similar? Note that I typed this on a phone, and thus is untested and
possibly with a lot of typos in it as well, given it's easy to miss a key.
// var count = 0; // defined in the enclosing scope
visitCallExpression: function (path) {
    var node = path.node;
    var callee = node.callee;
    var args = node.arguments;

    // Matches non-computed `Object.<name>` member expressions.
    function isObjectMethod(expr, name) {
        return expr.type === 'MemberExpression' &&
            !expr.computed &&
            expr.object.type === 'Identifier' &&
            expr.object.name === 'Object' &&
            expr.property.type === 'Identifier' &&
            expr.property.name === name;
    }

    // True if a property's key is `enumerable`, whether written as an
    // identifier or as a string literal.
    function isEnumerableKey(prop) {
        return (prop.key.type === 'Identifier' &&
                prop.key.name === 'enumerable') ||
            (prop.key.type === 'Literal' &&
                prop.key.value === 'enumerable');
    }

    // A descriptor literal yields a non-enumerable property when it sets
    // `enumerable: false` explicitly or omits `enumerable` altogether,
    // which is the defaulted case the regex search misses.
    function isNonEnumerable(desc) {
        if (!desc || desc.type !== 'ObjectExpression') return false;
        return !desc.properties.some(function (prop) {
            return isEnumerableKey(prop) &&
                prop.value.type === 'Literal' &&
                prop.value.value === true;
        });
    }

    if (isObjectMethod(callee, 'defineProperty')) {
        // Object.defineProperty(obj, key, descriptor)
        if (isNonEnumerable(args[2])) count++;
    } else if (isObjectMethod(callee, 'defineProperties') ||
               isObjectMethod(callee, 'create')) {
        // The second argument maps property names to descriptors.
        if (args[1] && args[1].type === 'ObjectExpression') {
            args[1].properties.forEach(function (prop) {
                if (isNonEnumerable(prop.value)) count++;
            });
        }
    }
    this.traverse(path);
},
From: Axel Rauschmayer <axel at rauschma.de> To: brendan at mozilla.org Cc: es-discuss list <es-discuss at mozilla.org> Date: Sun, 4 Jan 2015 03:17:54 +0100 Subject: Re: Implicit coercion of Symbols
Does it have to be a Reflect.* method? It could be Symbol.prototype.getDescription() or a getter.
On 04 Jan 2015, at 02:25, Brendan Eich <brendan at mozilla.org> wrote:
Rick Waldron wrote:
That example above is pretty compelling for throw always consistency.
With a new Reflect.* API for converting a symbol to its diagnostic/debugging string?
If you agree, please file a bugs.ecmascript.org ticket.
/be
A good name for such a getter would be, IMO, Symbol.prototype.description. It would also seem to fit with some other common JS idioms, such as Array.prototype.length, etc. It also feels clearer and cleaner as a getter than an instance method or function.
From: Fred Schott <fkschott at gmail.com> To: es-discuss at mozilla.org Cc: Date: Sun, 4 Jan 2015 23:06:56 -0600 Subject: Question regarding duplicate computed __proto__ properties
In Section B.3.1 on "__proto__ Property Names in Object Initializers" there is a paragraph explaining when duplicate properties will result in a syntax error. It says:
It is a Syntax Error if PropertyNameList of PropertyDefinitionList contains any duplicate entries for "__proto__" and at least two of those entries were obtained from productions of the form PropertyDefinition : PropertyName : AssignmentExpression .
Where PropertyName is defined as:
PropertyName[Yield,GeneratorParameter] :
  LiteralPropertyName
  [+GeneratorParameter] ComputedPropertyName
  [~GeneratorParameter] ComputedPropertyName[?Yield]
LiteralPropertyName :
  IdentifierName
  StringLiteral
  NumericLiteral
That paragraph (using the definitions provided) seems to assert that it is a syntax error if there are any duplicate uses of __proto__ with an IdentifierName, StringLiteral, or ComputedPropertyName. To translate this into an example, it seems to assert that in ES6 this is not valid:
var obj = { __proto__: somePrototype, ["__proto__"]: somePrototype }
// Error: SyntaxError
Is that correct? Step 6 of Section B.3.1 explicitly states that the computed ["__proto__"] does not have the same special behavior as the literal property. But from my understanding of other es-discuss topics & resources, the restriction on duplicate "__proto__" properties also does not apply to computed properties. If that is true, then the quoted paragraph above seems to be incorrect or misleading.
Can anyone clarify this? I may just be misunderstanding the docs and/or the recent discussions. Or it could be that the definition has changed around that quoted paragraph and it needs to be updated.
As far as my understanding goes, that rule should not apply. The rationale I guess is that having that inconsistency helps eliminate issues with dynamic properties and potential gotchas like these:
class A {}
class B {}
var foo = {};
var bar = {};
foo.__proto__ = A.prototype;
bar.__proto__ = B.prototype;
for (var prop in foo) {
bar[prop] = foo[prop];
}
// If this is false, then that's a problem.
assert(bar instanceof B);
Also, AFAICT, these should never throw, for the same reason:
class A {}
var proto = '__proto__';
{
__proto__: A.prototype,
[proto]: null
}
{
__proto__: A.prototype,
['__proto__']: null
}
From: Marius Gundersen <gundersen at gmail.com> To: Kevin Smith <zenparsing at gmail.com>, es-discuss <es-discuss at mozilla.org> Cc: Date: Mon, 5 Jan 2015 21:58:54 +0100 Subject: Re: (x) => {foo: bar}
One scenario where arrow functions are likely to return objects is in the map function of arrays and the pipe function for streams. For example:
[1,2,3]
  .map(x => {value: x, double:2*x})
  .filter(x => x.double < 5)
I guess, get ready to practice your Lispy parentheses-balancing skills? ;)
[1,2,3]
.map(x => ({value: x, double:2*x}))
.filter(x => x.double < 5)
(filter
  #(< (get % :double) 5)
  (map (fn [x] {:value x, :double (* 2 x)}) [1 2 3]))
In all reality, it's easier to tell people to simply put parentheses around an object literal than to tell people to avoid very specific nits. The one ambiguity that would cause the most problems in parsing would be the property shorthand syntax: {foo}. This is technically both a valid block and a valid object, so, given the right circumstances, that could rub against the halting problem, and would theoretically require reading up to the entire object to decide whether it is an object or block. So, something like this would require reading up to the function call to see it's an object and not a block (which is not exactly obvious), and the result is never seen until runtime:
x => {x, y, z, f() {}}
I'm skeptical on whether this will work on the implementation side.
From: Allen Wirfs-Brock <allen at wirfs-brock.com> To: "Fabrício Matté" <ultcombo at gmail.com> Cc: es-discuss <es-discuss at mozilla.org> Date: Sat, 17 Jan 2015 12:14:17 -0800 Subject: Re: A new ES6 draft is available
On Jan 17, 2015, at 11:57 AM, Fabrício Matté wrote:
Currently in ES6, the only reserved keywords that can appear immediately before a . are this and super. The this binding resolves to a value, so MemberExpressions make sense. The super keyword is being implemented in ES6, so there are no precedents to set expectations.
Any other reserved word followed by a . is a syntax error. So reserved words followed by a period and an identifier are one of the few extension alternatives we have available. And the syntax reads naturally: think of 'new.target' as meaning "give me the target value of the currently active new operator".
I agree new.target is very natural and pleasant to read. It just feels rather alien to see an operator at the beginning of a MemberExpression, which currently would only be allowed in this very specific scenario. Of course, if this syntax extension form would be useful for other use cases as Kevin and you have outlined, then I don't oppose it.
I suspect the hypothetical naive JS programmer postulated above wouldn't be aware of any of those concepts.
A naive one probably not, but a curious avid developer most definitely would. ;)
Hopefully they will say "what is new.target?", google it, and immediately find the answer.
You mean, land on a Stack Overflow answer with thousands of upvotes and very little explanation about why/how a MemberExpression can begin with an operator. Of course, if you interpret new as simply a ReservedWord token instead of an operator, then everything makes perfect sense.
The way I accomplished this in the grammar was to add the productions:
MemberExpression : MetaProperty
MetaProperty : 'new' '.' 'target'
So we could start talking about "meta properties" as a MemberExpression alternative and say new.target is a "meta property".
Allen
Is my understanding correct in that these two conditions are effectively equivalent?
class Foo {
constructor() {
if (new.target === null)
return new Foo();
console.log(new.target === Foo);
}
}
var global = Function('this')();
function Bar() {
if (this === global)
return new Bar();
console.log(this.constructor === Bar);
}
If so, then it would be an easy string transform to write, considering its small scope. (delimited by any operator except period on each side, keep parentheses balanced, done in one pass, potentially in a buffer - could even be in hand-written asm.js, great for node)
Your second example may break if the constructor is called via .call()/.apply(), or as a CallExpression : MemberExpression, or if it has been .bind()ed. Although these may look like corner cases, a good transform should cover them, especially CallExpression : MemberExpression, as it is very common in Node.js land to have constructors exported as properties of an exported object.
Btw, you're also missing a return before this in your "global" function body.
On Jan 19, 2015, at 5:32 AM, Fabrício Matté wrote:
Your second example may break if the constructor is called via .call()/.apply(), or as a CallExpression : MemberExpression, or if it has been .bind()ed. Although these may look like corner cases, a good transform should cover them, especially CallExpression : MemberExpression, as it is very common in Node.js land to have constructors exported as properties of an exported object.
This style of this testing is what was necessary to distinguish [[call]] from [[construct]] under the old ES6 @@create design. And the above edge cases were what complicated all of the built-in constructors (and required branding checks and initialization flags) to fully cover.
It was the complexity of correctly identifying those edge cases which led us to recommend that JS code should never try to do it, and that JS developers shouldn't try to define classes whose constructors did an implicit 'new' when called.
With 'new.target' that complexity all goes away and I see no particular reason to discourage constructors that 'new' themselves when called as functions.
With 'new.target' that complexity all goes away and I see no particular reason to discourage constructors that 'new' themselves when called as functions.
Do you think that's what the new built-in constructors (Map, Set, etc.) should do, then? (Act as factories when called).
On Jan 19, 2015, at 10:43 AM, Kevin Smith wrote:
With 'new.target' that complexity all goes away and I see no particular reason to discourage constructors that 'new' themselves when called as functions.
Do you think that's what the new built-in constructors (Map, Set, etc.) should do, then? (Act as factories when called).
They're currently spec'ed to throw. I wouldn't change that for ES6. It could change in the future.
They're currently spec'ed to throw. I wouldn't change that for ES6. It could change in the future.
Granted.
However, I don't think we should consider the design complete without an answer to this question, regardless of what gets specced for ES6. The answer may carry implications "in the large".
To illustrate my thinking, if we answer that new built-ins should carry on the "factory when called" tradition of the pre-ES6 built-ins, then we are setting a very strong precedent for how user classes ought to behave. In that case, we can expect that users will want to implement "factory when called" as a default behavior for their classes. From that, we can foresee that the following pattern:
if (!new.target) return new Whatever(...args);
will become a kind of boilerplate. Obviously, we want to avoid boilerplate as a general rule. When taken together, these considerations might have a bearing on the overall design.
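For concreteness, a minimal sketch of that boilerplate in an ordinary constructor function. (Note the assumption, which matches where the spec eventually landed, that class constructors throw on [[Call]], so a check like this is only useful in function-style constructors.)

// "Factory when called": construct regardless of whether the caller
// used 'new'. new.target is a falsy value on a plain [[Call]].
function Point(x, y) {
  if (!new.target) return new Point(x, y);
  this.x = x;
  this.y = y;
}

Point(1, 2) instanceof Point;     // true
new Point(1, 2) instanceof Point; // true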
So I think it will be a good exercise to think a "few steps ahead" on this question.
(I'm really pleased with this direction in the subclassing design, btw!)
Kevin Smith wrote:
They're currently spec'ed to throw. I wouldn't change that for ES6. It could change in the future.
Granted.
However, I don't think we should consider the design complete without an answer to this question, regardless of what gets specced for ES6.
The answer may carry implications "in the large". To illustrate my thinking, if we answer that new built-ins should carry on the "factory when called" tradition of the pre-ES6 built-ins, then we are setting a very strong precedent for how user classes ought to behave. In that case, we can expect that users will want to implement "factory when called" as a default behavior for their classes. From that, we can foresee that the following pattern:
if (!new.target) return new Whatever(...args);
will become a kind of boilerplate. Obviously, we want to avoid boilerplate as a general rule. When taken together, these considerations might have a bearing on the overall design.
So I think it will be a good exercise to think a "few steps ahead" on this question.
Agreed. If we believe most class constructors will construct when called, we should bias the design that way. ASAP (which in my view includes ES6).
If we want JS to impose a new tax on constructor calls, OTOH, for some unstated (=== unknown) reason, let's be clear on that decision. If not on the reason :-P.
(I'm really pleased with this direction in the subclassing design, btw!)
In general, I am too. In particular, re: future-proofing and normal vs. abnormal or happy vs. less-happy path design ergonomics (as I think you agree), I'm not.
In that case, we can expect that users will want to implement "factory when called" as a default behavior for their classes. From that, we can foresee that the following pattern:
if (!new.target) return new Whatever(...args);
will become a kind of boilerplate. Obviously, we want to avoid boilerplate as a general rule.
+1!
I personally consider new as user code pollution. I have been considering using classes but exporting factories in my libraries' and packages' code. The initial idea was:
class Foo {/*...*/}
module.exports = function(...args) {
return new Foo(...args);
};
But the sample above has obvious problems:
- It becomes a tedious boilerplate to wrap every single class into a function export;
- It can't be transpiled to ES5. Rest parameters are transpiled to .apply(), which can only do [[Call]] and not [[Construct]].
(forgot to CC es-discuss before oops, let's see if this forward goes correctly now)
On 19 January 2015 at 22:41, Fabrício Matté <ultcombo at gmail.com> wrote:
In that case, we can expect that users will want to implement "factory when called" as a default behavior for their classes. From that, we can foresee that the following pattern:
if (!new.target) return new Whatever(...args);
will become a kind of boilerplate. Obviously, we want to avoid boilerplate as a general rule.
+1!
I personally consider new as user code pollution. I have been considering using classes but exporting factories in my libraries' and packages' code. The initial idea was:
class Foo {/*...*/} module.exports = function(...args) { return new Foo(...args); };
But the sample above has obvious problems:
- It becomes a tedious boilerplate to wrap every single class into a function export;
- It can't be transpiled to ES5. Rest parameters are transpiled to .apply(), which can only do [[Call]] and not [[Construct]].
Point 2 is incorrect (e.g. new (Foo.bind.apply(Foo, [undefined].concat(args)))).
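To unpack that one-liner, a sketch of the same trick as an ES5 helper; newApply is a hypothetical name:

// bind ignores its bound 'this' when the resulting function is used
// with [[Construct]], so undefined works fine as the first element.
function newApply(Ctor, args) {
  var Bound = Function.prototype.bind.apply(Ctor, [undefined].concat(args));
  return new Bound();
}

function Foo(a, b) { this.sum = a + b; }
newApply(Foo, [1, 2]).sum;            // 3
newApply(Foo, [1, 2]) instanceof Foo; // true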
Point 2 is incorrect (e.g. new (Foo.bind.apply(Foo, [undefined].concat(args)))).
I stand corrected. That's very good news, thanks! Nice and elegant solution, by the way.
I should have taken the time to look at how current transpilers handle that. Your solution is similar to Traceur's generated output, but it is also interesting to see how 6to5 spreads arguments in a NewExpression.
Oh, I also had a typo in the previous mail (s/rest parameters/spread/), guess I was a bit too sleepy last night.
Reason for the verbose and quirky output in 6to5 is due to performance, see 6to5/6to5#203
From: Brendan Eich <brendan at mozilla.org> To: Axel Rauschmayer <axel at rauschma.de> Cc: Arthur Stolyar <nekr.fabula at gmail.com>, es-discuss list <es-discuss at mozilla.org> Date: Thu, 22 Jan 2015 17:32:48 -0800 Subject: Re: JavaScript 2015?
I wouldn't hold my breath. Sun was not ever in the mood, even when I checked while at Mozilla just before the Oracle acquisition closed. Also, "the community" cannot own a trademark.
Trademarks must be defended, or you lose them. This arguably has happened to JavaScript. Perhaps the best course is to find out (the hard way) whether this is so.
/be
Axel Rauschmayer wrote:
This reminds me: Axel (not Alex) cannot recommend "JavaScript 2015" for anything near the Ecma standard, because trademark. :-/
Ah, good point. It’d be lovely if whoever owns the trademark now (Oracle?) could donate it to the community. Or the community buys it via crowd-funded money.
Would this constitute a good question to ask Oracle formally? Probably the best and safest way to check.
The current spec being worked on to resolve this problem is at whatwg.github.io/loader. It's still under construction, but it's being written with browser and Node interop in mind.
From: John Barton <johnjbarton at google.com> To: Glen Huang <curvedmark at gmail.com> Cc: monolithed <monolithed at gmail.com>, es-discuss <es-discuss at mozilla.org> Date: Thu, 5 Feb 2015 07:53:47 -0800 Subject: Re: include 'foo/index.js' or include 'foo'?
The following solution has worked very well for us:
import './foo/index.js'; means resolve './foo/index.js' relative to the importing file.
All of the rest mean look up 'foo' in the developer's mapping of names, replacing 'foo' with a path that is then used to resolve the import.
To be sure, 'foo', 'foo/index' and 'foo/' would likely fail after lookup since they don't name files.
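A minimal sketch of that two-rule scheme, assuming a hypothetical mapping table and resolve() helper (neither is a specified API):

// Developer-supplied mapping of bare names to paths.
var moduleMap = {
  foo: 'packages/foo/lib'
};

function resolve(specifier, importerDir) {
  // Rule 1: './x' or '../x' resolves relative to the importing file.
  if (/^\.\.?\//.test(specifier)) {
    return importerDir + '/' + specifier;
  }
  // Rule 2: everything else goes through the name mapping.
  var segments = specifier.split('/');
  var mapped = moduleMap[segments[0]];
  if (!mapped) throw new Error('no mapping for ' + segments[0]);
  return [mapped].concat(segments.slice(1)).join('/');
}

resolve('foo/index.js', 'src'); // 'packages/foo/lib/index.js'
resolve('foo', 'src');          // 'packages/foo/lib', likely not a file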
(This kind of thing cannot be "up to the host". If TC39 passes on deciding, then developers will.)
jjb
On Thu, Feb 5, 2015 at 5:01 AM, Glen Huang <curvedmark at gmail.com> wrote:
I believe this is out of the scope of ecmascript. It’s up to the host to determine how the paths are resolved.
See people.mozilla.org/~jorendorff/es6-draft.html#sec-hostnormalizemodulename
On Feb 5, 2015, at 8:51 PM, monolithed <monolithed at gmail.com> wrote:
I could not find an answer in the specification regarding the following cases:
import './foo/index.js'
import 'foo/index.js'
import 'foo/index'
import 'foo'
import 'foo/'
Is there a difference?
Node.js lets you create an 'index.js' file, which indicates the main include file for a directory.
So if you call require('./foo'), both a 'foo.js' file and a 'foo/index.js' file will be considered; this goes for non-relative includes as well.
---------- Forwarded message ---------- From: Andy Earnshaw <andyearnshaw at gmail.com> To: Frankie Bagnardi <f.bagnardi at gmail.com> Cc: es-discuss <es-discuss at mozilla.org> Date: Thu, 5 Feb 2015 18:06:49 +0000 Subject: Re: Should `use strict` be a valid strict pragma?
On 5 Feb 2015 15:06, "Frankie Bagnardi" <f.bagnardi at gmail.com> wrote:
I think any issues with that are imagined. Languages have rules, and of the people who both know what 'use strict' does and are using es6 syntax, they're very unlikely to make the mistake.
Sure, it's theoretical at this point but not unimaginable. Eventually everyone will be using "es6 syntax", and there are plenty of blogs and books around that explain the benefits of strict mode.
I don't see people using template literals for arbitrary strings... it could happen but it probably won't.
Not everyone, but I think it's likely that some will. If you want to add a variable ref to a static string, for instance, it's simpler if you don't have to change the delimiters to backticks. We can probably only speculate right now.
Mathias makes a good point also, it's not strings that equal the string 'use strict', it's exactly two possible arrangements of characters.
And that's obvious to us, but won't be to everyone. I'm on my phone right now so it's hard to check, but I think you'll find more sources explaining the directive as a string literal than as an arrangement of characters.
My immediate reaction was that it wasn't worth it, but I kinda think that if it is easy enough to implement then any risk would be avoided by doing so.
---------- Forwarded message ---------- From: Steve Fink <sphink at gmail.com> To: es-discuss at mozilla.org Cc: Date: Thu, 05 Feb 2015 11:20:12 -0800 Subject: Re: Should `use strict` be a valid strict pragma?
On 02/05/2015 05:12 AM, Andy Earnshaw wrote:
I think you're missing the point Leon is trying to make. He's saying that, in ES6, we have a new way to write strings. In some ways, these more powerful strings may condition some people to use them as their main string delimiter. An unsuspecting person may liken this to PHP's double quotes vs single quotes, thinking that the only difference is that you can use `${variable}` in strings that are delimited with backticks, but other than that everything is the same. When they write this in their code:
`use strict`;
They may introduce bugs by writing non-strict code that doesn't throw when it should. Adding it to the spec wouldn't be difficult and it would avoid any potential confusion or difficult-to-debug issues. It's definitely easier than educating people, IMO.
'use strict' and "use strict" are magic tokens and should stay that way,
not propagate to other ways of writing literal strings. Literal strings are different things, which happen to share the same syntax for backwards-compatibility reasons.
If people switch to backticks for all their literal strings, so much the better -- then single and double quotes will only be used for directives, and there will be less confusion. (I don't actually believe that. At the very least, I don't expect JSON to allow backticks anytime soon. Nor do I think that using backticks indiscriminately is good practice.)
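For what it's worth, a minimal sketch of the difference, assuming a non-strict surrounding script:

// A directive prologue only recognizes string literals written with
// quotes; a template literal is an ordinary expression statement, so
// this function silently stays sloppy.
function f() {
  `use strict`;
  undeclared = 1; // no ReferenceError: still sloppy mode
}
f();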
---------- Forwarded message ---------- From: Axel Rauschmayer <axel at rauschma.de> To: Steve Fink <sphink at gmail.com> Cc: es-discuss list <es-discuss at mozilla.org> Date: Thu, 5 Feb 2015 20:24:24 +0100 Subject: Re: Should `use strict` be a valid strict pragma?
Also: given that modules are implicitly strict, you will hardly ever use the strict directive in ES6.
Ignore that email... :(
On Feb 6, 2015 3:21 AM, "Isiah Meadows" <impinball at gmail.com> wrote:
The current spec being worked on to resolve this problem is at
whatwg.github.io/loader. It's still under construction, but it's being written with browser and Node interop in mind.
- Re: How to fix the `class` keyword (Allen Wirfs-Brock)
One of the possible enhancements that has been talked about is to implicitly treat a [[Call]] of a class constructor as an implicit `new`, just like you are suggesting.
Doesn't this need to be configurable, as Brendan Eich suggested? gist.github.com/ericelliott/1c6f451b2ed1b634c2f2
If implicit `new` is the only behavior, it blocks the factory refactor path. gist.github.com/ericelliott/e994ee541d0ed365f5fd
If skipping the `new` treatment is the only behavior, it breaks backward compatibility with constructors expecting the `new` treatment.
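For reference, the ES2015 status quo that both refactor paths have to contend with (this much is specified behavior, not speculation):

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}
new Point(1, 2); // ok
Point(1, 2);     // TypeError: class constructor cannot be invoked without 'new'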
~ee
It'd be nice if we could have used `with` to set an arbitrary object to `@`, instead of just `this`.
with (jQuery){ @ajax(options) }
People use delegation far too often.
Wait...this got me thinking... The proposal itself doesn't bring along a lot of merit, but it seems like it could be a great stepping stone to a limited pattern-matching syntax. This would probably be a little more justifiable IMHO than merely a custom destructuring syntax. Maybe something like this:
Type[Symbol.pattern] = (obj) => {
return [obj.a, obj.b];
}
const Other = {
[Symbol.pattern]: obj => obj,
}
class None {
static [Symbol.pattern](obj) {
return obj
}
}
// Pattern matching, signaled by `in` here
switch (object) in {
case Type([a, b]): return a + b
case Other({a, b}): return a * b
case None: return undefined // Special case, no identifier initialized
}
// Extensible destructuring, easy to implement with the pattern
// matching
let Type([a, b]) = object
let Other({a, b}) = object
In the destructuring phase for both, I was thinking about the following semantics to assert the type, based on `typeof` and the prototype. This will help engines optimize it, and it gives some type safety for all of us.
function _checkProto(object, Type) {
// Note: Type[Symbol.pattern] must be callable
if (typeof Type[Symbol.pattern] !== 'function') throw new TypeError()
if (typeof Type === 'function') {
if (Type === Array) {
return Array.isArray(object)
} else {
return object instanceof Type
}
} else {
return Object.prototype.isPrototypeOf.call(Type, object)
}
}
function isInstance(object, Type) {
switch (typeof object) {
case 'object': return object != null && _checkProto(object, Type)
case 'function': return Type === Function
case 'boolean': return Type === Boolean
case 'number': return Type === Number
case 'string': return Type === String
case 'symbol': return Type === Symbol
case 'undefined': return false
}
}
Finally, get the result and do a basic variable pattern assignment, the LHS being the operand, and the RHS calling `Type[Symbol.pattern]`.
The `switch` statement example would (roughly) desugar to the following:
switch (true) {
case isInstance(object, Type): {
  // braces keep each case's `let` bindings in their own scope
  let [a, b] = Type[Symbol.pattern](object)
  return a + b
}
case isInstance(object, Other): {
  let {a, b} = Other[Symbol.pattern](object)
  return a * b
}
case isInstance(object, None):
  return undefined
}
The destructuring examples would (roughly) desugar to this:
if (!isInstance(object, Type)) throw new TypeError()
let [a, b] = Type[Symbol.pattern](object)
if (!isInstance(object, Other)) throw new TypeError()
let {a, b} = Other[Symbol.pattern](object)
The type assertions will help engines in optimizing this, and it'll also make this safer. It also just makes sense for pattern matching.
As a side effect, you can get the value without destructuring (i.e. the literal result of `Symbol.pattern`) via this:
let Map(m) = someMap
let m = Map[Symbol.pattern](someMap)
I, myself, came up with a Scala-inspired case class concept gist.github.com/impinball/add0b0645ce74214f5aa based on this, and used it to make try-catch handling a little more Promise-like, which I know quite a few TypeScript users would eat up in a hurry Microsoft/TypeScript#186 (particularly the sum type implementation). I also created a little toy Option/Maybe implementation gist.github.com/impinball/4833dc420b60ad0aca73 using pattern matching to ease `null`/`undefined` handling.
And also, check out my gist gist.github.com/impinball/62ac17d8fa9a20b4d73d for a proposed `Symbol.pattern` prolyfill for the primitive types. That would allow for things like this:
switch (list) in {
case Array([a, b, c]): return doSomething(a, b, c)
case Map({a, b, c}): return doSomethingElse(a, b, c)
default: return addressBadType(list)
}
---------- Forwarded message ---------- From: Andreas Rossberg <rossberg at google.com> To: "Samuel Hapák" <samuel.hapak at vacuumapps.com> Cc: es-discuss <es-discuss at mozilla.org> Date: Tue, 4 Aug 2015 14:26:45 +0200 Subject: Re: Extensible destructuring proposal On 31 July 2015 at 20:09, Samuel Hapák <samuel.hapak at vacuumapps.com>
wrote:
So, do you still have objections against this proposal? Could we
summarize them?
@Andreas, do you still think that there is a category error involved?
If you want to overload existing object pattern syntax, then yes,
definitely. I strongly disagree with that. It breaks regularity and substitutability (you cannot pass a map where a generic object is expected).
As for a more Scala-like variant with distinguished syntax, I'm fine with
that semantically. But I still don't buy the specific motivation with maps. Can you give a practical example where you'd want to create a map (immutable or not) for something that is just a record of statically known shape? And explain why you need to do that? Surely it can't be immutability, since objects can be frozen, too.
To be clear, I wouldn't reject such a feature outright. In fact, in a
language with abstract data types, "views", as they are sometimes called in the literature, can be valuable. But they are also notorious for a high complexity cost vs the minor convenience they typically provide. In particular, it is no harder to write
let {a: x, b: y} = unMap(map)
as you can today, than it would be to move the transformation into the pattern:
let Map({a: x, b: y}) = map
So I encourage you to come up with more convincing examples in the
JavaScript context. Short of that, I'd argue that the complexity is not justified, at least for the time being.
I actually used template strings for templating in two of my projects. You can take a look here:
mohsen1/json-schema-view-js/blob/master/src/index.js#L59-L175
Two main problems I had with this were:
- There is no `if` condition in template strings. I had to hack my way around it with yet another template string function that returns an empty string if the condition is falsy: mohsen1/json-schema-view-js/blob/master/src/helpers.js#L9-L21
- Falsy values are not rendered as you might expect (an empty string):
let o = undefined;
console.log(`${o}`); // => 'undefined'
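A small illustrative workaround for both complaints, rendering null and undefined as empty strings (a sketch; the `show` helper is hypothetical, not the one in the linked repo):

const show = (value) => (value == null ? "" : String(value));

let o;
console.log(`value: ${o}`);       // "value: undefined"
console.log(`value: ${show(o)}`); // "value: "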
On 14 Sep 2015, at 2:59 PM, Mohsen Azimi <me at azimi.me> wrote:
I actually used template strings for templating in two of my projects. You can take a look here:
mohsen1/json-schema-view-js/blob/master/src/index.js#L59-L175
Will take a look.
Two main problems I had with this were:
- There is no `if` condition in template strings. I had to hack my way around it with yet another template string function that returns an empty string if the condition is falsy: mohsen1/json-schema-view-js/blob/master/src/helpers.js#L9-L21
Wouldn't you use the ternary operator? E.g.
condition ? iftrue : iffalse
- Falsy values are not rendered as you might expect (an empty string):
let o = undefined;
console.log(`${o}`); // => 'undefined'
I think that behaviour is what I'd expect (and different toString behaviour in template strings would be confusing).
Being one of those humble ~0.017% of consumers of O.o, I'd like to raise a vote against the removal of the feature, too. Having implemented my own two-way binding library utilizing O.o on one side and MutationObserver on the other, I must say that it is a very, very convenient facility, regardless of the callback-oriented flow, the microtask's 'to be done shortly' flavor, etc. The low usage may easily be explained by the narrow support matrix in browsers, and I'm sure that this number would multiply easily once at least one more of the majors added support for it and some popular library utilized it as an engine.
Actually, some time beforehand I even posted here a call for enhancing O.o to support 'deep' observe functionality (which currently must be implemented applicatively on each object in the observed tree), much like MutationObserver does, and I still believe it would be the right thing to do.
Please consider the withdrawal once again; the fact that here and there people write observe-like logic with getters/setters, memory-state digesting and the like speaks for a real need and a lacuna to be filled.
-- Guller Yuri.
Yeah, I wanted to avoid anything too complex in favour of a simple, self-contained approach. Which is why I regret bringing it up, since I realise I was probably trying to adhere to the virtues of simplicity a little too strongly.
For anybody interested, it was for parsing command-line input:
Alhadis/MA-Scraper/blob/master/src/main.js
I caved and simply whitelisted the expected class names instead. Undesirable, but more favourable than an overly-convoluted class map.
Hah! What's wrong with APL? We could bring back the ol' typeballs and call 'em ECMABalls.
Seriously though, I find the |> operator echoes the traditional Unix pipe
quite well (obviously we can't use the bitwise | for self-explanatory reasons). I think most users would find it the easiest to become familiar with.
Also don't forget that the "OR" operator can also be used for bitwise assignments (variable |= 20, etc). How much of an issue is it if it shares a character with a non-bitwise (bitunwise?) operator?
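For anyone skimming, the rough equivalence behind `|>` (shown desugared, since no engine ships the operator):

// x |> f means f(x), so 5 |> double |> increment reads left to right:
const double = (n) => n * 2;
const increment = (n) => n + 1;
console.log(increment(double(5))); // 11 -- what 5 |> double |> increment would yield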
Hi, everyone. New guy here, sorry to pollute the mailing list. I was just wondering, can't the fat arrow operator be overloaded to include pipelines? The semantics are already very close -- in both cases it defines inputs and outputs -- and it would look similar to Golang's channel operator and Haskell's monadic >>= operator. Off the top of my
head, I don't see any serious ambiguity. If a left-side function is invoked, the operator is treated as a pipeline; if not, it's an arrow function.
I do fully agree with the notion that enums should generate Symbols by default... however, it seems unreasonable to prevent users from assigning other types as well.
Should enums also offer programmers the ability to assign explicit values, as per traditional enum constructs? For instance, in TypeScript, coders are able to assign values manually, or even elevate the initial index that enumerations are counted from:
const colours = {
  Red = 1,
  Green, // 2
  Blue   // 3
};
We can probably do away with the initial index elevation (since enum'd names would default to the Symbol representation of the identifier, anyway), but I imagine many developers would expect to be able to assign integers in JS enums the same way they can in C, et al.
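For comparison, here is what a Symbol-defaulting enum might desugar to in today's JS -- a sketch only, since no enum syntax exists in the language:

const Colours = Object.freeze({
  Red: Symbol("Red"),
  Green: Symbol("Green"),
  Blue: Symbol("Blue"),
});

Colours.Red === Colours.Red;   // true
Colours.Red === Symbol("Red"); // false -- every Symbol() call yields a unique value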
On Wed, Dec 16, 2015 at 4:14 AM, John Gardner <gardnerjohng at gmail.com> wrote:
`enum` is probably the wrong keyword for the problem we're trying to solve by desugaring unique identifier declarations to Symbol creation. We're defining constants that should have no relation to each other. We're not enumerating anything; it's supposed to be a 'secret' that Symbols are special globally defined integer identities behind the scenes.
I'd personally add both?
sym RED, GREEN, BLUE; # const RED = Symbol('RED'), GREEN = Symbol('GREEN'), BLUE = Symbol('BLUE');
enum { RED = 5, GREEN, BLUE }; # const RED = 5, GREEN = 6, BLUE = 7;
`enum` has precedent -- C-like languages indeed use integers by default, but see Rust for a generalized form: doc.rust-lang.org/book/enums.html -- a category-theory "sum" type -- and IMHO it handily beats `sym` or `sum`.
On Wed, Dec 16, 2015 at 5:44 AM, Brendan Eich <brendan.eich at gmail.com> wrote:
I just mean that using `enum` to define symbols/identifiers sounds odd, as enum is short for 'enumerate', no? We wouldn't be enumerating symbols, as that suggests an order behind the symbol identifiers/references, imo :>
Rust's enum/sum value type sounds like a tagged union... the type flag is just hidden?
A union is a kind of sum type -- either this or that. Order may matter for reflective use-cases, indeed. That's a good point. Whether symbol values by default are best wants a bit more debate too, I think.
Chiming in only with one question: is interoperability between programming languages and different environments a concern at all?
I think having unique identifiers like Symbols is a better solution too, but just yesterday I made a PR to convert the V8 Enum type to GObject enums using its integer value [1], because that's also how it works the other way round [2].
How would a JS Symbol used as an enum value be represented in Emscripten-generated code, and how could languages with Int32-based enums transpile their enums to JS?
Maybe I am overthinking this and it's not a real issue?
[1] WebReflection/gnode/commit/56c5801866452c1e4973ed0c42f80dcda9d3d8c6 [2] WebReflection/gnode/blob/master/src/value.cc#L279
I think operator overloading has the potential to introduce more problems than it solves.
JavaScript is a versatile language, yes, but part of that versatility rests on simplicity. Overloading operators might make writing dynamic code easier, but it also becomes a lot harder to discern an author's intent when values are computed unexpectedly. For example, when one typecasts something to an integer, they'd expect it to follow JavaScript's usual rules for coercing values to Numbers. Code becomes harder to predict, and all for very little gain.
I really don't think it's worth the added complexity. Just because we can, doesn't mean we should.
/* 2¢ */
If permutations of operators are a problem, couldn't they be resolved by assigning symbols to symbols?
For instance, to override a binary operator, one could just
Symbol['>>']['='] // >>=
Or something.
I think some simple way of getting the last element is long overdue in JS. You really shouldn't need to type arr[arr.length - 1] every time you want the last element. I don't care if it's .last() or .nth(-1) or .end(), as long as it's something short and memorable.
Remember, this is something you do all the time. Imagine the amount of productivity lost by all JS developers having to type out myActualVariableName[myActualVariableName.length - 1] every time... You're talking millions of dollars worth of wasted effort.
End the madness now!
</rant>
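A minimal sketch of one possible shape -- a negative-index accessor, here called `nth` (the name and design are hypothetical, not an actual proposal):

Object.defineProperty(Array.prototype, "nth", {
  value(n) {
    // Negative indices count back from the end, so nth(-1) is the last element.
    return this[n < 0 ? this.length + n : n];
  },
  writable: true,
  configurable: true
});

[10, 20, 30].nth(-1); // 30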
I was entirely unaware of Intl.NumberFormat("it-u-nu-roman")! I don't have a mac, and those examples don't seem to work in node or firefox or chromium, but I'll take your word for it.
As for zero, for what it's worth, my implementation stringifies that as an empty string, which parses as NaN. Seemed the most sane given my non-expertise. I don't cap values; a number can have all the M's it needs. And yes, clearly fractions and negatives etc aren't represented.
Does Intl do anything to parse string representations? At first glance it seems like a one-way interface for displaying stuff rather than actually converting numbers.
"Just use a library" is a perfectly reasonable response. I'm a C guy at heart, so keeping a language itself free of frivolities feels correct. That said, the existing radix argument when parsing or toString'ing a number gives us a little peek at a world of numbers that goes beyond the base-10 Arabic digits we all use daily. Expanding on it with some well-defined behaviors for handling other representations of numbers doesn't seem like much of a stretch.
That's some interesting stuff kdex! You know a lot more about Roman numerals than I. I was hoping I'd learn some stuff like that; thank you.
Your point about Roman being nonpositional and therefore a bad fit for radix is well taken. That convinces me that at least this implementation is wrong. (The other arguments in my mind amount to "well this could get tricky, so we'd better not even try". :^)
"Nobody uses Roman numerals" is, I think, the main, real reason this won't happen. English is an even more bodged-up mess than Roman numerals, but people use it, so it's important. Jav- oops - EcmaScript itself can fairly be called messy, but humans use it a great deal, so it's important. Someday, AIs doing engineering on quantum computers will argue about whether it would make sense to sneak in a quick mode to simulate ye olde EcmaScript. You know, just as a joke. :^)
Too many operators can become quite confusing; having worked with many functional languages, I find a large number of operators more of a hindrance than a help. I'm very pro functional operators when they help me read the code explicitly rather than implicitly. For example, the `|>` pipeline operator helps you read the code in the order of execution. With the current `|>` operator proposal you could write something similar with:
const sum = numbers |> (numbers) => reduce(numbers, (a, b) => a + b);
Passing numbers to an anonymous closure so you can have multiple arguments works quite elegantly.
Just throwing in my 2 cents.
I would argue that this represents such a fundamental change to how the language works that it would actually amount to a new language, even though the types are optional. One problem is in how this would be implemented. Making types optional introduces a lot of overhead in type checking. Such checks would have to occur at runtime, since ES is a language capable of self-modification. What's more, the weight of all these checks happening at runtime would impose a serious performance penalty on run speed.
On the flip side, `class` already provides some of what you want to do. The proposal for private fields brings class even closer. If decorators are allowed to be applied to both non-objects and function parameters, then decorators could be used to close the gap. All that would be needed is to define a decorator for each desired type. The decorators would take care of the type checks while serving the documentation purposes you're saying would be one of the biggest features.
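What such a type-checking decorator might boil down to at runtime -- a plain higher-order-function sketch with hypothetical names, not any actual decorator proposal's API:

function int32Checked(fn) {
  return function (...args) {
    for (const a of args) {
      // Treat "int32" as: a number that the |0 coercion leaves unchanged.
      if (typeof a !== "number" || (a | 0) !== a) {
        throw new TypeError("expected int32, got " + typeof a);
      }
    }
    return fn.apply(this, args);
  };
}

const doSomething = int32Checked(function (param) { /* do something here */ });
doSomething(32);       // ok
// doSomething("foo"); // TypeError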
Having used many typed languages before, I cannot deny the usefulness of types. However, where Javascript is concerned, mixing typed and untyped variables leads to complexities that could be flatly impossible to optimize. Have you considered what would happen if someone does something like this:
function doSomething(param: int32) {
//Do something here
}
var a = 32;
doSomething(a);
a = "foo";
doSomething(a);
The engine would not be able to optimize this... ever. Every call would involve a type check before the call is made since the type of the parameter cannot be guaranteed to be converted to an int32. The engine would either need to disallow such a call from an untyped value, or type check every time.
If you can imagine implementing types as syntactic sugar over what can already be done in ES6 (maybe with the inclusion of ES7 or proposed features) you might find a bit less push-back.
If optional types in JS were implemented like they are in Flow, the types would be stripped before runtime. They are only there for pre-runtime type checking. So nothing related to optimization would change, and there would be no runtime type checking.
R. Mark Volkmann Object Computing, Inc.
That is why we have the websites definitelytyped.org and flowtype/flow-typed. Sure, they don't have type definitions for all libraries, but more are being added all the time, and developers can provide their own definitions to fill in the gaps. Being able to use types even when some libraries do not have type definitions is still far better than not using types at all.
R. Mark Volkmann Object Computing, Inc.
And to add to that, TypeScript also supports in-package type definitions, too. (Quite a few libraries elect to do this instead.)
Isiah Meadows me at isiahmeadows.com
Your statements are no less true for the essential internal methods than they are for the traps of a [[ProxyHandler]]. That's my point: this work is already being done using the internal methods. I'm just asking that, where calls to internal methods exist, if the object on which the call was made is a Proxy, then all calls to the internal methods be made via the [[ProxyHandler]] of that Proxy object. None of the storage requirements to validate the invariants would change.
An optimization would be great because in comparison to the existing concat method, rest/spread is significantly slower at the moment (see jsperf.com/single-array-composition)
On Wed, May 23, 2018 at 5:56 PM, Alexander Lichter <es at lichter.io> wrote:
There's a limit to how much optimization can address the fact that `[...original, ...additions]` has to create a new array. And creating the array is significant, as we can see in the difference between `concat` and `push`: jsperf.com/concat-vs-push-number-42
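The shapes being compared, for concreteness (illustrative, not a benchmark):

const original = [1, 2, 3];
const additions = [4, 5];

const viaSpread = [...original, ...additions]; // allocates a new array
const viaConcat = original.concat(additions);  // also allocates a new array
original.push(...additions);                   // mutates in place; no new array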
I've often wanted an `Array.prototype.append` (often enough that I've been known to add it on occasion; obviously only in app/page code, not in lib code). It would be a good addition in my view. I tend to suspect the name isn't web-safe, though, so that would need investigation and design.
The polyfill is trivial:
Object.defineProperty(Array.prototype, "append", {
  value: function(...sources) {
    // Push the entries of each array-like source onto this array, in place.
    for (let o = 0, olen = sources.length; o < olen; ++o) {
      const source = sources[o];
      for (let i = 0, ilen = source.length; i < ilen; ++i) {
        this.push(source[i]);
      }
    }
    // Unlike push (which returns the new length), return the array for chaining.
    return this;
  },
  writable: true,
  configurable: true
});
I've avoided iterators and such as a first pass at optimization (and it seems to do okay: jsperf.com/push-vs-append-number-42/1). I didn't use `push.apply` because of the limits it has in some implementations. This version assumes all entries are array-like and should have their entries added to the array on which it's called. There are a lot of other designs one could make, though: just one argument (you can always chain); a flattening version; etc., etc.
-- T.J. Crowder
`array.push(...sources)`, not sure why we'd need "append".
On Wed, May 23, 2018 at 1:05 PM, Jordan Harband <ljharb at gmail.com> wrote:
From the original email (a bit buried and hard to find due to broken
threading, admittedly):
Has anyone ever suggested Array.prototype.append as an Array.prototype.push which returns the array itself?
The point is x.append(y) returning x, whereas x.push(y) returns the new length.
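The difference in one snippet (push's return value is specified today; append is the proposal):

const x = [1, 2];
x.push(3);      // => 3, the new length
// x.append(3); // => x itself under the proposal, enabling x.append(3).append(4)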
Just one thought ...
Object.defineProperty(Array.prototype, "append", {
  value(...sources) {
    // Note: unlike the entry-copying version above, this pushes the
    // source arguments themselves rather than their entries.
    this.push(...sources);
    return this;
  },
  writable: true,
  configurable: true
});
... but also another one ...
(array.push(...sources), array)
It seems too little of an improvement to become a new Array method. I am already confused by includes and contains; append alongside push, when Sets have add, might cause me unnecessary headaches.
We have two threads going on this with the same (and differing) points being raised in both; let's pick it up in the better-named "Array.prototype.append?" thread.
-- T.J. Crowder
I partially agree with Michael Luder-Rosefield.
The main reason I started this thread is consistency, because I can't get the logic behind what Michael listed above.
I don't think I can support the idea of `iterOrder`. It looks like an attempt to create a silver bullet, and I don't think it's a handy way to perform these examples. It looks more universal than handy to me.
I like the idea of expanding this proposal to the `find` method as well.
Thanks.
P. S. This is my first proposal, so I apologize for the bad formatting and forwarding.
Is there a good way to get involved in the development of this proposal?
I have no idea which proposal you're talking about.
---------- Forwarded message ---------- From: guest271314 <guest271314 at gmail.com>
To: Ethan Resnick <ethan.resnick at gmail.com>
Cc: es-discuss at mozilla.org Bcc: Date: Tue, 11 Jun 2019 00:30:27 +0000 Subject: Re: Proposal: rest operator in middle of array
but imo it really masks the author's intention (in addition to being more verbose and less efficient) when compared to:
const [...xs, y] = someArray;
No intention is "masked". Yes, it is more verbose, though it demonstrates that the expected output is already possible using the current version of JavaScript.
"Less efficient" is a statement which requires evidence. Compared to what? How is "efficiency" benchmarked, measured, and compared?
The proposed solutions for argument lists strike me as less-good.
Again, the term "less-good" is entirely subjective. There is no "good" or "bad" code, there is only code. The opinion of "good" or "bad" is in the individual human mind, not the code, and is subject to change from one moment to the next.
function match(matchers, data = matchers.pop()) {
  console.log(matchers, data);
}
const x = [[1,2],3];
match(x);
console.log(x); // [[1,2]], uh oh, x has changed.
As mentioned in the post, the same pattern as `const [[...xs], y = xs.pop()] = [someArray];` can be used:
function match([...matchers], data = matchers.pop()) {
console.log(matchers, data);
}
const x = [[1,2],3];
match(x);
console.log(x); // [[1,2]] no mutation
Each of the examples is already achievable using the JavaScript shipped with the browser. I could not actually determine what the issue is with the existing code. It is already possible to compose code where the center, or any, argument identifier can be spread to an array. If the input is individual arguments to a function, those arguments, too, can be converted to an array within the arguments scope, as it is possible to run functions in the argument scope which change the current and subsequent parameters. If the concept is dynamic and arbitrary arguments, you can use object destructuring, default parameters, default values, `Object.assign()`, or, if necessary, a generator function.
On Mon, Jun 10, 2019 at 11:46 PM Ethan Resnick <ethan.resnick at gmail.com>
wrote:
The proposed pattern is already possible, as demonstrated at the code at esdiscuss.org/topic/proposal-rest-operator-in-middle-of-array#content-2 .
Sorry, I missed your response earlier!
Regarding
const [[...xs], y = xs.pop()] = [someArray];
That works and is quite clever, but imo it really masks the author's intention (in addition to being more verbose and less efficient) when compared to:
const [...xs, y] = someArray;
The proposed solutions for argument lists strike me as less-good.
In the case of `pad`, using one object-valued argument would make partial application impossible, which is essential in most data-last use cases.
In the rewritten version of `match`, I'm not sure how the `pop()` trick is helping. If it's valid to pass in the matchers as a single, array-valued argument, then the signature can simply be `function match(matchers, data) { /* ... */ }`. The tricky bit about my original version was that it took the matchers passed in as separate arguments, rather than as a single array of matchers, for reasons that were more convenient in my domain.
I don't see how any version of the `pop()` trick can help when there are truly multiple optional arguments (a la the `opts` array in `pad`). Also, the `pop()` trick could be quite problematic if it ends up mutating the user's input, as it would in the given example:
function match(matchers, data = matchers.pop()) {
  console.log(matchers, data);
}
const x = [[1,2],3];
match(x);
console.log(x); // [[1,2]], uh oh, x has changed.
On Tue, Jun 11, 2019 at 1:13 AM guest271314 <guest271314 at gmail.com> wrote:
The proposed pattern is already possible, as demonstrated at the code at esdiscuss.org/topic/proposal-rest-operator-in-middle-of-array#content-2 .
How does the code at that post specifically not provide the same functionality as proposed?
On Mon, Jun 10, 2019 at 9:20 PM Ethan Resnick <ethan.resnick at gmail.com> wrote:
This has come up several times and, while it seems pretty intuitive to me, not everyone seems to agree. You can check the archives for previous discussions.
@Andy Perhaps you can provide some links? I found two esdiscuss.org/topic/an-array-destructing-specification-choice#content-69 threads esdiscuss.org/topic/early-spread-operator — both 8 years old — that talked about this, along with one more recent one esdiscuss.org/topic/strawman-complete-array-and-object-destructuring that didn't get very far. In the first two threads, commenters brought up one case where the semantics are unclear (i.e., when there are more listed binding elements than there are elements in the iterable), and there was some talk about implementation complexity. But there was also some interest from some big contributors to the spec. So I wonder if it's time to revisit this?
if you want to extend the function with additional args, then you'll have to retroactively modify all existing calls to avoid off-by-one argcount:
@Kai I'm not sure I follow. If the new argument is required, you have to modify all existing calls whether the new argument goes at the end or as the second to last argument. If the new argument is optional, then adding it as the second to last argument doesn't break existing calls at all, assuming the function accounts for the fact that the optional arguments are all the ones after the initial required ones, and up until (but excluding) the last one. The syntax I'm proposing makes adding such extra arguments easy. In other words:
function pad(targetLength, ...opts, data) {
  const [paddingChar = " "] = opts;
  // pad data with paddingChar to targetLength
}
would, with the addition of an optional "meta" arg, become:
function pad(targetLength, ...opts, data) {
  const [paddingChar = " ", meta = { /* some default */ }] = opts;
  // pad data with paddingChar to targetLength
}
More importantly, though, this data-last calling pattern has a long history, and there are some cases where it's the best solution. My use case was similar to the common one, namely, that I was building a data pipeline, with the "settings" arguments partially applied from the left to create the function to use in each step. (And, in my case, I was building each function in a DSL where it would've been very inconvenient to add arbitrary-order partial application, even if JS were to add something like that tc39/proposal-partial-application.)
Given that functions where the last argument is significant probably aren't going away, it'd be nice imo if the JS syntax supported this better.
On Fri, Jun 7, 2019 at 7:11 PM kai zhu <kaizhu256 at gmail.com> wrote:
it matters when you have to debug/inherit other people's code (and clean up their mess). i wouldn't enjoy debugging unfamiliar code that used this feature (admittedly, it's my subjective opinion).
the maintainability argument stands -- it's counter-intuitive in javascript that appending extra args to a function re-arranges arg positioning and invalidates existing calls.
debuggability is subjective, i agree.
p.s. - in general, i don't see what real pain point the rest operator actually addresses that couldn't be solved with `arguments`. variable-arg-length functions are not javascripty -- they frequently require extra ux-workflow transformations like Function.prototype.apply or Array.prototype.flatMap applied to the arguments being passed.
On Fri, Jun 7, 2019 at 10:07 AM Isiah Meadows <isiahmeadows at gmail.com> wrote:
For your maintainability argument: adding extra arguments to those functions is something I almost never do. And you'd have the same exact issue with final rest parameters, just in a different position (in the middle as opposed to in the end).
For debuggability, I don't see how it'd be a major issue unless you already have an excessive number of positional parameters. In my experience, the debuggability issues arise approximately when there's just too many positional parameters, and factoring out the rest parameter to an array doesn't really help this situation much. (That's when object destructuring comes in handy.)
So not convinced either is any different than what it's like today.
Also, you aren't obligated to use a feature just because it exists - I hardly ever use proxies, for instance, and I rarely need maps beyond what objects give me, so I don't normally use them unless I need to have reference types or mixed types as keys.
Isiah Meadows contact at isiahmeadows.com, www.isiahmeadows.com
On Thu, Jun 6, 2019 at 2:22 PM kai zhu <kaizhu256 at gmail.com> wrote:
-1 for maintainability and debuggability
1. maintainability: if you want to extend the function with additional args, then you'll have to retroactively modify all existing calls to avoid an off-by-one argcount:
// if extending function with additional args
function pad(targetLength, ...opts, data)
-> function pad(targetLength, ...opts, data, meta)
// then must retroactively append null/undefined to all existing calls
pad(1, opt1, opt2, "data")
-> pad(1, opt1, opt2, "data", null)
2. debuggability: when debugging, it takes longer for a human to figure out which arg is what:
// function pad(targetLength, ...opts, data)
pad(aa, bb, cc, dd);
pad(aa, bb, cc, dd, ee);
// vs
// function pad(targetLength, opts, data)
pad(aa, [bb, cc], dd);
pad(aa, [bb, cc, dd], ee);
On Thu, Jun 6, 2019 at 11:54 AM Ethan Resnick < ethan.resnick at gmail.com> wrote:
Long-time mostly-lurker on here. I deeply appreciate all the hard work that folks here put into JS.
I've run into a couple cases now where it'd be convenient to use a rest operator at the beginning or middle of an array destructuring, as in:
const [...xs, y] = someArray;
Or, similarly, in function signatures:
function(...xs, y) { }
The semantics would be simple: exhaust the iterable to create the array of `xs`, like a standard rest operator would do, but then slice off the last item and put it in `y`.
For example, I was working with some variable-argument functions that, in FP style, always take their data last. So I had a function like this:
function match(...matchersAndData) {
  const matchers = matchersAndData.slice(0, -1);
  const data = matchersAndData[matchersAndData.length - 1];
  // do matching against data
}
Under this proposal, the above could be rewritten:
function match(...matchers, data) { /* ... */ }
Another example: a function `pad`, which takes a target length and a string to pad, with an optional padding character argument in between:
function pad(targetLength, ...paddingCharAndOrData) {
  const [paddingChar = " "] = paddingCharAndOrData.slice(0, -1);
  const data = paddingCharAndOrData[paddingCharAndOrData.length - 1];
  // pad data with paddingChar to targetLength
}
With this proposal, that could be rewritten:
function pad(targetLength, ...opts, data) {
  const [paddingChar = " "] = opts;
  // pad data with paddingChar to targetLength
}
I'm curious if this has been considered before, and what people
think of the idea.
Obviously, if `...a` appeared at the beginning or middle of a list, there would have to be a fixed number of items following it, so a subsequent rest operator in the same list would not be allowed.
Thanks
I think there could be problems with this slice syntax, because if we want to use a variable as an argument for the operator, in some cases it could be perceived as a label or object property by the tokenizer. In addition, there could be a problem with tokenizing slice syntax with a step, because we need to look ahead at the next symbol to decide between a raw slice and a slice with a step 😶
The current answer is in the current spec for HostResolveImportedModule www.ecma-international.org/ecma-262/#sec-hostresolveimportedmodule :
Each time this operation is called with a specific
referencingScriptOrModule, specifier pair as arguments
it must return the same Module Record instance if it completes normally.
In other words: The runtime generally cannot safely collect modules because all modules are "always" referenced by the thing that imported it. It gets more strict in HTML where all modules are globally referenced in the "module map". So even when it looks like a module isn't referenced by previous kinds of references (object properties etc.), there really isn't such a thing as an "unreferenced" module as long as the realm/context is alive.
In short: There's no "module cache" in ES modules, it's an append-only module map.
This also means that if you expect something to require GC, you maybe shouldn't be loading it as a module. E.g. JSON loaded via some future import mechanism would never be GC'd. In the fictional example of the huge MMORPG with many planets, you'd likely want to load the things you care about GC'ing via fetch() or some other mechanism that allows managing lifetime yourself.[1]
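A small illustration of the consequence (the module and JSON paths are hypothetical):

async function load() {
  const a = await import("./planet.js");
  const b = await import("./planet.js");
  console.log(a === b); // true -- same module record, cached for the realm's lifetime

  // For data whose lifetime you want to manage yourself, fetch it instead;
  // the parsed result can be dropped and GC'd once unreferenced.
  const data = await fetch("./planet.json").then((r) => r.json());
  return data;
}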
I don't understand. What is the danger? Who agreed to what?