Supporting feature tests directly
I think you're referring to the eval function?
I don’t think there are even documented best practices for ES6, yet (w.r.t. switching between native ES6 and transpiled ES6). That’d be interesting, too.
In npmjs.com/make-generator-function, www.npmjs.com/package/make-arrow-function, and the tests for es-shims/RegExp.prototype.flags/blob/master/test/index.js#L6-L12, I use Function-based eval to test for support of these things - one could do the same with let, const, etc.
I'd be very interested to hear about any non-eval solution for this.
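The Function-based technique described above can be sketched as a tiny helper (the helper name is mine, not from the linked packages). Note this is exactly the eval-ish hack the rest of the thread wants to replace, and it is blocked under a strict CSP:

```javascript
// Returns true if the given source parses (and compiles) in this engine.
// Function(..) parses and compiles the source but we never call the result.
function supportsSyntax(src) {
  try {
    Function(src);
    return true;
  } catch (e) {
    return false;
  }
}

console.log(supportsSyntax("(() => {})"));       // arrow functions
console.log(supportsSyntax("function* g() {}")); // generators
console.log(supportsSyntax("let x"));            // let declarations
```

All three log true in any ES6 engine; an engine lacking one of these features would throw a SyntaxError inside Function(..) and the helper would return false.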
I think you're referring to the eval function?
Actually, I'm referring to proposing something new that would substitute for having to hack feature tests with eval.
These are the initial details of my idea, a Reflect.supports(..) method: gist.github.com/getify/1aac6cacec9cb6861706
Summary: Reflect.supports( "(()=>{})" ) or Reflect.supports( "let x" ) could test just for the ability to parse, as opposed to the compilation/execution that eval(..) does. It'd be much closer to new Function(..), except without the overhead of needing to actually produce the function object (and then have it be thrown away for GC).
This is inspired by developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Parser_API, where FF has a Reflect.parse(..) method that is somewhat like what I'm suggesting, except that for feature tests we don't need the parse tree, just a true/false of whether it succeeded.
An alternate form would be Reflect.supports( Symbol.arrowFunction ), where the engine is specifically saying "yes, I support that feature" by recognizing it via its unique built-in symbol name.
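Since Reflect.supports(..) is only a proposal and exists in no engine, usage can only be illustrated with a stand-in. This sketch shims the string form on top of Function(..), which is purely an approximation: the real feature would invoke only the parser, with no compilation and no function object produced:

```javascript
// Hypothetical shim for illustration only; the symbol-based form
// (e.g. Reflect.supports(Symbol.arrowFunction)) cannot be shimmed at all.
if (typeof Reflect.supports !== "function") {
  Reflect.supports = function supports(test) {
    if (typeof test !== "string") {
      throw new TypeError("only the string form can be approximated");
    }
    try {
      Function(test); // parse + compile, but never execute
      return true;
    } catch (e) {
      return false;
    }
  };
}

console.log(Reflect.supports("(() => {})")); // true in any ES6 engine
console.log(Reflect.supports("let x"));      // true in any ES6 engine
```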
+1 to Kyle's proposal; using eval or Function is not even an option in CSP-constrained environments (unless the relevant code is provided as a SHA-256 hash, and then we need to agree on how such code should look and share it as a polyfill).
I'd also suggest Reflect.isValidSyntax as an alternative to Reflect.supports, because it's less misleading when it comes to figuring out API support and implementation.
After all, that's exactly what we'd like to know: whether a generic piece of syntax will break or not.
...using eval or Function is not even an option in CSP constrained environments
...that's exactly what we'd like to know, if a generic syntax will break or not.
Furthermore, there are things which are valid syntax but cannot be directly eval'd or Function'd, such as import and export.
So why not just add a sandbox, and let untrusted code run in the sandbox, providing the external environment a means to catch errors (or interact somehow)?
The combination of the loader and realm APIs should give us that and, of course, more.
So why not just add a sandbox, and ... means to catch error
Other than the import / export thing I mentioned, for the exact reason why eval(..) and new Function(..) are not preferred (which roughly do the same thing)… A feature test that requires the entire parse/compile/execute cycle is, even a little bit, slower than a feature test that requires only the parser to answer the question at hand.
Since these tests are likely to happen in the critical path (page load), their speed should be as optimal as possible.
I don't want or need a general means to try out a whole program to see if it compiles or not. Don't let the eval-looking form of the proposal mislead as to intention. The intention is only to, feature by feature, determine feature support where simple tests for identifiers are insufficient.
For example, this is not intended to be possible:
let x;
Reflect.supports( "let x;" ); // false -- dupe declaration!
That kind of test would require running in the context of the current lexical env, and would imply an entirely different level of integration with the program than intended. I don't need static errors like preventing duplicate declarations or anything of that nature. Even the things strict mode would enforce are outside the "scope" of what's being proposed.
I only want to know if, in general, let x; could be parsed by the current engine. That's why Reflect.supports( Symbol.letDecl ) would be an entirely sufficient option.
The only concern I'd have with a symbol approach is that there are likely to be engine variances in the future - in the case of "let", knowing that the syntax is supported doesn't mean that ES6's semantics apply, it just means it won't throw a SyntaxError.
If that's the sole goal - detecting SyntaxErrors efficiently without using eval - then I think this is great - but what I'd really love to see is a path towards a built-in comprehensive way to determine semantic capabilities at runtime (as opposed to all the feature detection that devs/polyfills/shims/etc have to do now).
likely to be engine variances in the future
I hope you just mean like changes that ES7 might make to an ES6 feature. And I hope those aren't syntactic as much as semantic. :)
If there were a change to syntax, I would assert that it should be considered a "new feature" with its own new test, even if it was just a variation on an existing one. Like Symbol.arrowLiteral and Symbol.conciseArrow, where the second test might specifically check places where the grammar for arrows was relaxed to allow omission of ( ) or whatever.
knowing that the syntax is supported doesn't mean that ES6's semantics apply
That's true. But I think semantics are more of a run-time concern, and thus should be checked with actually executed code (Function(..), etc).
Off the top of my head, things which are statically verifiable, like duplicate param names, could be checked (if that's the kind of thing a parser even checks), but things like if we relax and allow implicit symbol coercion are much more clearly run-time errors.
If that's the sole goal - detecting SyntaxErrors efficiently without using eval
Yep, that's it.
Consider it a first-pass quick feature test for syntax… if more extensive, deeper run-time semantics checks are necessary, that would be more the realm of Function(..) or other similar (future) features. At least in those deeper-check cases, you wouldn't have to worry about catching SyntaxErrors, since you'd know in advance before trying the more performance-expensive tests.
On Sun, Mar 22, 2015 at 4:47 PM Getify Solutions <getify at gmail.com> wrote:
> [...]
The SpiderMonkey/Firefox Reflect.parse is non-standard, but may be a useful place to start.
First, "import" the "reflect.jsm" component module:
Components.utils.import("resource://gre/modules/reflect.jsm");
Then try this:
function isSyntaxSupported(syntax) {
  try {
    Reflect.parse(syntax);
    return true;
  } catch (_) {
    return false;
  }
}

[
  "import foo from 'bar';",  // valid
  "export var a = 1;",       // valid
  "export default class {}", // valid
  "export class List {}",    // valid
  "async function foo() {}", // invalid
  "let (x = 1) { x; }",      // invalid
  "module Name {}",          // invalid
].forEach(function(syntax) {
  console.log("%s is %ssupported", syntax, isSyntaxSupported(syntax) ? "" : "un");
});
Firefox 38.0a2 (2015-03-23):
"import foo from 'bar'; is supported"
"export var a = 1; is supported"
"export default class {} is unsupported"
"export class List {} is unsupported"
"async function foo() {} is unsupported"
"let (x = 1) { x; } is supported"
"module Name {} is unsupported"
What about checking tail call optimization support?
Imho, we need a way to verify that the engine supports it.
eval, Function and Reflect.parse won't work here.
So Reflect.supports looks more meaningful in this case.
I should stress that while my original proposal (linked earlier in thread) mentions some of the "hard" ES6 cases (like TCO), my focus is not on creating feature tests for ES6. ES6 has sailed. Any feature we could possibly conceive here is quite unlikely to land in a browser before that browser gets all (or at least most) of the ES6 stuff that one might be wanting to test for.
My goal is for us to stop adding features to JS that aren't practically feature-testable. I would strenuously desire to have something like Reflect.supports(..) (of whatever bikeshedded form) in ES2016 along with any newly conceived features. That goes a thousand times more if we invent new syntax (we likely are) or new untestable semantics (like TCO).
Of course, if we had Reflect.supports(..) now, it'd be amazingly helpful in detecting TCO, which I would dearly love. But that's not the goal. I don't think we need to muddy the waters about what the ES6 feature tests would be. At least not for now.
Taking steps to make sure new features can be feature tested is A Good Thing® but relying on something being set that says "I support X" probably isn't the best path to take.
A lot of feature detection relies on shallow tests:
i.e. if (!Array.prototype.includes) { ...
However, others need to test that features are properly supported by the engine. This is because shallow testing does not cover engine quirks.
i.e. jquery/jquery/blob/7602dc708dc6d9d0ae9982aadb9fa4615a9c49fa/external/sizzle/dist/sizzle.js#L165-L191
So while I agree that feature support should be detectable as much as possible, relying on something like Reflect.supports(...) isn't any more useful than shallow feature detection (the engine might be lying to you).
TCO is one of the places where it is difficult to test for. However, it's pretty rare that you would want to.
i.e.
var maybeRecursive;
if (Reflect.supports('TCO')) {
  maybeRecursive = function(n) {
    return n < 1000000 ? maybeRecursive(n + 1) : n;
  };
} else {
  maybeRecursive = function(n) {
    while (n < 1000000) {
      n++;
    }
    return n;
  };
}
maybeRecursive(0);
In this case you would just write the second. This is also true for most syntax features: you wouldn't use feature detection, you would simply transpile your code down to the lowest level of support you need it to have.
Again, definitely a good idea to ensure feature support is detectable. Luckily this is fairly well covered by the tc39 process since a polyfill is required as early as stage 1.
- James Kyle
On Tuesday, Mar 24, 2015 at 3:44 PM, Kyle Simpson <getify at gmail.com>, wrote:
> [...]
A lot of feature detection relies on shallow tests:
However, others need to test that features are properly supported by the engine. This is because shallow testing does not cover engine quirks.
Of course, shallow tests are often totally sufficient, and I'm trying to have the most efficient method for doing that for places where there is no API identifier to check for.
That doesn't mean that you wouldn't also conduct some targeted deeper semantics-conformance tests in places you needed to. It just means that, as a first pass, a lot of FT's that otherwise require Function(..) or eval(..) can have a shorter, more optimal path supported by the engine.
It's not intended to be an exclusive replacement for any test you could ever conceive.
relying on something like Reflect.supports(...) isn't any more useful than shallow feature detection
Of course not. Nothing in my proposal is supposed to indicate as such.
(the engine might be lying to you).
Good grief, why would we add a feature to ES2016+ that is intended to lie to developers or mislead them? :)
But in all seriousness, why would an engine do something like that? The bad cases in the past where this kind of thing happened are all hold-over vestiges of a bad web (a locked-in IE ecosystem, a still-too-painfully-slow-to-update-and-siloed-mobile ecosystem, etc).
Just because browsers have committed those sins in the past doesn't mean we have to assume they'll keep doing them.
TCO is one of the places where it is difficult to test for. However, it's pretty rare that you would want to.
Totally disagree here. Anyone that's following the (Crockford) advice of not using loops anymore and writing all recursion absolutely cares if such code can be directly loaded into a browser or not.
In this case you would just write the second. This is also true for most syntax features: you wouldn't use feature detection, you would simply transpile your code down to the lowest level of support you need it to have.
Again, totally disagree. At least, that's not even remotely my intention. That's locking us in to always running transpiled code forever, which basically makes the engines' implementations of features completely pointless. That sounds like a horrible future to me.
My intention is to feature test for the features/syntax that I need in my natively written code, and if tests pass, load my native code so it uses the native features. If any tests fail, I fall back to loading the transpiled code. IMO, this is the only remotely sensible go-forward plan to deal with the new transpiler-reality we're in.
I'm even building a whole feature-detects-as-a-service thing to support exactly that kind of pattern. Will anyone else follow? I have no idea. But I sure hope so. I for one hope that we're using the actual ES6+ code browser makers are implementing rather than transpiling around it forever.
That sounds like a horrible future to me.
IMO, this is the only remotely sensible go-forward plan to deal with the new transpiler-reality we're in.
I for one hope that we're using the actual ES6+ code browser makers are implementing rather than transpiling around it forever.
Ugh. Apologies for the hyperbole. Got carried away. But that is how strongly I feel about it.
I think you missed my point.
Just as people make mistakes, sometimes JavaScript engines make mistakes in their implementations (see: "the engine might be lying to you"), and there are plenty of places where we need to catch these mistakes (see my jQuery example from before). This is why something like Reflect.supports('TCO') is just as shallow as testing for existence.
Re: Getting stuck always transpiling syntax.
This does not lock us into transpiling syntax features forever, it just means you need to have the knowledge of your targeted platform at build time rather than run time.
For example:
- If I need to target a browser that does not support TCO, I'm going to transpile it for every browser I support.
- If I don't need to target a browser that does not support TCO, I'm not going to transpile it at all.

- James Kyle
Kyle Simpson wrote:
Totally disagree here. Anyone that's following the (Crockford) advice of not using loops anymore and writing all recursion absolutely cares if such code can be directly loaded into a browser or not.
LOL.
I don't think Crock was totally serious there... Let's get PTC support in all engines and then find out.
On Sun, Mar 22, 2015 at 3:59 AM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
> [...]
CSS has an exactly analogous feature already, and calls it CSS.supports(). That's a decent reason to stick with supports() as the name.
For consistency's sake I agree, but I come from a world where browsers also exposed unofficial APIs, so that, as James mentioned already, Array.prototype.includes would have returned true and never worked.
I wonder how reliable CSS.supports is, not just in terms of syntax but actual usability.
Best
On Wed, Mar 25, 2015 at 12:06 PM, Andrea Giammarchi <andrea.giammarchi at gmail.com> wrote:
> [...]
You give it a full property declaration, and it returns true if the browser can parse it successfully, false otherwise. This allows for false positives (a browser parsing a property but not actually supporting it yet), but devs (rightfully) yell and scream at browsers whenever that case (parse but no support) happens, so we do it very rarely, and only ever by accident.
It's just a standardized version of the de facto standard CSS feature test of "set the property on an element, and try to read it back; if you get something, it's supported".
Because it's based on an objective and reliable criterion tied directly to actual support (successful parsing), it's reliable in practice as a feature test. This differentiates it from the old hasFeature() function, which was based on a map of feature strings to bools stored in the browser, with no connection to the actual features in question, and so was inevitably filled with lies and bitrot.
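The de facto "set the property, read it back" test described above can be sketched as a small helper. It is deliberately duck-typed over any style-like object so the pattern is visible outside a browser; in a real page the first argument would be element.style, and the fake style object below (which only understands "display") is purely a stand-in for illustration:

```javascript
// De facto CSS feature test: assign the value, then read it back.
// A supporting engine retains the value; a non-supporting one discards it.
function supportsCssProperty(style, prop, value) {
  style[prop] = "";    // clear any previous value
  style[prop] = value; // attempt to set it
  return style[prop] !== "";
}

// Simulated style object that accepts only the "display" property:
var fakeStyle = new Proxy({}, {
  set(target, prop, value) {
    if (prop === "display") target[prop] = value;
    return true; // silently ignore unknown properties, like CSSOM does
  },
  get(target, prop) {
    return prop in target ? target[prop] : "";
  }
});

console.log(supportsCssProperty(fakeStyle, "display", "flex"));   // true
console.log(supportsCssProperty(fakeStyle, "madeUpProp", "foo")); // false
```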
The reason @supports works in CSS is because of the limited language feature-set CSS has, but this wouldn't work in JavaScript.
To reuse the TCO example:
var supportsTCO = Reflect.supports('function recursive() { recursive() }');
Of course that will parse, and executing it wouldn't be a good idea.
Transpilers can solve this problem. I've been working on Babel to get caniuse-like browser support data to be smarter about handling what to transpile.
- "I want to support the last two versions of every browser, transpile/polyfill that for me."
- "I want to support these features, what browser support will I have?"
On Wed, Mar 25, 2015 at 1:17 PM, James Kyle <me at thejameskyle.com> wrote:
The reason @supports works in CSS is because of the limited language feature-set CSS has, but this wouldn't work in JavaScript.
No, it works because most of the time, whether or not something parses is a sufficient proxy for "this is supported". This isn't always adequate - for example, you can't test whether a browser supports APNG in CSS properties with this - but that's okay.
Similarly, a JS version would let you test for anything where parsing is a proxy for support, like "function*(){...}" or "()=>{}". It wouldn't help you with things where parsing is successful whether the feature is supported or not.
There's no good way to support those cases that parsing doesn't address besides direct feature tests, or support tables. Luckily, both of these already exist in the ecosystem, so it's okay that we're not solving them.
What this sub-discussion of CSS supports(..) is reinforcing is what I said earlier: a capability to do feature tests in a direct, efficient, and non-hacky manner is valuable to some/many uses and use-cases, even with the recognition that it doesn't have to perfectly support all conceivable uses/use-cases/tests.
We should avoid a mindset that anything short of perfect isn't worth doing at all. Thankfully JS doesn't have such a design principle.
A Reflect.supports( Symbol.TCO ) test isn't perfect. It could accidentally or intentionally lie. But it could be better for some audiences than having no information. I personally would prefer to use it, even with its "risks", than trying a long recursive loop in a try..catch to infer whether TCO was in effect.
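The try..catch hack being referred to would look roughly like this (illustrative only; a "passing" result could in principle also just mean the engine's stack happened to be deep enough for the chosen depth):

```javascript
"use strict"; // proper tail calls in ES6 are specified only for strict mode

// Probe for proper tail calls: run a deeply recursive function whose
// recursive call is in tail position. Without PTC, the engine keeps a
// stack frame per call and eventually throws a RangeError.
function probeTCO() {
  function loop(n) {
    if (n <= 0) return true;
    return loop(n - 1); // tail call
  }
  try {
    loop(1e6); // far deeper than any non-PTC engine's call stack
    return true;
  } catch (e) {
    return false; // stack overflow => engine did not apply PTC
  }
}

console.log(probeTCO());
```

This is exactly the kind of expensive, execute-a-million-calls test that a parse-level or symbol-level Reflect.supports check is meant to avoid.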
Nevertheless, it's the least important kind of test being advocated for here, even though it seems to be getting all the attention. If that kind of test is a bone of contention, it should be the easiest to drop/ignore.
Moreover, to reduce the risk of bitrot in feature lookup tables (which Symbol.TCO would suffer), the Reflect.supports( "(() => {})" ) test seems preferable to a Reflect.supports( Symbol.arrowFunction ) type of test.
It's not that it's imperfect. It's that it's useless in the real world.
We can already do shallow testing of APIs. Reflect.support doesn't help there, and in some ways (that I've outlined before) it is a regression.
if (!Array.prototype.includes) { ... }
if (!Reflect.supports("Array.prototype.includes")) { ... }
You also wouldn't do testing of syntax support at runtime, as you would effectively be duplicating the code.
var myFunc;
if (Reflect.supports("TCO")) {
myFunc = recursiveImplementation;
} else {
myFunc = nonRecursiveImplementation;
// Why duplicate? You'd be saving yourself a lot of hassle by just transpiling to this
}
What's the alternative? To send down a file that tests for support and then sends it back to the server and then build the appropriate assets for that browser?
No, that'd be absurdly slow, and now you've already delegated to using a build tool.
- James Kyle
It's not that it's imperfect. It's that it's useless in the real world.
It's clear it's useless to you. It's not clear that it's useless to everyone. In fact, I for one definitely find it useful. No sense in continuing to argue over subjective opinion.
We can already do shallow testing of APIs. Reflect.support doesn't help there, and in some ways (that I've outlined before) it is a regression.
if (!Array.prototype.includes) { ... }
if (!Reflect.supports("Array.prototype.includes")) { ... }
As I've repeatedly said, this proposed feature is not for those sorts of tests. It's for all the syntax tests that require try..catch + Function / eval. Please (re)read the rest of the thread.
You also wouldn't do testing of syntax support at runtime
I already do. I fully intend to keep doing so.
as you would effectively be duplicating the code.
Nope, not duplicating code. Maintaining code in original ES6+ authored form as well as transpiled form. They're both files that can be loaded by a browser. So my intent is to decide at runtime which one is appropriate, and only load one or the other.
...send down a file that tests for support and then sends it back to the server
Yep, absolutely. Bootstrapping.
and then build the appropriate assets for that browser?
Of course not. It picks one of two already existing files.
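The bootstrapping pattern described here can be sketched roughly as follows. The filenames and the list of syntax tests are hypothetical, and Function(..) is again standing in for the proposed parse-only API; the key point is that both files already exist on the server and the client merely picks one:

```javascript
// Run each syntax feature test; all must pass to use the native build.
function passesAll(tests) {
  return tests.every(function(src) {
    try {
      Function(src);
      return true;
    } catch (e) {
      return false;
    }
  });
}

// Pick one of two prebuilt files; nothing is compiled on the fly.
function chooseBundle(tests) {
  return passesAll(tests) ? "app.es6.js" : "app.transpiled.js";
}

var bundle = chooseBundle(["(() => {})", "let x", "function* g() {}"]);

// In a browser, the bootstrap would then inject a script tag:
if (typeof document !== "undefined") {
  var script = document.createElement("script");
  script.src = bundle;
  document.head.appendChild(script);
}
```

As browsers catch up, the same unchanged bootstrap naturally starts serving the native file everywhere, which matches the "deploy and never touch it again" goal stated later in the thread.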
It's not that it's imperfect. It's that it's useless in the real world.
...
What's the alternative? To send down a file that tests for support and then sends it back to the server and then build the appropriate assets for that browser?
It's possible in the AMD approach, though I don't know if it's useful.
The need for TCO detection is really debatable, but one might use it, say, two years from now to throw a proper exception while running some crazy recursive code in an old browser.
NB: it would be practical only if the feature-detection capability lands in a browser with (or even before) the corresponding feature. And, imho, that's what Kyle Simpson meant in starting the thread: handy feature detection available prior to feature implementation.
Two more off-topic cents: if at any point we will
- have adjustable features, like disabling at some scope eval, with, Function(), or use of the global object
- or a new stricter mode
- or choosing a new Number representation
- ... or whatever
it would be nice to be able to detect such state, without try..catch, in a fast & semantic way.
Re: shallow testing
Yes, you've said that, but this is exactly what @supports is in CSS. There was no way to do shallow testing, so they added a way to do it.
Re: Bootstrapping
This is exactly my point, you're already using multiple builds and letting a transpiler handle it for you. Why would you opt for a worse solution than letting transpilers handle even more than you?
Here's my ideal situation:
For users who want targeted builds:
- The transpiler handles building multiple files for various targeted environments.
- Using a known set of feature support (similar to caniuse).
- Server uses header information to send down the appropriate built file
- Using the same known set of feature support.
For users who want a single build:
- The transpiler builds a single file which supports every targeted environment.
- Server sends the same file for everyone.
Neither of these are perfect solutions, but they are a lot better than needing to make multiple requests just to determine what version of the site to serve.
We're getting way afield with this whole transpilers thing. I'll indulge it for just this response, then I'll return my focus on this thread to the issue at hand: feature tests IN javascript.
...CSS. There was no way to do shallow testing so they added a way to do it.
As I have repeatedly said, the intent is not to be able to do new sorts of tests that are not currently possible.
I know perfectly well that I can do if (Array.prototype.includes) .. tests, and I can also do try { eval("(()=>{})") } catch.. tests. That's not news to anyone here.
The intent is to take only the latter of those two and do it in a more efficient and less hacky way.
Here's my ideal situation:
Your "ideal situation" means that if I want split builds (I do!), I have to maintain my transpiler's definitions, keep up to date on usage stats for my site to decide when I start or stop caring about a certain browser, and change my configurations accordingly. I know a lot of people think that way. I most definitely do not. Pretty "not ideal" to me.
I prefer an option where I can write and deploy code, and never touch it, or even the server/tools, again (if I don't need to), and it will just continue to work "forever". For awhile, tests will end up serving both files, but eventually, as browsers evolve, the tests will all result in only the *.es6.js files being served. To me, that's "ideal".
In short, I don't actually want to think at all about what browsers do what things. To whatever extent possible, I want feature tests to handle that entirely. I think browser versions are meaningless arbitrary marketing labels. caniuse data is, at best, amusement to me. I never make real decisions based on it.
The problem with that is you're still delegating to build tools and servers to solve the entire problem. As you said previously you would not opt for the pure-javascript solution:
if (Reflect.supports('TCO')) { /* recursive */ } else { /* not recursive */ }
So while I get the sentiment of "feature tests IN javascript", the problem (as you've said) is that your solution would not exist solely in your javascript.
In order for Reflect.supports to be practical it needs a build tool/server, but as soon as you introduce those better options are available.
I'm sorry that you'll have to think about it.
As I see it, what might be more desirable than a straight shallow feature test, or a feature-support-reporting feature, would be an official versioned test library (possibly including tests of pure internals) plus a new standard API for asking the engine for the results it gets from running a certain test or set of tests. The engine could then either have its results collected at build time, with cached results for that particular build baked into the API, or allow the user to require that the test be fetched and executed live, with the issues that come with that. One possible result, besides the obvious success and fail, is of course that a certain test didn't exist at the time of the build and thus wasn't run.
Of course, the earliest something like that could be in the language would be ECMAScript 7...
This exists: test262.ecmascript.org
I don't believe test262 can yet be run in a browser (only directly against a browser engine), nor run in ES3 browsers (so that shimmed engines can be tested) so that doesn't yet solve my use cases, although I can't speak for Kyle.
On Mar 22, 2015, at 2:00 AM, Kyle Simpson <getify at gmail.com> wrote:
> [...]
I’m pretty skeptical about including this sort of feature-testing facility as part of standard ECMAScript. Here are some of the reasons:
ECMAScript doesn’t include the concept of language subsets/supersets (other than the Annex B features). A conforming implementation of an Ecma TC39 standard is expected to implement all of the features of the current standard. Given that perspective, it isn’t clear why we would want to provide a feature that was specifically designed to enable non-conforming implementations. test262 is TC39’s support for testing the standards conformance of an implementation.
This sort of feature testing is inherently a short-term need. Within a few years, all implementations will support all major features, so work that goes into incorporating specific feature detection into the ES standard (for example, defining something like Symbol.arrowFunction) would be throw-away work that within a few years would just be legacy baggage that could never be removed from the language. For example, I’m sure nobody today has a need to test Reflect.supports(Symbol.functionExpression) or Reflect.supports(Symbol.tryCatch).
On the other hand, a feature such as Reflect.parse which has other uses but which also has a potential applicability for feature detection seems reasonable.
doesn't yet solve my use cases, although I can't speak for Kyle.
It would not support my use-case. At least, in the sense that it's an all-or-nothing which is counter to what I'm looking for. It's also going to be way more processing intensive than just doing an eval
/ Function
test, which defeats the entire point of the proposal.
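For context, the kind of eval / Function based test being discussed looks roughly like this (a minimal sketch; the helper name and the syntax strings are illustrative, not taken from any of the cited packages):

```javascript
// Wrap candidate syntax in a function body so it gets parsed (and compiled)
// but never executed. An unsupported feature throws a SyntaxError.
function supportsSyntax(code) {
  try {
    new Function(code); // parse + compile only; the function is never called
    return true;
  } catch (e) {
    return false;
  }
}

console.log(supportsSyntax("var f = () => {};")); // arrow functions
console.log(supportsSyntax("let x = 1;"));        // let declarations
console.log(supportsSyntax("var x = ;"));         // invalid in any engine
```

Note that this pays for full compilation and allocates a throwaway function object on every successful test, which is exactly the overhead the proposal aims to avoid.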
a feature that was specificatlly design to enable non-conforming implementations
That's not at all the intent of this feature. More below.
This sort of feature testing is inherently a short term need. Within a few years, all implementations will support all major features
Within a few years, all implementations will be ES6 compliant, sure. But they'll never all be entirely up to date on ES2016, ES2017, ES2018, … as those roll out.
This feature testing mechanism is intended to be a rolling window of FT's for the gap between when something is standardized (to the point that developers could rely on polyfills/transpiles for it) and when it's fully implemented in all browsers that your app is running on. This gap could be as short as 6-12 months and (considering mobile) as long as several years.
On an app-by-app, need-by-need basis, there will always be such a gap, and FT's let you know what you have available at that moment in that specific browser.
This is directly analogous to all other classes of FT's, such as Modernizr (focused more on HTML/CSS, with JS only as it relates to one of those).
For example, I’m sure nobody today has a need to test Reflect.supports(Symbol.functionExpression) or Reflect.supports(Symbol.tryCatch).
No, they don't. Exactly my point with the rolling window. And exactly why I stated that the intent of this feature is not about ES6 (or ES5) features, but rather about new stuff in ES2016+. It would be my hope that the feature testing API proposed could be one of the first things browsers could land post-ES6, which would mean devs could soon'ish start using those tests to track/cope with the gap between the ES2016 stamp of approval and when all those ES2016 features land. And of course the same for ES2017 and beyond.
And since what I'm asking for is stuff that, largely, can already be tested, just less efficiently, we could very quickly polyfill Reflect.supports
to let devs use it even earlier.
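A quick polyfill along those lines might look like this (a rough sketch of the proposed string form; the semantics come from the proposal gist, not any standard, and this fallback still pays the Function(..) compilation cost that a native Reflect.supports(..) would avoid):

```javascript
// Sketch of a Reflect.supports(..) polyfill for the string-syntax form.
// A native implementation could stop after parsing; this fallback cannot.
if (typeof Reflect !== "undefined" && !Reflect.supports) {
  Reflect.supports = function supports(src) {
    try {
      new Function(src); // throws SyntaxError if the syntax is unsupported
      return true;
    } catch (e) {
      return false;
    }
  };
}
```

Usage would then match the earlier examples: Reflect.supports("(()=>{})") or Reflect.supports("let x = 1;").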
would be throw-away work that within in few years would just be legacy baggage
My design intent with my proposal, supporting the string syntax form, is to not have a huge table of lookup values that are legacy baggage and thrown away, but a general feature that is flexible and continues to be useful going forward.
The few exception cases, if any, like for example a Symbol.TCO
test or whatever, would be very small, and their "burden" of legacy would be quite low once we're past the window of them being useful.
a feature such as Reflect.parse which has other uses
As I mentioned near the beginning of this thread, Reflect.parse(..)
would generally suit the proposed use-case, except it does a lot of extra work (creating and returning a tree -- a value I'd then just throw away, creating unnecessary GC work) that feature testing itself doesn't need. It's unclear that Reflect.parse(..)
would provide any additional performance gains over the current eval
/ Function
approach, and could even be potentially worse.
It's also unclear that Reflect.parse(..)
would ever have any reasonable answer for the "hard" tests we've briefly touched on, such as exposing semantics like TCO or any other sorts of things we invent which can't reasonably be tested by syntax checks or pragmatically tested via runtime code. At least Reflect.supports(..)
could have an answer for that.
On 3/26/15 at 8:51 AM, getify at gmail.com (Kyle Simpson) wrote:
As I mentioned near the beginning of this thread,
Reflect.parse(..)
would generally suit the proposed use-case, except it does a lot of extra work (creating and returning a tree -- a value I'd then just throw away, creating unnecessary GC work) that feature testing itself doesn't need. It's unclear that Reflect.parse(..)
would provide any additional performance gains over the current eval
/ Function
approach, and could even be potentially worse.
I don't see a real need for high performance in these tests. AFAICS, they occur once, probably at load time. A smart JS implementation might even parse the Reflect.parse() string at the same time it is parsing the main set of JS code. As such, the extra overhead for CPU and GC will probably be swamped by the communication CPU and transmission times.
Not using eval makes it more likely that you will be able to perform the tests in "safe" subsets of JS.
Cheers - Bill
Bill Frantz | "Privacy is dead, get over it." - Scott McNealy | Periwinkle, (408) 356-8506, 16345 Englewood Ave, Los Gatos, CA 95032 | www.pwpconsult.com
Without the direct feature test API I'm suggesting (or something like it), how will someone feature test the two new (proposed for ES7) export
forms, for example?
leebyron/ecmascript-more-export-from
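To illustrate the problem: export and import are only valid at the top level of a module, so a Function(..) based test reports them as unsupported even in engines that fully implement modules (the helper name here is illustrative):

```javascript
// Module-only syntax can't be probed by wrapping it in a function body:
// "export" is a SyntaxError inside a function regardless of engine support,
// so this style of test is structurally incapable of answering the question.
function supportsSyntax(code) {
  try {
    new Function(code);
    return true;
  } catch (e) {
    return false;
  }
}

// Reports false in every engine, whether or not modules are implemented:
console.log(supportsSyntax("export var x;"));
```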
I'm not strongly opposed to going the Reflect.parse(..)
route for feature-testing (certainly more preferable than eval
/ Function
), except I'm concerned that:
- it will offer no reasonable path in the future for answering the "hard" tests, like TCO would have been. Would Reflect.parse( Symbol.TCO ) be too janky of a hack for such things?
- engines won't be able to tell (static analysis?) that the parse tree isn't needed, wasting memory that GC then has to clean up.
The advantage of an API that returns nothing but true
/ false
is that the engine knows it doesn't need to keep the tree around or send it into JS-land. I don't know if there are any internal processing benefits, but there certainly seem to be memory benefits.
I don't see a real need for high performance in these tests
High performance? No.
But, if these feature tests slow down an app in the most critical of its critical paths (the initial load) to the point where people can't use the feature tests in the way I've proposed, then the "solution" is moot.
I could load up an entire parser written in JS and use it to parse syntax strings. That's a solution. But it's not a viable solution because it's way too slow for the purpose of feature tests during a split load.
So it should be noted that the proposal does imply that whatever solution we come up with, it has to be reasonable in performance (certainly much better than eval
/ Function
or a full JS parser loaded separately).
Has there been any consideration or discussion for direct support of feature tests for ES7+ features/syntax? I'm thinking specifically of things which are difficult or impossible to just simply test for, like via the existence of some identifier.
I have an idea of what that could look like, and am happy to discuss further here if appropriate. But I was just checking to see if there's any prior art related specifically to JS to consider before I do?