ES Native Mode proposal

# Michaël Rouges (12 years ago)

Given the number of scripts from various sources that may be contained in a web page, there may be prototyping conflicts.

To solve this, heavyweight techniques, such as iframes, are often used to execute code in peace.

I often think it might be much easier to tell the browser to provide pristine natives dedicated to a given context, and incidentally to the functions declared in that context and any nested ones, depending on the nearest enclosing scope that carries this directive.

What I suggest, therefore, is a mode complementary to 'use strict': 'use native'.

Suggested behavior:

```js
Object.prototype.serialize = function serialize() {
    // serialization code
};

(function () {
    // each directive must be a standalone statement to take effect
    'use strict';
    'use native';

    Object.prototype.hasOwnProperty('serialize'); // false: pristine natives here

    Object.prototype.serialize = function serialize() {
        // serialization code
    };

    (function () {
        // no directive: inherits the enclosing 'use native' context
        Object.prototype.hasOwnProperty('serialize'); // true
    }());

    (function () {
        'use native';

        // a fresh pristine context of its own
        Object.prototype.hasOwnProperty('serialize'); // false
    }());
}());
```

Thanks in advance for your advice.

# Andrea Giammarchi (12 years ago)

It's like bringing realms all over the place, in any part of any piece of code ... a road to hell for interoperability and/or security.

I think what you really want is to:

  1. avoid including any jurassic JS library that has no reason to be used given the current state of JS
  2. be sure that libraries that extend native constructors do so for good reason and will most likely never conflict with any API

Point one matters because there aren't many libs out there these days that pollute global native prototypes, but I have one, called eddy, that doesn't conflict with anything: it sets configurable and writable, but not enumerable, properties, in a way everybody expects them to behave (the de facto EventEmitter standard).

Not a single library, not even in node.js, has ever had a conflict or problem with that ^_^
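A minimal sketch of that pattern (the method name below is hypothetical, not necessarily eddy's actual API): the property is configurable and writable but not enumerable, so everyday code never notices it.

```js
// A non-enumerable extension of Object.prototype: configurable and
// writable, but invisible to for/in, so daily JS code is unaffected.
Object.defineProperty(Object.prototype, 'emit', {
  configurable: true,
  writable: true,
  enumerable: false,
  value: function emit(type) {
    // ... dispatch to registered listeners here (omitted) ...
    return this;
  }
});

var o = {};
for (var key in o) {
  // never reached: 'emit' is not enumerable
}
console.log(o.hasOwnProperty('emit')); // false: inherited, not own
console.log(typeof o.emit);            // "function"
```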

In summary: choose your libraries carefully. That will do a much better job than your proposal.

my 2 cents

# Michaël Rouges (12 years ago)

I personally don't use libraries; I create tools and advise developers every day ... and to be realistic, most don't even know what they include.

I remember visiting the site of a web agency where, in the middle of the page, a dialog box was displayed indicating that the webmaster had not paid for a license for a jQuery plugin.

So yes, I agree, it's the developer's responsibility, but it would be nice if there were a way to ensure that the tools we offer behave exactly as designed, whatever the practices of the developer who uses them.

No?

Michaël Rouges - Lcfvs - @Lcfvs


# Eric Kever (12 years ago)

I'm no real expert here, but this seems like a proposal for a workaround for messy code rather than something that's actually necessary within the language.

# Michaël Rouges (12 years ago)

Personally, I never touch the native prototypes, and for homemade methods that's OK...

But when it comes to native methods, it gets more complicated ... Indeed, how do you retrieve the original state of a native prototype? It isn't possible.

The idea is to have a stable solution to this deficiency.

Michaël Rouges - Lcfvs - @Lcfvs


# Rick Waldron (12 years ago)

On Wed, Sep 25, 2013 at 11:41 AM, Michaël Rouges <michael.rouges at gmail.com> wrote:

Hi all,

Given the number of scripts from various sources that may be contained in a web page, there may be prototyping conflicts.

To solve this, heavyweight techniques, such as iframes, are often used to execute code in peace.

I often think it might be much easier to tell the browser to provide pristine natives dedicated to a given context, and incidentally to the functions declared in that context and any nested ones, depending on the nearest enclosing scope that carries this directive.

What I suggest, therefore, is a mode complementary to 'use strict': 'use native'.

This list has gone around the new-pragma block several times, and in the end the following (or something similar) holds: adding a new pragma effectively creates a new "version" of the language, and that's not going to happen any time soon.

# Michaël Rouges (12 years ago)

Hmm ... in this proposal's case, I don't think we should talk about a real new version of the language ...

Rather, I see it as a JavaScript sandbox that, internally, shouldn't require major development work, I believe.

Michaël Rouges - Lcfvs - @Lcfvs


# Brandon Benvie (12 years ago)

On 9/25/2013 12:20 PM, Rick Waldron wrote:

This list has gone around the new-pragma block several times, and in the end the following (or something similar) holds: adding a new pragma effectively creates a new "version" of the language, and that's not going to happen any time soon.

Indeed. Also, a pragma isn't really a good way to fix this. A better solution is to use the module system to allow importing pristine/non-method versions of the functions. This was already proposed, as an example, for {call, bind, apply} from "@function".
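A sketch of what that drafted "@function" built-in module might have looked like; it was never standardized, and the call signature assumed here (call(fn, thisArg, ...args)) is an assumption, not a confirmed API:

```js
// Importing a pristine, non-method form of Function.prototype.call from
// the drafted (never-shipped) "@function" built-in module.
import { call } from "@function";

// Even if some script later overwrites Function.prototype.call, code
// compiled against the imported binding still uses the original:
const toArray = (listLike) => call(Array.prototype.slice, listLike);
```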

# Rick Waldron (12 years ago)

On Wed, Sep 25, 2013 at 3:31 PM, Brandon Benvie <bbenvie at mozilla.com> wrote:

On 9/25/2013 12:20 PM, Rick Waldron wrote:

This list has gone around the new-pragma block several times, and in the end the following (or something similar) holds: adding a new pragma effectively creates a new "version" of the language, and that's not going to happen any time soon.

Indeed. Also, a pragma isn't really a good way to fix this. A better solution is to use the module system to allow importing pristine/non-method versions of the functions. This was already proposed, as an example, for {call, bind, apply} from "@function".

This is definitely preferable.

There was an earlier draft for modules that had a line that I interpreted as effectively providing a fresh Global object within the module body, but that could've been wishful thinking.

# Andrea Giammarchi (12 years ago)

inline ...

On Wed, Sep 25, 2013 at 11:27 AM, Michaël Rouges <michael.rouges at gmail.com> wrote:

I personally don't use libraries; I create tools and advise developers every day ... and to be realistic, most don't even know what they include.

Which is why you don't want to promote a pattern that will inevitably be abused, leaving developers with messy code and unpredictable behavior because of these sorts of native sandboxes in the wild.

I remember visiting the site of a web agency where, in the middle of the page, a dialog box was displayed indicating that the webmaster had not paid for a license for a jQuery plugin.

This proposal of yours will never solve that problem, and actually should not, and never will, solve that problem.

So yes, I agree, it's the developer's responsibility, but it would be nice if there were a way to ensure that the tools we offer behave exactly as designed, whatever the practices of the developer who uses them.

There are several old projects/libraries/snippets that created "extendible natives for everyone", and they only increased the mess across developers. I wrote "Elsewhere" code, @jdalton worked on fusejs, and many others had their own sandboxes with native extensions cooler than everyone else's (of course), and nobody used this stuff anyway.

If you don't know what kind of environment you are running in, you don't use methods you don't know/expect to be there; while if it's about securing your own piece of code/library, there are already ways to guard it inside a closure (see the sketch below) and do whatever you want in there.
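A minimal sketch of that closure guard, assuming your script runs before anything has patched the built-ins it captures:

```js
// Capture the built-ins this library depends on at load time; later
// monkey-patching of the prototypes can no longer affect these bindings.
var myLib = (function (slice, hasOwn) {
  'use strict';

  return {
    toArray: function (listLike) {
      return slice.call(listLike); // the captured slice, not the current one
    },
    has: function (obj, key) {
      return hasOwn.call(obj, key);
    }
  };
}(Array.prototype.slice, Object.prototype.hasOwnProperty));
```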

I hope you have enough examples and background to realize this is a bad idea that modules solve more elegantly, and that you still should not extend natives unless absolutely necessary ... not even the ECMAScript folks in here do that ;-)

# Michaël Rouges (12 years ago)

Hmm... closures don't prevent foreign prototyping... and all current sorts of sandboxes are really heavy, much more so than a possibly native solution, mainly performance-wise, like my Sandbox.js (Lcfvs/Sandbox.js).

Michaël Rouges - Lcfvs - @Lcfvs


# David Bruant (12 years ago)

On 25/09/2013 17:41, Michaël Rouges wrote:

Hi all,

Given the number of scripts from various sources that may be contained in a web page, there may be prototyping conflicts.

Be careful about what you include. To be proactive in that process, freeze all built-ins beforehand; you'll know soon enough if something breaks. If you do want to enhance a prototype, isolate that code and run it before freezing the built-ins.
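A minimal sketch of that approach, deliberately not exhaustive (a real lockdown, as in SES/Caja, walks every reachable built-in):

```js
// Run any deliberate prototype enhancements first, then lock things down.
[Object, Array, Function, String, Number, Boolean, RegExp, Date]
  .forEach(function (ctor) {
    Object.freeze(ctor.prototype);
    Object.freeze(ctor);
  });

// From here on, a script that tries to monkey-patch a built-in either
// fails silently or, under 'use strict', throws a TypeError, so
// conflicts surface during development instead of lurking.
```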

The module loader API has something close to what you're asking for: harmony:module_loaders#loader.prototype.definebuiltins_obj

# Andrea Giammarchi (12 years ago)

It does give you private constructors ... so you can retrieve them via your Sandbox library, as an example, and do all the crazy things you want, including re-fixing the surrounding env if needed ... ^_^

Also, since you wrote that library, you already know that having new natives all over is not good for anyone, right? ;-)

Last, but not least, don't freeze natives unless necessary, 'cause you might end up unable to get normal behavior.

A classic example: after Object.freeze(Object.prototype), no class will later be able to redefine a toString method on its proto without going through Object.defineProperty.
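A short illustration of that gotcha; the Point class here is just an example:

```js
Object.freeze(Object.prototype);

function Point(x, y) { this.x = x; this.y = y; }

// Plain assignment is refused: the inherited toString on the frozen
// Object.prototype is non-writable, which blocks shadowing via `=`
// (silently, or with a TypeError under 'use strict').
// Point.prototype.toString = function () { return this.x + ',' + this.y; };

// Object.defineProperty still works: it creates an own property
// regardless of the inherited one's attributes.
Object.defineProperty(Point.prototype, 'toString', {
  configurable: true,
  writable: true,
  value: function () { return this.x + ',' + this.y; }
});

console.log(String(new Point(1, 2))); // "1,2"
```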

A library of mine that works defensively in that direction is redefine.js, while the handiest, the one I prefer, is again the one that adds 4 methods to Object.prototype in a non-enumerable way, so your daily JS code won't be affected ^_^ (hasOwnProperty will be false and for/in loops are not affected ... how lovely)

Good luck though with possible solutions :-)

# Aymeric Vitte (12 years ago)

It's not easy to freeze the world like Caja does, it's not easy to have a library that takes care of it securely, and the use case is not always one where modules can provide a fresh global.

Some years ago, doing widget work inside web pages, I had a "RestoreNativeVar" function restoring natives using strange hooks, like taking them from iframes (no comment...)
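That iframe hook works roughly like this (a sketch; the helper name is hypothetical): a fresh same-origin iframe carries its own untouched set of built-ins that can be copied back.

```js
// Recover a pristine built-in from a throwaway same-origin iframe.
function restoreNative(name) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe);
  var pristine = iframe.contentWindow[name]; // untouched copy
  document.body.removeChild(iframe);
  return pristine;
}

// Usage: repair Array.prototype.slice if some script clobbered it.
// Array.prototype.slice = restoreNative('Array').prototype.slice;
```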

The issue is probably not TC39's alone: looking at the W3C security groups' specs, which apparently have a hard time defining something secure, maybe SES concepts are coming late in the TC39 schedule. Every new Web API defines more globals, so it would be useful to have something that freezes the entire global when you need it, instead of hacking around.

# Andrea Giammarchi (12 years ago)

I think it is not ... or at least it is not a real-world possibility. We have two scenarios:

  1. you know what you include
  2. you are the library included

In the first case you don't need anything, because you know the code and you know no script will hurt, so you can even do whatever you want with natives ... who cares, if that's convenient for you, why not.

In the second case, you might arrive too early, so no other library could work in a frozen env, or too late, when the env has already been modified; you can freeze it then, and live with the trouble for you and other scripts.

I am usually the guy living in the first case ... with as few external dependencies as possible, and where code is OK even if extended in a reasonable way, but I don't think there is a reasonable solution for the second case.

Once again, we all tried this in the past and failed on usability, performance, not-so-secure envs ... etc. etc. ...

In summary: if you have a party in your house ... you'd better know your guests :D

# Aymeric Vitte (12 years ago)

There are other cases, like malicious code injection.

I don't know if it's really feasible without redefining the DOM on top of it, but I have felt the need for a long time: something that makes sure you are using the right XMLHttpRequest (or other API) and not a modified one.

And something that prevents a bad script from doing something like setTimeout(change_globals, xxx).


Aymeric


# David Bruant (12 years ago)

On 26/09/2013 00:48, Aymeric Vitte wrote:

It's not easy to freeze the world like Caja does, and it's not easy to have a library that takes care of it securely

Good thing Caja is open source, right? ;-) In the case Michaël Rouges seems to describe, it can be enough to do that at development time. He wants to be aware of conflicts, not secure isolation of mutually suspicious libraries.

# David Bruant (12 years ago)

On 26/09/2013 09:10, Aymeric Vitte wrote:

There are other cases, like malicious code injection.

CSP goes a long long way in preventing these.
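For instance, a server can whitelist script sources so that injected inline code simply never runs. A minimal sketch (the policy value is illustrative):

```js
// Node.js: serve a page whose scripts may only come from the page's own
// origin; inline and injected third-party scripts are refused by the browser.
var http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Content-Security-Policy', "script-src 'self'");
  res.setHeader('Content-Type', 'text/html');
  res.end('<!doctype html><script src="/app.js"></script>');
}).listen(8080);
```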

I don't know if it's really feasible without redefining the DOM on top of it, but I have felt the need for a long time: something that makes sure you are using the right XMLHttpRequest (or other API) and not a modified one.

In environments where you don't have the option of freezing all built-ins, running your code first allows you to keep a reference to the initial XMLHttpRequest. You may then wonder: "how can I enforce running first?" => If you're in control of the webpage, it really is up to you to run trusted code before any other scripts. If you're building a library others will include at any point in time they want, hope for a standard environment and expect nothing.
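A minimal sketch of that "run first" tactic, assuming this snippet really is the first script the page executes:

```js
// Capture the pristine constructor before any other script can replace it.
var SafeXHR = window.XMLHttpRequest;

// Later, even after some script does `window.XMLHttpRequest = Evil;`,
// code that closed over SafeXHR still talks to the original:
function fetchText(url, onDone) {
  var xhr = new SafeXHR();
  xhr.open('GET', url);
  xhr.onload = function () { onDone(xhr.responseText); };
  xhr.send();
}
```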

And something that prevents a bad script from doing something like setTimeout(change_globals, xxx).

The problem here is exactly the same as with viruses. The only reason viruses exist at all is the design mistake that led operating systems to consider that programs you (the human being) start run with the same authority as you. Indeed, any program you run can throw away all your files, send them over the network to whatever server, etc. Hence viruses. Seriously, how did we end up in a world where Windows Vista (and later? I've stopped using Windows) thinks it's a good idea to ask for confirmation after pretty much any click where code could be run? If Windows has a way to know the second click is secure, how much harder is it to make sure the first click is secure!! This is insane!

Related is this interesting talk: "The SkyNet Virus - Why it is Unstoppable; How to Stop it" (www.erights.org/talks/skynet). It's old and low quality (the video, not the content), but it's really good.

In my opinion, the right question isn't "how do I prevent a bad script from doing something like ...?", but rather "how can I make sure that any script can be loaded only with the authority it needs?". How did the bad script you're talking about end up with the authority to change globals? CSP is a good start: it says "don't run scripts that aren't whitelisted". Scripts you haven't chosen are prevented from running at all; for sure, they won't change globals. Caja and the module loader allow running partially trusted code with fine-grained (object-level) control over the exact amount of authority you grant.

In any case, the problem starts (and likely ends) with how code is loaded. When this problem is solved (and I think it pretty much is, with CSP and Caja/module loaders), most script-related security problems become solvable.

# Aymeric Vitte (12 years ago)

I have just commented on the "Safe, closure-free..." thread and given some thoughts about CSP. As mentioned, I have a hard time understanding what it protects: there is no mechanism for setting script authorities independently; see the example I provided and the responses I got.

For those interested, I provided in the CSP thread a link to a FF bug report explaining how a security policy (here, the WebSocket spec) forces me to do insecure things. I don't know what list can take care of it; there is a discussion in [1] too. For now I have not seen really solid arguments showing that I could be wrong.

Maybe a solution could be a combination of CSP and SES. I think SES should come now; as far as I remember it is planned for ES8, which seems too late.

Solving the code loading issue is indeed the key point, but is it feasible?


Aymeric

[1] groups.google.com/forum/#!topic/mozilla.dev.security/6qBHmVAhtYY


# David Bruant (12 years ago)

On Thu, 26 Sep 2013 11:11:40 CEST, Aymeric Vitte wrote:

For those interested, I provided in the CSP thread a link to a FF bug report explaining how a security policy (here, the WebSocket spec) forces me to do insecure things. I don't know what list can take care of it; there is a discussion in [1] too. For now I have not seen really solid arguments showing that I could be wrong.

I answered on the webappsec thread. Firefox blocks mixed content for good reasons. When receiving an HTTPS page, the browser shows lots of signs that the page is secure. If the page starts loading code/style/content over HTTP, those are subject to man-in-the-middle attacks, and suddenly the browser gives a false sense of security to the user. Firefox isn't forcing you to do insecure things. Firefox is forcing you to make a choice: go all the way secure (so that it can show a strong signal to the user) or use HTTP.

Maybe a solution could be a combination of CSP and SES. I think SES should come now; as far as I remember it is planned for ES8, which seems too late.

SES exists now... sort of... with Caja. You don't need to wait, it's already available. Module loaders are also a major step forward.

Solving the code loading issue is indeed the key point, but is it feasible?

Can you describe ways in which it isn't?

# Aymeric Vitte (12 years ago)

On 26/09/2013 11:43, David Bruant wrote:

On Thu, 26 Sep 2013 11:11:40 CEST, Aymeric Vitte wrote:

For those interested, I provided in the CSP thread a link to a FF bug report explaining how a security policy (here, the WebSocket spec) forces me to do insecure things. I don't know what list can take care of it; there is a discussion in [1] too. For now I have not seen really solid arguments showing that I could be wrong.

I answered on the webappsec thread. Firefox blocks mixed content for good reasons. When receiving an HTTPS page, the browser shows lots of signs that the page is secure. If the page starts loading code/style/content over HTTP, those are subject to man-in-the-middle attacks, and suddenly the browser gives a false sense of security to the user.

Mixed content is not blocked today. Again, it's difficult to say which is more insecure between http with https and https with http; the first one has been subject to MITM attacks from the beginning.

Firefox isn't forcing you to do insecure things. Firefox is forcing you to make a choice: go all the way secure (so that it can show a strong signal to the user) or use HTTP.

I am not saying FF is the problem; FF follows the WebSocket spec, which does not allow ws with https. I am explaining why I cannot use wss (routers cannot have trusted certificates), so I am forced to fall back to http. It's easy to deny the issue, but that's a real-life use case.

Maybe a solution could be a combination of CSP and SES. I think SES should come now; as far as I remember it is planned for ES8, which seems too late.

SES exists now... sort of... with Caja. You don't need to wait, it's already available. Module loaders are also a major step forward.

Not very intuitive to use as far as I remember.

Solving the code loading issue is indeed the key point, but is it feasible?

Can you describe ways in which it isn't?

Do you know a way (even a theoretical one) to safely load code with web mechanisms that can defeat a MITM? This would necessarily imply another check mechanism on top of SSL/TLS.

# David Bruant (12 years ago)

On 26/09/2013 12:16, Aymeric Vitte wrote:

On 26/09/2013 11:43, David Bruant wrote:

On Thu, 26 Sep 2013 11:11:40 CEST, Aymeric Vitte wrote:

For those interested, I provided in the CSP thread a link to a FF bug report explaining how a security policy (here, the WebSocket spec) forces me to do insecure things. I don't know what list can take care of it; there is a discussion in [1] too. For now I have not seen really solid arguments showing that I could be wrong.

I answered on the webappsec thread. Firefox blocks mixed content for good reasons. When receiving an HTTPS page, the browser shows lots of signs that the page is secure. If the page starts loading code/style/content over HTTP, those are subject to man-in-the-middle attacks, and suddenly the browser gives a false sense of security to the user.

Mixed content is not blocked today.

It is by IE and Chrome. I don't remember which version it is for Firefox, but it should be deployed already or will be very soon. It's close enough to being a de facto standard. Only the thousands of edge cases need to be agreed upon.

Again, it's difficult to say which is more insecure between http with https and https with http; the first one has been subject to MITM attacks from the beginning.

None is secure.

Firefox isn't forcing you to do insecure things. Firefox is forcing you to make a choice: go all the way secure (so that it can show a strong signal to the user) or use HTTP.

I am not saying FF is the problem; FF follows the WebSocket spec, which does not allow ws with https. I am explaining why I cannot use wss (routers cannot have trusted certificates)

Sorry, I had missed this info.

so I am forced to fall back to http. It's easy to deny the issue, but that's a real-life use case.

I don't know the details of your particular case, but a line has to be drawn somewhere. Web standard security is always butchered because of these real-life use cases. How important are they? How hard would it be for real-life infrastructure to upgrade? For sure, once a standard allows something insecure (loading HTTP content in an HTTPS page), it remains pretty much forever. This is less true for your router, I imagine. Blocking mixed content became possible in Firefox because IE and Chrome were doing it long ago.

Maybe a solution could be a combination of CSP and SES. I think SES should come now; as far as I remember it is planned for ES8, which seems too late.

SES exists now... sort of... with Caja. You don't need to wait, it's already available. Module loaders are also a major step forward.

Not very intuitive to use as far as I remember.

Let's go step by step. You were initially asking for "possible", and I gave you "possible" :-) I know the Caja team has been very reactive and patient with my questions in the past. If you have intuitiveness and API issues, I'm sure they'll be happy to hear about them and will certainly guide you through how to use Caja.

Solving the code loading issue is indeed the key point, but is it feasible?

Can you describe ways in which it isn't?

Do you know a way (even theoretical) to safely load code with web mechanisms that can defeat a mitm? This would necessarly imply another check mechanism on top of SSL/TLS

My knowledge stops at a two-layer answer. The first layer is: load everything with HTTPS and you'll be fine. For the vast majority of people and use cases, this is enough. The second layer is that HTTPS has some known flaws [1], including in the certificate authority mechanism, but it takes some work to exploit them. Moxie Marlinspike proposed a solution in the form of the Convergence Firefox addon (convergence.io) ... which has been broken since Firefox 18 (an API changed and the addon wasn't updated). But the idea is interesting.

David

[1] See www.youtube.com/watch?v=Z7Wl2FW2TcA (the anecdote about SSL design at ~16' is hilarious...ly scary) and other work by Moxie Marlinspike on that topic

# Aymeric Vitte (12 years ago)

This probably isn't the right list to discuss all this, so some last thoughts and I'll stop here. I am not raising these points for myself but for everybody (so I am not going to use Caja, but a built-in SES when available).

The project is [1]; routers cannot have valid certificates, for the reasons you can imagine. Convergence is somewhat similar to [2] and [3], which explain/solve what is not politically correct to say/do: using untrusted certificates can secure you more than using trusted certificates.

I believe Convergence has become impossible because major web sites keep changing their certificates all the time, so it's completely impossible to identify them.

As I wrote again in the bug report, browser policy is obscure for supposedly untrusted certificates: there are different rules depending on whether it's the main page, a normal resource, an iframe, etc., and you cannot easily add an exception for subsequent resources.

So if the rule has to be that you cannot downgrade from https to http, which is somewhat logical if you forget that you might not trust SSL/TLS, then browsers should give users the ability to add exceptions and to view/check certificates (I have tried to promote the feature of exposing certificates to JS in WebCrypto, with no success so far).

And if you can do this, then you no longer have a big issue with code loading; if not, you must do something else.


Aymeric

[1] www.peersm.com [2] www.ianonym.com [3] www.ianonym.com/intercept.html


# Mark S. Miller (12 years ago)

On Thu, Sep 26, 2013 at 3:16 AM, Aymeric Vitte <vitteaymeric at gmail.com> wrote:

[...]

SES exists now... sort of... with Caja. You don't need to wait, it's already available. Module loaders are also a major step forward.

Not very intuitive to use as far as I remember.

Please try again and let us know what problems or confusions you run into, so we can make it more intuitive.

Any SES support coming to standard ES will be based on experience and lessons from the existing SES, so getting this feedback early would be most helpful. Thanks.

# Mark S. Miller (12 years ago)

On Thu, Sep 26, 2013 at 3:59 AM, David Bruant <bruant.d at gmail.com> wrote:

[...]

Not very intuitive to use as far as I remember.

Let's go step by step. You were initially asking for "possible", and I gave you "possible" :-) I know the Caja team has been very reactive and patient with my questions in the past. If you have intuitiveness and API issues, I'm sure they'll be happy to hear about them and will certainly guide you through how to use Caja.

David, thanks! Aymeric, yes, what David said ;).

# Aymeric Vitte (12 years ago)

On 26/09/2013 17:27, Mark S. Miller wrote:

David, thanks! Aymeric, yes, what David said ;).

OK, I will look at it again and try it.


Aym