Allen Wirfs-Brock (2014-10-27T20:27:09.000Z)
On Oct 27, 2014, at 12:44 PM, C. Scott Ananian wrote:

> It seems to me that you can get the desired behavior even without
> exposing an internal "generic maximum length" method by simply
> arranging so that the mutations happen at the largest index first.
> This effectively ensures that the exception precedes mutation.
> 
> This spec trick should be applicable to any generic mutation method,
> using temporary helpers, if needed, when the maximum affected index is
> not immediately apparent from the parameters.  Implementers can use
> in-place update and compensating mutations on exception, regardless of
> the spec mechanism used to ensure mutations are not visible upon
> exception.
> --scott

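Concretely, the reordering being suggested could look something like the following sketch. It is purely illustrative: `genericFill` and the sealed-array target are stand-ins assumed for the example (not spec algorithms or anything from this thread); the only point is that the first write lands on the highest affected index, so a fixed-length target rejects the call before any element has changed.

```js
'use strict';
// Purely illustrative sketch of the "largest index first" ordering; the
// genericFill helper and the sealed-array target are assumptions for this
// example, not anything from the spec or this thread.
function genericFill(target, value, start, end) {
  // Perform the first write at the largest affected index; if the target
  // rejects it, the exception is raised before any other element changes.
  target[end - 1] = value;
  for (var i = start; i < end - 1; i++) {
    target[i] = value;
  }
}

// A sealed array stands in for a fixed-length target: in strict mode a
// write past its current length throws.
var fixed = Object.seal([0, 0, 0]);

try {
  genericFill(fixed, 7, 0, 5); // the very first write, fixed[4], throws
} catch (e) {
  // fixed is still [0, 0, 0]: no mutation is observable after the exception
}
```
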
The problem is that the "desired behavior" is an observable change from the currently observable behavior, and there may be existing code that depends upon that behavior.

For example, how do we know whether or not somewhere on the web somebody is doing this:

```js
var q = new Int32Array(10);
var n, f;

// populate the buffer
// ...

while (somePredicate(q)) {
  f = q[q.length - 1];
  n = process(f);
  try {
    Array.prototype.unshift.call(q, 0);
  } catch (e) {
    // ignore the exception from unshifting f off the end of q
  }
  q[0] = n;
}
```

It's odd code that should flunk any reasonable code review. But we see lots of odd code on the web, and we generally try not to break it.

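To spell out the observable difference in question, here is another purely illustrative snippet (a sealed array again stands in for a fixed-length target; none of this is typed-array spec behavior): with a low-to-high write order, the partial mutation remains visible after the exception, which is exactly the kind of state a loop like the one above could end up depending on.

```js
'use strict';
// Purely illustrative: a sealed array rejects writes past its current
// length, standing in for a fixed-length target.
var a = Object.seal([0, 0, 0]);

try {
  // Low-to-high write order: indices 0..2 succeed before index 3 throws.
  for (var i = 0; i < 5; i++) {
    a[i] = 7;
  }
} catch (e) {
  // ignore, as the loop above ignores the unshift exception
}

console.log(a); // [7, 7, 7] -- the partial mutation is visible afterwards.
// Under a "largest index first" respecification the out-of-range write
// would come first and `a` would still be [0, 0, 0] at this point.
```
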
Is the benefit of respecifying these algorithms worth the risk of such breakage?

Allen