Andy Wingo (2013-05-13T08:22:32.000Z)
Hi,

On Sun 12 May 2013 21:29, Allen Wirfs-Brock <allen at wirfs-brock.com> writes:

> 1) I specified yield* such that it will work with any iterator, not
> just generators.  To me, this seems essential.  Otherwise client code
> is sensitive to other people's implementation decisions (that are
> subject to change) regarding whether to use a generator or an object
> based iterator.  This was easy to accomplish and only requires a
> one-time behavioral check to determine whether "next" or "send" should
> be used to retrieve values from the delegated iterator and a behavior
> guard on invoking "throw" on the delegated iterator.

Are you checking structurally or are you checking the internal "brand"?
I can think of situations where you would want to decorate a generator
iterator, producing an object with the same interface but not actually a
generator iterator.  Perhaps one could decorate with another generator,
though.
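
Something like this, say ("logged" is just an illustration, of course):

  // A decorator with the same next/throw interface as the inner
  // generator iterator, but which is not itself a generator iterator,
  // so a brand check on the RHS of yield* would reject it.
  function logged(genIter) {
    return {
      next: function (v) {
        var result = genIter.next(v);
        console.log("produced:", result.value);
        return result;
      },
      throw: function (e) { return genIter.throw(e); }
    };
  }

Relatedly: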

> 2) yield* invokes the @@iterator method on its expression to obtain the
> iterator. This means you can say things like:
>     yield * [2,4,6,8,10]; // individually yield the even integers <= 10.

Is it wise to do this?  It would also be possible to define

  function* iterate(iterable) { for (let x of iterable) yield x; }

and that would provide a uniform interface to the RHS of a yield*, and a
natural point at which a throw() to a generator suspended in a yield*
would raise an exception.
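
Then one would write

  yield* iterate([2,4,6,8,10]);

and the delegated-to object would always be a generator iterator.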

> 3) yield* yields the nextResult object produced by the inner
> iterator. No unwrapping/rewrapping required.

Does this ensure that the result is an object and has "value" and "done"
properties?
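
The case I have in mind ("bogus" is hypothetical, of course):

  // An "iterator" whose next() result is not a nextResult object.
  // (Assume it is also its own @@iterator, so yield* accepts it.)
  var bogus = { next: function () { return 42; } };
  function* g() { yield* bogus; }

Does the yield* throw here, or does the outer generator's consumer just
see 42 where an object with "value" and "done" should be?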

> 4) I haven't (yet) provided a "close" method for generators.  I still
> think we should.

Let me try to summarize some thoughts on close().  I'll start from one
of your use cases.

>       b) Any time user code is manually draining a known generator that
> it opened and decides that it is now done with the generator. They
> really should close it.  Of course, they may not, but regardless they
> should be provided with a means to do so and it should be encouraged as
> a best practice.

I think the question to ask is, why do you think this is a good
recommendation?  It can't be for general resource cleanup issues,
because otherwise iterators would also have a close method.  So I am
inclined to think that it is because you see "finally" in the source
code, and you treat that as a contract with the user that a finally
block actually does run for its effects.

But that's precisely what we can't guarantee: unlike function
activations, the dynamic extent of a generator activation is unlimited.
We don't have finalizers, so we can't ensure that a finally block runs.
And once we allow for that possibility, it seems to me that close() is
not only less useful, but that by making a kind of promise that we can't
keep, it can be harmful.
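
To make that concrete:

  // Nothing obliges anyone to resume this generator to completion, so
  // the finally block may simply never run.
  function* f() {
    try {
      yield 1;
      yield 2;
    } finally {
      console.log("cleanup");
    }
  }
  var g = f();
  g.next();  // { value: 1, done: false }
  g = null;  // activation dropped; "cleanup" is never printed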

close() also complicates a user's mental model of what happens when they
see a "yield".  I see "yield x" and I think, OK, this suspends
computation.  If the yield is in a value position, that tells me that it
might also produce a value, namely whatever is sent in when the
generator is resumed.  That it could throw an exception is less
apparent, so you have to remember that.  But then you also have to
remember that it might act like a "return"!  It's that second invisible
behavior that tips my mental balance.
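
Concretely, with the Python-like semantics I understand you to be
proposing (close() here is the hypothetical method, not anything in the
current draft):

  function* g() {
    try {
      var sent = yield 1;     // close() makes this yield act like a
      console.log(sent);      // "return": this line never runs...
    } finally {
      console.log("finally"); // ...but the finally still does
    }
  }
  var it = g();
  it.next();   // suspended at the yield
  it.close();  // finishes the activation, running the finally block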

Of course a close() doesn't actually force a generator activation to
finish; there is the possibility of exceptions or further yields in
finally blocks.  In this case Python will re-queue the generator for
closing, so you do finally run all the finallies -- but again, we don't
guarantee that, so add that to the mental model of what "close" does...
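
For example:

  function* g() {
    try {
      yield 1;
    } finally {
      yield 2;  // suspends again in the middle of close(): the
    }           // activation outlives the close() call
  }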

close() also adds restrictions on the use of generator objects.  For
example it's very common to use a loop variable after a loop:

   for (var i = 0; i < N; i++) {
     ...
     if (foo) break;
   }
   ...
   // here we use i

One can imagine situations in which it would be nice to use a generator
object after a break, for example in lazy streams:

  function* fib() {
    var x = 0, y = 1;
    yield x;
    yield y;
    while (true) {
      let z = x + y;
      yield z;
      x = y;
      y = z;
    }
  }
  function for_each_n(f, iter, n) {
    if (n) {
      for (let x of iter) {
        f(x);
        if (--n === 0)
          break;
      }
    }
  }
  var iter = fib();
  for_each_n(x => console.log(x), iter, 10);  // first 10
  for_each_n(x => console.log(x), iter, 10);  // next 10

If breaking out of the for-of implied a close() on iter, the second
for_each_n call would find an already-finished generator instead of
picking up where the first one left off.

In summary my problems with close() are these:

 (1) It attempts to provide a reliable finally for unlimited-extent
     activations, when we can't do that.

 (2) It complicates the mental model of what happens when you yield.

 (3) It makes common sugar like for-of inappropriate for some uses of
     generator objects.

WDYT?  Not to distract you too much from the new draft, of course :)

Cheers,

Andy