New Set methods - again

# Michał Wadas (7 years ago)

Hey.

A year ago, I wrote a proposal for new Set methods: esdiscuss.org/topic/new-set-prototype-methods.

Today I have written most of its formal spec ( ginden.github.io/set-methods ) and I would like to know your thoughts. I would especially appreciate verification of the formal spec for the .union method.
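For anyone skimming, the observable behaviour I'm aiming for with .union is roughly the following (a plain JS sketch only, not the spec steps; the method lives on Set.prototype and accepts any number of iterables):

// Illustrative sketch only; the spec algorithm differs in detail.
function union(set, ...iterables) {
  // Copy the receiver, then add every element of every iterable argument.
  const result = new Set(set);
  for (const iterable of iterables) {
    for (const element of iterable) {
      result.add(element);
    }
  }
  return result;
}

union(new Set([1, 2]), [2, 3], new Set([4])); // Set { 1, 2, 3, 4 }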

Michał Wadas

PS. I have submitted this proposal to the TC39 committee (following the instructions at tc39.github.io/process-document), but I didn't get any feedback about rejection or acceptance of the strawman. Should I submit it again, or should I consider no response a rejection?

# Darien Valentine (7 years ago)

I've often found myself re-implementing this functionality, and I'm a big fan of this proposal, minor reservations I expressed way back about Set.prototype.union vs Set.union (etc.) aside. Nice work on beginning the formal spec; I really hope this picks up steam.

Out of curiosity, what is the rationale behind the choices for which non-set-specific, currently-array-only methods to implement? For example there is map, which saves you one array cast and one set cast, and find, which saves you one array cast, but not, e.g., reduce, which would also save you one array cast.

# Peter Jaszkowiak (7 years ago)

Out of curiosity, what is the rationale behind the choices for which non-set-specific, currently-array-only methods to implement?

One rationale is that these are not Array-specific functions either, and they are as useful, if not more so, with Sets as with Arrays. In my experience, one of the main reasons people don't use Set and Map is that they don't have these helpful utility functions.

For example there is map, which saves you one array cast and one set cast, and find, which saves you one array cast, but not, e.g., reduce, which would also save you one array cast.

reduce doesn't make sense for Sets because it is an ordered operation. Sets are unordered, so actually having forEach (without the index parameter) makes more sense than reduce.

Having these chainable utility functions on Set.prototype will make dealing with Sets much easier for developers, which will increase their usage, making proposals for Array.prototype.union, etc less relevant.

In my opinion, a lot more attention should be paid to building and strengthening the standard library of the language instead of adding more and more syntax.

A couple things I'd like to see added / changed with the proposal, though:

  • Why can't Set.prototype.add be simply extended to accept multiple parameters? This would eliminate the need for addElements. If addElements is going to be a thing, then it should accept a single Iterable parameter instead.
  • IMO, forEach should be added if all of these other functional methods are, but it should return this as opposed to Array.prototype.forEach which returns undefined. Alternatively, name it .each to signify this slight semantic difference.
  • .flatMap would be nice to have, flattening Iterables into the new Set, and accepting non-iterables as singular values added to the Set (see the sketch after this list)
  • I imagine that the concerns over .union, .intersect, etc being slow when accepting multiple Iterables is avoidable by adding optimizations for single Sets, multiple Sets, and single iterables
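To make the .flatMap point concrete, this is roughly the behaviour I mean (an illustrative sketch only; the callback signature here mirrors Set.prototype.forEach, and the name is obviously up for debate):

function setFlatMap(set, callback) {
  const result = new Set();
  for (const value of set) {
    const mapped = callback(value, value, set);
    if (mapped != null && typeof mapped[Symbol.iterator] === 'function') {
      // Iterable results are flattened into the new Set...
      for (const item of mapped) {
        result.add(item);
      }
    } else {
      // ...non-iterables are added as single values.
      result.add(mapped);
    }
  }
  return result;
}

setFlatMap(new Set([1, 2, 3]), x => [x, -x]); // Set { 1, -1, 2, -2, 3, -3 }
setFlatMap(new Set([1, 2, 3]), x => x * 10);  // Set { 10, 20, 30 }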

As an aside to this proposal, you mentioned an alternative of adding some of these methods to %IteratorPrototype%. I think both are good ideas, but the problem is that really, only .map, .flatMap, and .filter make sense with generic Iterators if you want to maintain laziness.
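For example, a lazy map over a generic iterator is easy to express with a generator; this is a hand-rolled sketch of the kind of helper %IteratorPrototype%.map could provide (not part of any current spec):

// Nothing is computed until the result is actually iterated.
function* lazyMap(iterator, callback) {
  for (const value of iterator) {
    yield callback(value);
  }
}

const squares = lazyMap(new Set([1, 2, 3]).values(), x => x * x);
console.log([...squares]); // [1, 4, 9]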

Overall, really good stuff, Michał.

# Jordan Harband (7 years ago)

Set and Map both already have forEach, and both return undefined, just like Array.prototype.forEach.

# Peter Jaszkowiak (7 years ago)

Yes I realised that right after sending the reply. Don't know what I was thinking before.

# Darien Valentine (7 years ago)

Sets are unordered, so actually having forEach (without the index parameter) makes more sense than reduce.

When we work with sets, we do often treat them as unordered, like one would in math I suppose, but the ES data structure does have a defined iteration order, and its forEach method does call its callback with indices. Set instances are given an internal [[SetData]] slot which is defined as a list and "set iterator" uses this and iterates in a deterministic order.
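For instance, iteration order is observably insertion order:

const s = new Set();
s.add('b');
s.add('a');
s.add('b');          // already present; its position doesn't change
console.log([...s]); // ['b', 'a'], insertion order, not sorted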

reduce doesn't make sense for Sets because it is an ordered operation.

Ordered-ness of sets in ES aside, there’s nothing inherent to reduction which makes it suitable only with ordered collections; consider the classic reduce example, summing numbers.
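For instance:

// Addition is associative and commutative, so the result is the same
// whatever order the elements happen to be visited in.
const total = [3, 1, 4, 1, 5].reduce((sum, n) => sum + n, 0); // 14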

Not that I’m advocating for or against the addition of reduce here; that was just an example. The choices just seemed to me a bit arbitrary. (Generic iterator methods would indeed be nice, except I’m not sure how it would work, since pushing another prototype into the chain between Object and Array, Object and Set, Object and Map, etc. would be web-breaking.)

# Peter Jaszkowiak (7 years ago)

its forEach method does call its callback with indices.

This is incorrect. Both the value and key arguments are equal for Sets, equal to the current element in the Set.

From a different reply which I mistakenly forgot to CC to esdiscuss:

True, Sets are technically ordered by insertion order in their iterator, but they aren't indexed, so particular elements can't be selected the way they can with an Array, and they can't be sorted. reduce, in my opinion, still makes little sense given those facts, especially since the most common use of reduce, in my experience, is to create an object from an array of pairs; that case is handled better by just new Map(your_set_of_pairs).
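For instance:

// The Map constructor consumes an iterable of [key, value] pairs directly,
// so a Set of pairs needs no reduce step at all.
const pairs = new Set([['a', 1], ['b', 2]]);
const lookup = new Map(pairs);
lookup.get('b'); // 2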

Reduce seems to make more sense as an ordered operation, but in an attempt to justify adding reduce, here's a use case for an unordered reduce that can't be solved with any other method proposed here, including .flatMap.

const product = numbers.reduce((prev, x) => prev * x, 1);

This can't be done without reduce unless something like Math.product gets introduced (which I think it should), but what about chained exponents? If I want to compute a^b^c^d^e^f, etc., I'd still need something like reduce for that.
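Roughly like this (note that the fold direction matters for exponentiation, so this reads the chain left to right; reduceRight on an Array would give the other grouping):

const exponents = [2, 3, 2];
// ((2 ** 3) ** 2) === 64
const chained = exponents.reduce((acc, x) => acc ** x); // 64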

I tried to think of a more complex use case for an index-less reduce method, but I couldn't. I thought maybe combinations of elements, but without a way to select a certain subset like Array#slice, it's not simple at all. For instance, with an array, you can do the following:

// this works because we have a placeholder `i`
Array.isArray(players) === true;

players.reduce((matches, player, i) => {
  return [
    ...matches,
    ...players.slice(i + 1).map(vsPlayer => new Match(player, vsPlayer)),
  ];
}, []);

// without the index it is not trivial
// it's also much less efficient, probably better to just cast to an Array first
Set.isSet(players) === true;

players.reduce((matches, player, key) => {
  // key === player for Sets
  return [
    ...matches,
    ...players
      .filter(vsPlayer =>
        matches.some(match =>
          !match.players.includes(player) || !match.players.includes(vsPlayer)
        )
      )
      .map(vsPlayer => new Match(player, vsPlayer)),
  ];
}, new Set());

The only other orderless operations that I can think of are things like summing or multiplying numbers, which I do think should be added to the standard library as Math.sum and Math.product, but it should be possible to do without them. So in that case, I guess I agree that .reduce should exist on Set and Map.

I don't see many use cases for .reduce, but for consistency, it should be added.

# Darien Valentine (7 years ago)

Both the value and key arguments are equal for Sets

Derp, forgot about that, you’re right. The internal algorithm references the indices, but they aren’t exposed.

[examples of unordered reduce]

There are quite a few I think — in fact reductions that are order-agnostic are a pretty important concept for parallelism, as seen in the wild in places like MongoDB’s aggregation pipeline. Not super applicable here, but just throwing it out there. I didn’t mean to draw attention specifically to this method.