Extensible destructuring proposal

# Samuel Hapák (9 years ago)

Hello there,

I have written a proposal to extend the standard behavior of object destructuring: vacuumlabs/es-proposals/blob/master/extensible-destructuring.md

The main idea is that object destructuring is currently useless for people who use Map or Immutable.Map (facebook.github.io/immutable-js) as their main data structure.

This would allow library authors to create map-like structures that support destructuring. It extends the language in a way similar to the iterator protocol.

The main idea is that if an object defines a Symbol.get method, that method is used to access the object's properties during destructuring, instead of [].
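The proposed semantics can be modeled today with an ordinary symbol standing in for the proposed well-known Symbol.get (which does not exist; `Dict` and `get` below are illustrative stand-ins):

```javascript
const get = Symbol('Symbol.get');  // stand-in for the proposed well-known symbol

// A map-like class opting in to the proposed destructuring protocol:
class Dict {
  constructor(entries) { this.map = new Map(entries); }
  [get](key) { return this.map.get(key); }
}

// Proposed: const {a, b} = dict;
// Intended desugaring, roughly:
const dict = new Dict([['a', 1], ['b', 2]]);
const a = dict[get]('a');  // instead of dict['a']
const b = dict[get]('b');
// a === 1, b === 2
```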

What do you think?

# Andreas Rossberg (9 years ago)

On 21 July 2015 at 08:14, Samuel Hapák <samuel.hapak at vacuumapps.com> wrote:

I have written a proposal to extend the standard behavior of object destructuring:

vacuumlabs/es-proposals/blob/master/extensible-destructuring.md

The main idea is that object destructuring is currently useless for people who use Map or Immutable.Map (facebook.github.io/immutable-js) as their main data structure.

This would allow library authors to create map-like structures that support destructuring. It extends the language in a way similar to the iterator protocol.

The main idea is that if an object defines a Symbol.get method, that method is used to access the object's properties during destructuring, instead of [].

What do you think?

People reading code will (rightfully) expect destructuring to be syntactic sugar for property access. I don't think it's worth breaking that equivalence. If you want user-defined patterns then they should be syntactically distinct from existing forms.

Also, destructuring patterns are meant to match, and be the logical inverse of, literal syntax. Consequently, from my perspective at least, extensible destructuring would require first introducing extensible literal syntax in a symmetric manner. I think it would be unnecessarily surprising if for random patterns they didn't match up (worse, if you cannot even tell the meaning syntactically, but it depends on whatever object you happen to get).

# Claude Pache (9 years ago)

Le 21 juil. 2015 à 08:14, Samuel Hapák <samuel.hapak at vacuumapps.com> a écrit :

Hello there,

I have written a proposal to extend the standard behavior of object destructuring: vacuumlabs/es-proposals/blob/master/extensible-destructuring.md

The main idea is that object destructuring is currently useless for people who use Map or Immutable.Map (facebook.github.io/immutable-js) as their main data structure.

This would allow library authors to create map-like structures that support destructuring. It extends the language in a way similar to the iterator protocol.

The main idea is that if an object defines a Symbol.get method, that method is used to access the object's properties during destructuring, instead of [].

What do you think?

Cheers,

Samuel

An alternative to Symbol.get is a Proxy with a "get" trap. Maybe worth experimenting with Proxies?

For example, one can imagine something like:

    const {author: {name: {first, last}, birthdate}} = book.toObject()

where .toObject() is a method returning a proxy or, in implementations lacking proxies, falling back to eagerly converting the structure into a plain object.
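A sketch of this idea: toObject() returns a Proxy whose "get" trap reads lazily from the underlying map, so nested destructuring works without eager conversion (toObject is the hypothetical method from above; a plain Map stands in for an immutable map):

```javascript
function toObject(map) {
  return new Proxy({}, {
    get(_, key) {
      const value = map.get(key);
      // wrap nested maps on demand instead of converting eagerly
      return value instanceof Map ? toObject(value) : value;
    },
  });
}

const book = new Map([
  ['author', new Map([
    ['name', new Map([['first', 'Joanne'], ['last', 'Rowling']])],
    ['birthdate', '10-10-1990'],
  ])],
]);

const { author: { name: { first, last }, birthdate } } = toObject(book);
// first === 'Joanne', last === 'Rowling', birthdate === '10-10-1990'
```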

# Samuel Hapák (9 years ago)

On 21.7.2015, at 9:34, Andreas Rossberg <rossberg at google.com> wrote:

People reading code will (rightfully) expect destructuring to be syntactic sugar for property access. I don't think it's worth breaking that equivalence. If you want user-defined patterns then they should be syntactically distinct from existing forms.

This is already not true for array destructuring: you can use it with any iterable, not only with arrays.

@@get would be implemented only by "map-like" data structures. Destructuring is usually used to extract deeply nested data from an object. I find it hard to believe that someone would deliberately use destructuring on a Map or similar structure to extract its methods:

// very strange use of destructuring; I don't believe anyone is doing this
const {get, has, set} = myMap // instance of Map()

A much more common use case is that you have a nested structure of "map-like" structures and want to easily retrieve values from it.

By the way, this change is backward compatible, because it does not change the behavior of existing code. It just gives library authors the ability to provide users with new data structures.

Also, destructuring patterns are meant to match, and be the logical inverse of, literal syntax. Consequently, from my perspective at least, extensible destructuring would require first introducing extensible literal syntax in a symmetric manner. I think it would be unnecessarily surprising if for random patterns they didn't match up (worse, if you cannot even tell the meaning syntactically, but it depends on whatever object you happen to get).

As I have pointed out, this is not true for iterable destructuring.

[a, b, c] = someIterable; // someIterable does not have to be Array

This can be similarly surprising: someIterable[0] !== a // true
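Samuel's point is directly observable today: array destructuring accepts any iterable, so the destructured values need not correspond to indexed properties at all.

```javascript
// Array destructuring already works on arbitrary iterables, not just arrays:
const someIterable = new Set(['a', 'b', 'c']);
const [a, b, c] = someIterable;
// a === 'a', but someIterable[0] === undefined, so someIterable[0] !== a
```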

Thanks,

Samuel

# Domenic Denicola (9 years ago)

For maps you can just do

const [[k1, v1], [k2, v2], ...rest] = map.entries();

There is no need to add more complexity to object destructuring (which should only be used for objects and their string keys).

# Bergi (9 years ago)

Samuel Hapák schrieb:

The main idea is that if an object defines a Symbol.get method, that method is used to access the object's properties during destructuring, instead of [].

Aw, when I read "extensible destructuring" I had hoped to see an extension of the destructuring syntax, not a change to the semantics of destructuring objects.

What do you think about extractor functions, like in Scala (www.scala-lang.org/old/node/112)?

Instead of unapply or unapplySeq methods we'd probably use an @@extractor (Symbol.extractor) method or so, allowing us to do

 let Map({a, b, c}) = myMap;

 let List(a, b, c) = myList;

 const Map({author: Map({name: {first, last}, birthdate})}) = book;

which would desugar to

 let [{a, b, c}] = Map[Symbol.extractor](myMap);

 let [a, b, c] = List[Symbol.extractor](myList);

 const [{author: __author}] = Map[Symbol.extractor](book);
 const [{name: {first, last}, birthdate}] = Map[Symbol.extractor](__author);

where each extractor method returns an Iterable that is assigned to the "arguments" of the extractor. (A Proxy to lazily destructure object literals in there is a good idea; thanks, @Claude.)
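Bergi's desugaring can be emulated today with a user-defined symbol standing in for the proposed Symbol.extractor (which does not exist; `MapPattern` and `extractor` are illustrative stand-ins):

```javascript
const extractor = Symbol('Symbol.extractor');

const MapPattern = {
  [extractor](map) {
    // Return an iterable whose single element is a lazy object view of the map
    return [new Proxy({}, { get: (_, key) => map.get(key) })];
  },
};

// Emulating: let Map({a, b, c}) = myMap;
const myMap = new Map([['a', 1], ['b', 2], ['c', 3]]);
const [{ a, b, c }] = MapPattern[extractor](myMap);
// a === 1, b === 2, c === 3
```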

This would even allow us to call functions during destructuring; think about module imports:

 // a.js
 export default function factory(options) {
     …
     return moduleInstance;
 }

 // b.js
 const NoOptions = {
     [Symbol.extractor](factory) { return [factory()]; }
 };
 const WithOptions = (...args) => ({
     [Symbol.extractor](factory) { return [factory(...args)]; }
 });

 import NoOptions(A) from "a";
 import WithOptions("config")(A) from "a";
 // A === moduleInstance

What do you think about that?

Bergi

# Samuel Hapák (9 years ago)

On 21.7.2015, at 12:42, Claude Pache <claude.pache at gmail.com> wrote:

For example, one can imagine something like:

   const {author: {name: {first, last}, birthdate}} = book.toObject()

There are two issues I see here:

  • book.toObject() would have to do a deep conversion, which you may not want. Let's say birthdate is an immutable map {day, month, year}*. This would convert birthdate to an Object even though you didn't want that.

  • it can handle only string keys. My proposal could be extended so that this would work:

   const {[someNonStringKey]: value} = someMap;

We already have syntax for that in ES6.

PS: * I know we have Date() for that purpose. Just for demonstration.

Samuel

# Samuel Hapák (9 years ago)

On 21.7.2015, at 15:09, Domenic Denicola <d at domenic.me> wrote:

For maps you can just do

const [[k1, v1], [k2, v2], ...rest] = map.entries();

There is no need to add more complexity to object destructuring (which should only be used for objects and their string keys).

Could you please explain it on example?

Let’s say I have

let book = Map{author: {name: “John”, surname: “Doe”}, birthdate: “10-10-1990”};

How would you extract birthdate? How would you extract name?

Thanks,

Samuel

# Samuel Hapák (9 years ago)

On 21.7.2015, at 16:23, Bergi <a.d.bergi at web.de> wrote:

Aw, when I read "extensible destructuring" I had hoped to see an extension of the destructuring syntax, not a change to the semantics of destructuring objects.

Thanks Bergi,

I find your idea really interesting. If I get it, the benefit of your approach is that the user explicitly specifies how to extract data, instead of relying on an extraction method defined by the object being extracted.

This way, you can create multiple competing extractors, so your solution is a lot more flexible than mine. Actually, my proposal is just a special case of yours: I could implement E[@@extractor] so that it just looks for a __get__ method on the object being extracted and uses it.

The only downside I see is the verbosity.

Could you please elaborate more on the use cases for this? I am not used to Scala, so my imagination is currently very limited. :) I know that I would really, really need different destructuring for different object types (otherwise it is insanely verbose to use immutable maps), but I can't think of a use case for having multiple different destructurings for a single type.

Could you show some examples? Thanks!

Samuel

# Domenic Denicola (9 years ago)

From: Samuel Hapák [mailto:samuel.hapak at vacuumapps.com]

Could you please explain it on example?

Let’s say I have

let book = Map{author: {name: “John”, surname: “Doe”}, birthdate: “10-10-1990”};

How would you extract birthdate? How would you extract name?

Your syntax here is invalid in at least four different ways. Let's assume you wrote something with valid syntax, like

let book = new Map([["author", { name: "John", surname: "Doe" }], ["birthdate", "10-10-1990"]]);

Then:

const [[, { name }], [, birthdate]] = book.entries();

If this is still confusing, you may wish to turn to StackOverflow for clarification, instead of es-discuss. You can test such things in Firefox, although you need to use var instead of const or let.

# Bergi (9 years ago)

Domenic Denicola schrieb:

For maps you can just do

const [[k1, v1], [k2, v2], ...rest] = map.entries();

The problem with this is that you would need to know the order of the keys in the map. Your code only extracts the "first" and "second" key-value pairs, letting us get their keys and values, but this syntax does not allow us to extract the value for a given key from anywhere in the map. Predictability is all fine, but I still consider maps to be inherently unordered.

Bergi

# Domenic Denicola (9 years ago)

Well, the spec says they are ordered, so I'm not sure where you're getting that from.


# Ben Newman (9 years ago)

I too am fond of Scala's extensible pattern matching. Before I knew about Scala's approach, I thoughtlessly agreed with the conventional wisdom that pattern matching and object-oriented programming are necessarily at odds. Letting objects define their own destructuring semantics shows that wisdom to be mistaken, and your Symbol.extractor method feels like a perfectly ECMAScript-y way to implement it.

That said, virtually every time I've written an unapply or unapplySeq method in Scala, it has been with multi-case pattern matching in mind, and we're only talking about destructuring assignment here, which I suppose is like a single-case pattern match.

If you have time/motivation to put together a proposal for Symbol.extractor, I would very much encourage you to think about the possibility of introducing multi-case pattern matching as well, as I think that would add significant value to the proposal, as well as highlighting some tricky issues.

I'd be happy to review and/or help champion such a proposal, if it works out as nicely as I hope!

Ben

# Bergi (9 years ago)

Samuel Hapák schrieb:

I find your idea really interesting. If I get it, the benefit of your approach is that the user explicitly specifies how to extract data, instead of relying on an extraction method defined by the object being extracted. This way, you can create multiple competing extractors

That wasn't my primary goal (indeed most data structures would have only a single extractor). The main benefits I see are:

  • explicit is better than implicit
  • works anywhere in a destructuring pattern, not only the topmost level
  • can be chosen not to be used (e.g. for let {size} = myMap;)

The only downside I see is the verbosity.

That's an advantage imo :-)

Could you show some examples? Thanks!

I don't think there's much more than the ones I already gave. Notice that NoOptions and WithOptions() were actually an example of multiple extractors being used for "destructuring" factory functions (by applying them), though I don't know whether that's the best solution for this import problem.

Bergi

# Bergi (9 years ago)

Ben Newman schrieb:

That said, virtually every time I've written an unapply or unapplySeq method in Scala, it has been with multi-case pattern matching in mind, and we're only talking about destructuring assignment here, which I suppose is like a single-case pattern match.

Right, we don't really have pattern matching in ES yet. I don't want to introduce that as well, it would be a pretty heavy change imo.

If you have time/motivation to put together a proposal for Symbol.extractor, I would very much encourage you to think about the possibility of introducing multi-case pattern matching as well, as I think that would add significant value to the proposal, as well as highlighting some tricky issues.

I've seen that Scala returns an Option for a pattern. My first thought was to make @@extractor return an empty iterator, but I'm not sure whether that's semantically sound. We could then consider an iterator "not matching" if any of the target elements are left undefined, i.e. when the iterator is exhausted before all elements have received a value, similar to how default initialisers are handled today. Do you think that is viable?
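Bergi's "not matching" condition could be modeled as: the extractor's iterator fails when it cannot fill every target position with a defined value. A rough sketch (tryMatch is a hypothetical helper, not part of any proposal):

```javascript
// Treat an extractor result as "not matching" when it cannot fill all
// target positions with defined values (hypothetical helper):
function tryMatch(iterable, arity) {
  const values = [...iterable].slice(0, arity);
  const matched = values.length === arity && values.every(v => v !== undefined);
  return matched ? values : null;  // null signals "no match"
}

tryMatch([1, 2, 3], 3);       // [1, 2, 3]
tryMatch([], 2);              // null: an empty iterator fails to match
tryMatch([1, undefined], 2);  // null: a target would be left undefined
```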

Bergi

# Bergi (9 years ago)

Domenic Denicola schrieb:

Well, the spec says they are ordered, so I'm not sure where you're getting that from.

Yes, the spec defines an order to make iteration predictable and consistent, but that doesn't mean anyone uses a Map as an ordered structure. I would consider

 new Map(Object.entries({a: 1, b: 2}))

and

 new Map(Object.entries({b: 2, a: 1}))

to be equivalent for all purposes of an algorithm that uses commutative operators.
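The two maps above behave identically under key lookup while iterating in different orders, which is exactly why positional destructuring of entries is fragile:

```javascript
// Same lookups, different iteration order:
const m1 = new Map(Object.entries({a: 1, b: 2}));
const m2 = new Map(Object.entries({b: 2, a: 1}));

const sameLookups = m1.get('a') === m2.get('a') && m1.get('b') === m2.get('b');
const keys1 = [...m1.keys()];  // ['a', 'b']
const keys2 = [...m2.keys()];  // ['b', 'a']
```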

Bergi

# Andreas Rossberg (9 years ago)

I think the example actually reveals a deeper issue with the motivation: the desire to destructure maps like here is rooted in a category error. Destructuring is designed to apply to objects from the program domain, while maps are typically meant to encode data from the problem domain.

Or, in other words: destructuring is only useful when you know the keys at programming time (i.e., statically). But if that is the case, there is rarely a good reason to use a map.

# Samuel Hapák (9 years ago)

On 21.7.2015, at 17:19, Domenic Denicola <d at domenic.me> wrote:

From: Samuel Hapák [mailto:samuel.hapak at vacuumapps.com]

Could you please explain it on example?

Let’s say I have

let book = Map{author: {name: “John”, surname: “Doe”}, birthdate: “10-10-1990”};

How would you extract birthdate? How would you extract name?

Your syntax here is invalid in at least four different ways. If this is still confusing, you may wish to turn to StackOverflow for clarification, instead of es-discuss.

I apologize for the syntax mistakes I made. I totally understand you receive tons of proposals from people who don’t bother to learn JavaScript. I understand that this pissed you off and I am sorry for that. My bad.

Here is the valid syntax:

let book = new Map([['author', new Map([['name', 'John'], ['surname', 'Doe']])], ['birthdate', '10-10-1990']]);
let book2 = new Map([['birthdate', '10-10-1990'], ['author', new Map([['name', 'John'], ['surname', 'Doe']])]]);

Now, I wish to extract birthdate and name. I can't rely on the order of the elements, because the code should accept any book of the same shape; let's say the ordering of the items in the map is not part of the contract. So, to clarify: I want to write a function that accepts either book or book2 and extracts birthdate and name. The code should work with both.

I feel that your solution is unable to achieve this, but I may be mistaken.
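The order-independent extraction Samuel asks for can be done today with explicit key access, at the cost of losing destructuring syntax (getBookInfo is a hypothetical helper; plain Maps used here):

```javascript
// Extracting by key rather than by position works regardless of entry order:
function getBookInfo(book) {
  return {
    name: book.get('author').get('name'),
    birthdate: book.get('birthdate'),
  };
}

const book = new Map([
  ['author', new Map([['name', 'John'], ['surname', 'Doe']])],
  ['birthdate', '10-10-1990'],
]);
const book2 = new Map([
  ['birthdate', '10-10-1990'],
  ['author', new Map([['name', 'John'], ['surname', 'Doe']])],
]);
// getBookInfo(book) and getBookInfo(book2) both yield
// { name: 'John', birthdate: '10-10-1990' }
```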

Thanks!

Samuel

# Samuel Hapák (9 years ago)

On 21.7.2015, at 20:16, Andreas Rossberg <rossberg at google.com> wrote:

Or, in other words: destructuring is only useful when you know the keys at programming time (i.e., statically). But if that is the case, there is rarely a good reason to use a map.

Actually, the main motivation is to use Immutable.Map (facebook.github.io/immutable-js).

All examples shown here used only keys known statically (at programming time). I just mentioned that it could be nice to allow non-string keys, and that we could reuse the syntax that currently works for "dynamic" keys. But that is definitely not the core of the discussion here.

Samuel

# Matúš Fedák (9 years ago)

I think the example actually reveals a deeper issue with the motivation: the desire to destructure maps like here is rooted in a category error. Destructuring is designed to apply to objects from the program domain, while maps are typically meant to encode data from the problem domain. Or, in other words: destructuring is only useful when you know the keys at programming time (i.e., statically). But if that is the case, there is rarely a good reason to use a map. /Andreas

  • Most programs acquire, transform, store, search, manage, and transmit data
  • Data is raw, immutable information
  • Many languages turn data into something much more elaborate, with types, 'methods', etc.; OO especially conflates process constructs and information constructs

(Rich Hickey, the author of the Clojure programming language: youtu.be/VSdnJDO-xdg?t=811)

What is the difference between an object from the program domain and data from the problem domain? If I get JSON (from some service) and statically know what keys it has, I can destructure it. But if I convert the JSON to an immutable structure (or to a Map), I somehow lose this ability. That's strange, since I didn't change the abstraction, only the actual implementation of the underlying data structure.

// this is possible because JSON parses into object- and array-like structures
const {name, time} = getJsonMap();
// this is not possible because now the data is stored as an Immutable.Map
const {name, time} = Immutable.fromJS(getJsonMap());

// but with an array, this is currently possible
const [first, second] = getJsonArray();
// and this is also possible, because array destructuring works with
// iterables, and an Immutable.List can be iterated
const [first, second] = Immutable.fromJS(getJsonArray());

To sum it up, I would be really happy to see a generalization of object destructuring. I don't have a strong opinion between the original proposal and the Scala-flavoured one. I consider the array proposal general enough (since your custom object can support it). The only debatable advantage of the Scala proposal is that you don't lose the ability to destructure the objects themselves if they define custom destructuring logic.

// Original proposal
let {a, b} = new Map([["a", 2], ["b", 3]]); // this will be completely ok
let {size} = new Map([["a", 2], ["b", 3]]); // this will not be possible
// Scala proposal
let Map({a, b}) = new Map([["a", 2], ["b", 3]]); // this will be completely ok
let {size} = new Map([["a", 2], ["b", 3]]); // this will also be doable

However, I don't see a point in having the ability to destructure functions from Map or Immutable.Map, or from other primitives that decide to implement custom object-destructuring logic.

Maty

# tomas kulich (9 years ago)

This proposal is great!

There are already a lot of valid use cases where using an ES6 Map or Immutable.Map (instead of a plain Object) makes perfect sense, so it would be really sad if one could not easily destructure them.

By the way, what's the reason for having extensible destructuring on (general) iterables, but only on one very specific map-like structure (Object)? It seems inconsistent at the least.

# Samuel Hapák (9 years ago)

So, do you still have objections against this proposal? Could we summarize them?

@Andreas, do you still think that there is category error involved?

Thanks,

Samuel

# Andreas Rossberg (9 years ago)

On 31 July 2015 at 20:09, Samuel Hapák <samuel.hapak at vacuumapps.com> wrote:

So, do you still have objections against this proposal? Could we summarize them?

@Andreas, do you still think that there is category error involved?

If you want to overload the existing object pattern syntax, then yes, definitely. I strongly disagree with that: it breaks regularity and substitutability (you cannot pass a map where a generic object is expected).

As for a more Scala-like variant with distinguished syntax, I'm fine with that semantically. But I still don't buy the specific motivation with maps. Can you give a practical example where you'd want to create a map (immutable or not) for something that is just a record of statically known shape? And explain why you need to do that? Surely it can't be immutability, since objects can be frozen, too.

To be clear, I wouldn't reject such a feature outright. In fact, in a language with abstract data types, "views", as they are sometimes called in the literature, can be valuable. But they are also notorious for their high complexity cost versus the minor convenience they typically provide. In particular, it is no shorter to write

let {a: x, b: y} = unMap(map)

as you can today, than it would be to move the transformation into the pattern:

let Map({a: x, b: y}) = map

So I encourage you to come up with more convincing examples in the JavaScript context. Short of that, I'd argue that the complexity is not justified, at least for the time being.

# Isiah Meadows (9 years ago)

Damnit...forgot to fix the subject.

# Andreas Rossberg (9 years ago)

On 5 August 2015 at 09:27, Isiah Meadows <impinball at gmail.com> wrote:

Damnit...forgot to fix the subject.

On Wed, Aug 5, 2015 at 3:20 AM, Isiah Meadows <impinball at gmail.com> wrote:

Wait... this got me thinking. The proposal itself doesn't bring along a lot of merit, but it seems like it could be a great stepping stone to a limited pattern-matching syntax. This would probably be a little more justifiable, IMHO, than merely a custom destructuring syntax. Maybe something like this:

I intentionally did not bring up pattern matching. That is indeed what views are usually wanted for. But then you need to be much more careful to design a mechanism that avoids re-transforming the scrutinee for every tested case, because that would be very costly. For that reason, I fear that the feature as proposed would interact badly with any future pattern-matching mechanism, in the sense that it would encourage very costly usage that cannot be optimised.

# Isiah Meadows (9 years ago)
# Samuel Hapák (9 years ago)

Thank you for your reply Andreas!

So, let's split this discussion into two parts:

i) Whether there is a good use case for non-standard data structures like Immutable.js in place of standard Objects.

ii) If so, how to proceed with simplifying destructuring for these data structures.

Let's discuss the i) first.

On 4.8.2015, at 14:26, Andreas Rossberg <rossberg at google.com> wrote:

As for a more Scala-like variant with distinguished syntax, I'm fine with that semantically. But I still don't buy the specific motivation with maps. Can you give a practical example where you'd want to create a map (immutable or not) for something that is just a record of statically known shape? And explain why you need to do that? Surely it can't be immutability, since objects can be frozen, too.

Some people in the FRP community believe that using immutable values is superior to mutating values. There are lots of resources online on why functional programmers prefer immutable values; one of them is Rich Hickey's famous talk: www.infoq.com/presentations/Value-Values

But we've got Object.freeze(), right? It should be the remedy to all our problems, right? Well, nope.

The reason people use libraries like facebook/immutable-js (7760 stars) or swannodette/mori (1692 stars) is not object freezing. Actually, they don't freeze objects at all! They are about manipulating immutable data easily and efficiently.

The following code in traditional mutable programs:


let book = {title: "Harry Potter", author: {name: {first: "Joanne", last: "Rowling"}, age: 49}}

// It was J. K. Rowling's birthday at July 31st, let's increase the age

book.author.age++

is hard to do when you deeply freeze the structure. But you can introduce helpers for the task:

let book = deeplyFreeze({title: "Harry Potter", author: {name: {first: "Joanne", last: "Rowling"}, age: 49}})

// We use a convenience updateIn helper that clones the frozen structure and creates a new, updated one:
book = updateIn(book, ['author', 'age'], (age) => age + 1)

// results in
// deeplyFreeze({title: "Harry Potter", author: {name: {first: "Joanne", last: "Rowling"}, age: 50}})

That was nice, but slooooooow. :( Cloning objects on each tiny change is hardly efficient.
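For concreteness, the deeplyFreeze/updateIn helpers described above might look like this naive clone-on-write sketch (illustrative only; the cloning on every change is exactly the slow part being criticized):

```javascript
function deeplyFreeze(obj) {
  for (const value of Object.values(obj)) {
    if (typeof value === 'object' && value !== null) deeplyFreeze(value);
  }
  return Object.freeze(obj);
}

function updateIn(obj, [key, ...rest], fn) {
  // Rebuild (and re-freeze) every object along the path
  const updated = rest.length === 0 ? fn(obj[key]) : updateIn(obj[key], rest, fn);
  return Object.freeze({ ...obj, [key]: updated });
}

let book = deeplyFreeze({ title: "Harry Potter",
  author: { name: { first: "Joanne", last: "Rowling" }, age: 49 } });
book = updateIn(book, ['author', 'age'], age => age + 1);
// book.author.age === 50, and every level is a fresh frozen object
```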

Fortunately, Phil Bagwell and Rich Hickey came up with the concept of structural sharing, which can make immutable (also known as persistent) data structures very efficient. It gives practically O(1) for both updates and lookups. This is a nice introductory post: hypirion.com/musings/understanding-persistent-vector-pt-1

The reason people use immutable.js and mori in JavaScript, and why all default data structures in Clojure are based on the Bagwell/Hickey algorithms, is that these libraries provide an efficient way to manipulate immutable data structures conveniently.

And there is a growing React community that would like to use immutable.js as the primary data structure in their programs: facebook.github.io/react/docs/advanced-performance.html#immutable-js-to-the-rescue

So, let me sum up:

There is a growing functional/reactive community among JavaScript programmers. One of the key building blocks of functional programming is efficient immutable data structures. These are not native in JavaScript.

Currently, the only drawback of using immutable data structures in JavaScript is how cumbersome they are to use. And part of the reason is that destructuring does not work, so we have to write:

const first = book.getIn(['author', 'name', 'first'])
const last = book.getIn(['author', 'name', 'last'])
const age = book.getIn(['author', 'age'])
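A minimal getIn over nested Maps, mirroring what Immutable.js provides natively (sketch only; the real library handles many more cases):

```javascript
function getIn(root, path) {
  return path.reduce(
    (node, key) => (node === undefined ? undefined : node.get(key)),
    root
  );
}

const book = new Map([
  ['author', new Map([
    ['name', new Map([['first', 'Joanne'], ['last', 'Rowling']])],
    ['age', 49],
  ])],
]);
getIn(book, ['author', 'name', 'first']);  // 'Joanne'
getIn(book, ['author', 'age']);            // 49
```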

In particular, it is no shorter to write

let {a: x, b: y} = unMap(map)

as you can today, than it would be to move the transformation into the pattern:

let Map({a: x, b: y}) = map

But this does not work for nested structures, like in the example above.

Have I made my motivations more clear now?

Thanks for reading this horribly long email:)

Cheers,

@samuha twitter.com/samuha (+421 949 410 148)

skype: samuelhapak

Check my availability at freebusy.io/[email protected]

# Isiah Meadows (9 years ago)

I meant "statically compiled" in the sense of AOT compiled, in the veins of C++, Java, TypeScript, etc.

# Isiah Meadows (9 years ago)

The alternative is proxies.

On Wed, Aug 5, 2015, 15:26 Samuel Hapák <samuel.hapak at vacuumapps.com> wrote:

Thank you for your reply Andreas!

So, let's split this discussion into two parts:

i) Whether there is a good use case for non-standard data structures like Immutable.js in place of standard Objects.

ii) If so, how to proceed with simplifying destructuring for these data structures.

Let's discuss the i) first.

On 4.8.2015, at 14:26, Andreas Rossberg <rossberg at google.com> wrote:

As for a more Scala-like variant with distinguished syntax, I'm fine with that semantically. But I still don't buy the specific motivation with maps. Can you give a practical example where you'd want to create a map (immutable or not) for something that is just a record of statically known shape? And explain why you need to do that? Surely it can't be immutability, since objects can be frozen, too.

Some people from FRP community believe that using non-mutable values is superior over using mutating values. There are lot of resources online why functional people prefer non-mutable values, one of them is famous Rich Hickeys' talk: www.infoq.com/presentations/Value-Values

But we've got Object.freeze(), right? It should be remedy to all our problems, right? Well, nope.

The reason, why people use libraries like:

facebook/immutable-js (7760 stars)

or

swannodette/mori (1692 stars)

is not object freezing. Actually, they don't freeze objects at all! They are about how to manipulate immutable data easily and efficiently.

Following code in traditional mutable programs:


let book = {title: "Harry Potter", author: {name: {first: "Joanne", last:
"Rowling"}, age: 49}}

// It was J. K. Rowling's birthday at July 31st, let's increase the age

book.author.age++

Is hard to do when you deeply freeze the structure. But you can introduce some helpers for the task:


let book = deeplyFreeze({title: "Harry Potter", author: {name: {first:
"Joanne", last: "Rowling"}, age: 49}})

// We use some convenience updateIn helper that clones the frozen structure
and creates new updated:

book = updateIn(book, ['author', 'age'], (age) => age + 1)

// results in

// deeplyFreeze({title: "Harry Potter", author: {name: {first: "Joanne",
last: "Rowling"}, age: 50}})

That was nice, but slooooooow:( Cloning objects on each tiny change. That's hardly efficient.

Fortunatelly, Phil Bagwell and Rich Hickey came up with cool concept of structural sharing that can make immutable (also known as persistent) data structures very efficient. It gives practically O(1) for both updates and lookups. This is nice introductory post:

hypirion.com/musings/understanding-persistent-vector-pt-1

The reason why people use immutable.js and mori in JavaScript and why all default data structures in Clojure are based on Bagwell/Hickey algorithms, is that these libraries provide efficient way to manipulate immutable datastructures in a convenient way.

And there is growin React community that would like to use immutable.js as primary data structures in their programs.

facebook.github.io/react/docs/advanced-performance.html#immutable-js-to-the-rescue

So, let me sum up:

There is a growing functional/reactive community amongst JavaScript programmers. Ones of key building blocks of functional programming are efficient immutable datastructures. These are not native in JavaScript.

Currently the only drawback of using immutable datastructures in javascript is how cumbersome it is to use them. And part of the reason is that destructuring does not work and we have to write:


const first = book.getIn(['author', 'name', 'first'])

const last = book.getIn(['author', 'name', 'last'])

const age = book.getIn(['author', 'age'])
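To illustrate the proposal's intent: destructuring would consult a well-known `Symbol.get` method instead of plain property access. Here is a rough, runnable sketch of that desugaring, using an ordinary symbol and a toy map class as stand-ins (both are assumptions for illustration, not part of the proposal text):

```javascript
// Stand-in for the proposed well-known symbol (the proposal calls it Symbol.get).
const GET = Symbol('get');

// A toy map-like structure that opts into extensible destructuring
// by defining the GET method.
class ToyMap {
  constructor(entries) { this._data = new Map(entries); }
  [GET](key) { return this._data.get(key); }
}

// Roughly what property lookup during destructuring would become:
// use obj[GET](key) when it exists, otherwise fall back to obj[key].
function destructureGet(obj, key) {
  return typeof obj[GET] === 'function' ? obj[GET](key) : obj[key];
}

const book = new ToyMap([
  ['author', new ToyMap([['age', 49]])],
]);

// With syntax support, the next two lines would read: let {author: {age}} = book
const author = destructureGet(book, 'author');
const age = destructureGet(author, 'age');
console.log(age); // 49
```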

In particular, writing

let {a: x, b: y} = unMap(map)

as you can today is no longer than it would be to move the transformation into the pattern:

let Map({a: x, b: y}) = map

But this does not work for nested structures, like the example above.

Have I made my motivations more clear now?

Thanks for reading this horribly long email:)

Cheers,

@samuha twitter.com/samuha (+421 949 410 148)

skype: samuelhapak

Check my availability at freebusy.io/[email protected]

# Samuel Hapák (9 years ago)

Proxies have the problem that you are unable to distinguish prototype properties from proxied properties. Imagine storing the key `filter` in an Immutable.Map. How would you distinguish it from the function `filter` defined on Immutable.Map.prototype?
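The ambiguity is easy to demonstrate with a toy map wrapped in a Proxy (a sketch with hypothetical names; Immutable.Map behaves analogously):

```javascript
// A toy map whose prototype defines a `filter` method,
// analogous to Immutable.Map.prototype.filter.
class MyMap {
  constructor(entries) { this._data = new Map(entries); }
  get(key) { return this._data.get(key); }
  filter(fn) { return new MyMap([...this._data].filter(fn)); }
}

// Proxy-based wrapper that forwards property reads to the map's entries,
// so that plain destructuring "works" on the map.
function destructurable(map) {
  return new Proxy(map, {
    get(target, prop) { return target.get(prop); },
  });
}

const m = destructurable(new MyMap([['filter', 'some value']]));

// The trap cannot tell whether the reader wants the stored key 'filter'
// or the prototype method; the method is now unreachable through the proxy.
const { filter } = m;
console.log(filter); // 'some value'
```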

Samuel

On 6.8.2015, at 20:35, Isiah Meadows <isiahmeadows at gmail.com> wrote:

The alternative is proxies.

