Speculations on Erlang (was: Weak callbacks?)

# Jason Orendorff (12 years ago)

(factoring out this part of the conversation because it seems like a bit of a sidetrack)

An exchange between Mark and me:

Why do you believe manual deallocation decisions will be easier in distributed systems than they are locally?

I don't. That's why I cited several examples of systems that require neither fine-grained manual deallocation nor distributed GC.

I didn't say "fine-grained". Erlang requires manual deallocation of processes.

I don't think it does in practice, any more than UNIX does. How does a UNIX admin (or the kernel) decide when to kill a process?

Also, you did specify "untrusted". Distributed Erlang does not qualify, exactly because pids are forgeable.

That's fair, but let me clarify what I mean.

It's true that Erlang doesn't use an ocap model for security. An Erlang process that doesn't check auth on incoming messages must be protected by some sort of wrapper that does. Typically some front-end code plus a firewall.
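A minimal sketch of that factoring, in JavaScript since this is es-discuss: an inner handler that trusts its input, fronted by a wrapper that checks auth on every incoming message. All names here (`makeAuthWrapper`, the token check) are illustrative stand-ins for whatever the front-end code actually does, not any real API.

```javascript
// Hypothetical sketch: the inner handler does no checking of its own,
// so a wrapper must reject unauthorized messages at the boundary.
function makeAuthWrapper(inner, validTokens) {
  return function handle(msg) {
    if (!validTokens.has(msg.token)) {
      return { error: "unauthorized" }; // rejected before inner sees it
    }
    return inner(msg.payload);          // inner code trusts its input
  };
}
```

The point is only that the trust boundary lives in the wrapper (plus, in practice, a firewall), not in the object model itself.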

This particular factoring of responsibilities is not Erlang-specific. The Web is chock full of JS code (client and server) doing remote API calls. These systems generally don't use object capabilities. They don't use any distributed GC equivalent either, except perhaps in the sense of "leases": server-side objects often live in sessions that expire if unused for some period of time.

(Offhand, I would expect leases/sessions to continue being good enough even if many systems did migrate to distributed objects and capabilities. I don't think distributed GC was anyone's favorite feature of RMI.)
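The lease/session idea above can be sketched concretely: a server-side table whose entries expire if unused, with expiry as the server's unilateral decision rather than a distributed GC handshake. This is an illustrative toy (the names `SessionTable`, `sweep`, and the explicit clock are assumptions, not any real framework); a real server would drive `sweep` from a timer.

```javascript
// Toy session table with lease-style expiry. Time is passed in
// explicitly so the behavior is deterministic.
class SessionTable {
  constructor(ttl) {
    this.ttl = ttl;            // idle time after which a session expires
    this.sessions = new Map(); // id -> { obj, lastUsed }
  }
  put(id, obj, now) {
    this.sessions.set(id, { obj, lastUsed: now });
  }
  // Any use of a session renews its lease.
  get(id, now) {
    const s = this.sessions.get(id);
    if (!s) return undefined;
    s.lastUsed = now;
    return s.obj;
  }
  // Drop sessions idle longer than ttl -- no client cooperation needed.
  sweep(now) {
    for (const [id, s] of this.sessions) {
      if (now - s.lastUsed > this.ttl) this.sessions.delete(id);
    }
  }
}
```

A client that goes silent simply loses its session; one that keeps calling keeps it alive. No reference-counting protocol crosses the wire.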

What do you mean then by "strong reference"? If Erlang pids are not strong references, then I don't understand what you are saying.

I just meant "strong reference" in the usual sense: en.wikipedia.org/wiki/Strong_reference

A pid doesn't keep the referred-to process alive. A pid has no effect at all on the process it addresses, local or remote.
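A toy model of that claim, under the simplifying assumption that a node is just a pid-to-process map (the names `Node`, `spawn`, `kill` are illustrative, not Erlang's actual runtime): holding a pid anywhere confers no liveness on the process it names, and a send to a dead pid is silently discarded, much as in Erlang.

```javascript
// Toy pid registry: the node owns the processes; a pid held elsewhere
// is just a number with no effect on liveness.
class ToyNode {
  constructor() {
    this.procs = new Map(); // pid -> { handler }
    this.nextPid = 1;
  }
  spawn(mailboxHandler) {
    const pid = this.nextPid++;
    this.procs.set(pid, { handler: mailboxHandler });
    return pid; // returning the pid keeps nothing alive
  }
  send(pid, msg) {
    const p = this.procs.get(pid);
    if (p) p.handler(msg); // dead pid: message silently dropped
  }
  kill(pid) { this.procs.delete(pid); }
  isAlive(pid) { return this.procs.has(pid); }
}
```

Every outstanding copy of the pid dangles harmlessly once the process is killed; nothing like a strong reference exists here to prevent it.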

# Mark S. Miller (12 years ago)

On Tue, Nov 12, 2013 at 11:23 AM, Jason Orendorff <jason.orendorff at gmail.com> wrote:

I don't think it does in practice, any more than UNIX does. How does a UNIX admin (or the kernel) decide when to kill a process?

Good question. How do they?

A pid doesn't keep the referred-to process alive. A pid has no effect at all on the process it addresses, local or remote.

Same thing with a remote "strong" reference in any of the distributed GC systems I mention. The owner of the machine on which a hosting process or vat is running can still shut down the vat, whether it hosts objects remotely referred to or not. These remote references do not "keep" the object alive. The distributed GC protocols are merely a way for the client to inform the host, if it wishes, that it is no longer interested. It is up to the client whether it wishes to tell the host about such a lack of interest. It is up to the host how to react to such reports of lack of interest.
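The "report of disinterest" idea can be sketched as advisory per-object reference tracking on the host. This is a simplified sketch, not the actual protocol of any of the systems Mark mentions; the names (`exportTo`, `deref`, `revoke`) and the collect-at-zero policy are assumptions made up for illustration.

```javascript
// Host-side view: track which remote clients claim interest in each
// exported object. Deref messages are advisory; the host chooses both
// whether to honor them and whether to drop objects unilaterally.
class ToyHost {
  constructor() {
    this.exports = new Map(); // id -> { obj, clients: Set }
  }
  exportTo(id, obj, client) {
    const e = this.exports.get(id) || { obj, clients: new Set() };
    e.clients.add(client);
    this.exports.set(id, e);
  }
  // A client informing the host it is done -- purely optional on the
  // client's side, purely advisory on the host's.
  deref(id, client) {
    const e = this.exports.get(id);
    if (!e) return;
    e.clients.delete(client);
    // One possible host policy: collect once no client claims interest.
    if (e.clients.size === 0) this.exports.delete(id);
  }
  // The host may also drop the object at any time, disrupting clients.
  revoke(id) { this.exports.delete(id); }
}
```

Nothing here forces the host to wait for the last `deref`; `revoke` is always available, which is exactly why these remote references don't "keep" anything alive.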

A host that takes down an object or a vat prior to such reports of disinterest may be disrupting clients, but that's its decision. Likewise with terminating an Erlang process. The difference is only whether there's a standard way for a client to inform a host that it should no longer worry about disrupting this client if it takes down a particular object.

So I put scare quotes around "strong" above because I'm not sure whether you'd call them strong. I don't see any basis for considering them to be "stronger" than an Erlang pid.

# Brendan Eich (12 years ago)

Mark S. Miller wrote:

Good question. How do they?

Old-style Unix admin involves dedicating process seats, doing shallow supervision-tree-style management, monitoring and pagers and pain.

New-style devops cannot lose the pager, but the developer carries it (not the admin). Lots of automation helps with resource monitoring, provisioning, and change management.

There's also the OOM killer, which guns down what looks like the guiltiest and most bloated among processes when VM (RAM + swap) runs low.

No silver bullets. Static allocation still gives the best predictability. Some things never change!