Tuesday, April 19, 2011

Models of Linear Logic: Length Spaces

One of the things I've wondered about is what the folk model of linear logic is. Basically, I've noticed that people use linear logic an awful lot, and they have very clear intuitions about the operational behavior they expect, but they are often not clear about what equational properties they expect to hold.

Since asking that question, I've learned about a few models of linear logic, which are probably worth blogging about. Morally, I ought to start by showing how linear algebra is a model of linear logic, but I'll set that aside for now, and start with some examples that will be more familiar to computer scientists.

First, recall that the general models of simply-typed functional languages are cartesian closed categories, and that the naive model of functional programming is "sets and functions". That is, types are interpreted as sets, terms are interpreted as functions between sets, and the cartesian closed structure corresponds to function spaces on sets.

Next, recall that the general models of intuitionistic linear logic are symmetric monoidal closed categories. So the question to ask is: what are some natural symmetric monoidal closed categories given by sets-with-structure and functions preserving that structure? There are actually many answers to this question, but one I like a lot, which I learned from a 1999 LICS paper by Martin Hofmann, is called "length spaces". It is very simple, and I find it quite beautiful.

To understand the construction, let's first set out the problem: how can we write programs that respect some resource bounds? For example, we may wish to bound the memory usage of a language, so that every definable program is size-bounded: the output of a computation is no bigger than its input.

Now, let's proceed in a really simple-minded way. A length space is a pair $(A, \sigma_A)$, where $A$ is a set and $\sigma_A : A \to \N$ is a size function. Think of $A$ as the set of values in a type, and $\sigma_A$ as a function which tells you how big each element is. These pairs will be the objects of our category of length spaces.
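To make this concrete, here is a small OCaml sketch of length spaces (my own illustration, not code from Hofmann's paper): an object is just a carrier type packaged with a size function. The names `length_space`, `nat_space`, and `list_space`, and the use of `int` for sizes, are all my own choices.

```ocaml
(* A length space over a carrier type 'a: the carrier together with a
   size function assigning a (natural number) size to each element. *)
type 'a length_space = { size : 'a -> int }

(* Example: natural numbers, sized by their value. *)
let nat_space : int length_space = { size = (fun n -> n) }

(* Example: lists, sized by their length plus the sizes of their elements. *)
let list_space (elt : 'a length_space) : 'a list length_space =
  { size = fun xs ->
      List.fold_left (fun acc x -> acc + 1 + elt.size x) 0 xs }
```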

Now, define a morphism $f : (A, \sigma) \to (B, \tau)$ to be a function on sets $A \to B$, with the property that $\forall a \in A.\; \tau(f\;a) \leq \sigma(a)$. That is, the size of the output is always less than or equal to the size of the input.
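Continuing the sketch, a morphism is just an ordinary function; the non-size-increasing condition is a side condition on it, which in code we can only spot-check on finitely many sample inputs. The helper `check_non_increasing` below is purely illustrative.

```ocaml
(* A morphism (A, sigma) -> (B, tau) is an ordinary function A -> B,
   subject to the side condition tau (f a) <= sigma a for all a. *)
type ('a, 'b) morphism = 'a -> 'b

(* Spot-check the side condition on a finite list of sample inputs. *)
let check_non_increasing (dom : 'a length_space) (cod : 'b length_space)
    (f : ('a, 'b) morphism) (samples : 'a list) : bool =
  List.for_all (fun a -> cod.size (f a) <= dom.size a) samples

(* Example: halving a natural number never increases its size. *)
let halve : (int, int) morphism = fun n -> n / 2
let _ = assert (check_non_increasing nat_space nat_space halve [0; 1; 5; 100])
```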

Here's the punchline: length spaces form a symmetric monoidal closed category.

Given two length spaces $(A, \sigma)$ and $(B, \tau)$, their monoidal product is $(A, \sigma) \otimes (B, \tau) = (A \times B, \fun{(a,b)}{\sigma(a) + \tau(b)})$. The unit is $I = (\{*\}, \fun{*}{0})$, and the associators and unitors are inherited from the cartesian product. The intuition here is that a strict pair requires resources equal to the sum of the resources of its components.
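In the OCaml sketch, the monoidal product and its unit are one-liners; only the size bookkeeping distinguishes the tensor from the ordinary product (again, these definitions are my own rendering of the construction above).

```ocaml
(* Tensor product: pairs, sized by the sum of the component sizes. *)
let tensor (a : 'a length_space) (b : 'b length_space) : ('a * 'b) length_space =
  { size = (fun (x, y) -> a.size x + b.size y) }

(* Unit: a one-element set whose sole element has size zero. *)
let unit_space : unit length_space = { size = (fun () -> 0) }
```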

The function space construction is pretty clever, too. The linear function space is $(B, \sigma) \multimap (C, \tau) = (B \to C, \fun{f}{\min\comprehend{a \in \N}{\forall b \in B.\;\tau(f\;b) \leq \sigma(b) + a}})$, where the carrier is restricted to those functions for which such a bound $a$ exists, so that the minimum is well-defined. The intuition here is that if you have a morphism $A \otimes B \to C$, then currying it should give a morphism $A \to (B \multimap C)$. However, the returned function may have free variables from $A$, so the size of the value it returns should be bounded by the size of its input $b \in B$ plus the size of its environment.
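In the sketch, the underlying operation of currying is the ordinary one; all the interest is in the size of the resulting closure, which is the least slack $a$ from the definition above. That least bound is not computable in general, so the helper below only approximates it over a finite list of samples (a hypothetical helper, for illustration only).

```ocaml
(* Approximate the size of f in B -o C over finitely many samples:
   the largest amount by which tau (f b) exceeds sigma b, clamped at 0. *)
let lolli_size_on_samples (b : 'b length_space) (c : 'c length_space)
    (f : 'b -> 'c) (samples : 'b list) : int =
  List.fold_left (fun acc x -> max acc (c.size (f x) - b.size x)) 0 samples

(* Currying at the level of underlying functions: a map out of a tensor
   A (x) B -> C gives a map A -> (B -o C); the size of the environment a
   pays for any extra output the closure may produce. *)
let curry (f : 'a * 'b -> 'c) : 'a -> ('b -> 'c) = fun a b -> f (a, b)
```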

We also have a cartesian product, corresponding to lazy pairs. We define $(A, \sigma) \times (B, \tau) = (A \times B, \fun{(a,b)}{\max(\sigma(a), \tau(b))})$. The intuition is that with a lazy pair, where we have the (linear) choice of either the first or the second component, we can bound the space usage by the maximum of the two alternatives.
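In the sketch, the cartesian ("with") product looks the same at the level of carriers; only the size function changes, from sum to maximum.

```ocaml
(* Cartesian ("with") product: pairs sized by the maximum component size,
   reflecting that a lazy pair is only ever used for one of its components. *)
let with_product (a : 'a length_space) (b : 'b length_space) : ('a * 'b) length_space =
  { size = (fun (x, y) -> max (a.size x) (b.size y)) }

(* The projections are morphisms, since each component's size is bounded
   by the maximum of the two. *)
let proj1 : ('a * 'b, 'a) morphism = fst
let proj2 : ('a * 'b, 'b) morphism = snd
```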

Note how pretty this model is -- you can think about functions in a totally naive way, and that's fine. We just carry around a little extra data to track size information, and we get an extensional model of an intensional property -- space usage. This also illustrates the general principle that increasing the observations you can make creates finer type distinctions.