Lambda calculus (1)

My previous blog post on functional programming was about continuations. That is a fairly advanced topic, and some readers might be missing the basics. One of those basics is the lambda calculus. When I first encountered the lambda calculus I didn’t quite get it. It all seemed kind of magical, that was for sure. I couldn’t make much use of it and it was far from practical. That was my mistake: the lambda calculus is not intended as a ‘real’ programming language, in the sense that for most tasks you would rather use something else.

A model for computation

When should you use the lambda calculus? We use the lambda calculus to formally reason about computation, because it is one of the simpler models of it. This leads us to the next question: what is computation? My first intuition is to think of a sum like 10 + 20 * 30. The ‘computation’ is performed by first multiplying twenty and thirty and then adding ten. We get a result in some canonical form, in this case 610. This is already close to the essence of computation: reducing to canonical form. When we see such an arithmetic sum, we immediately get the sense that it is not ‘done’. It can be reduced.

A sum is not yet a good model for computation. It still has too much in it. Multiplication and addition are very concrete operations; we want something more abstract. This is where we go from elementary to high school. We get a cool new toy called a function. We can now write things like f(x, y, z) = x + y * z. When we want to use a function we apply it to its arguments, in our case numbers: f(10, 20, 30) = 10 + 20 * 30. We have recovered our previous sum!
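As a sketch in Haskell (the language we will use below), with f and result as names of my own choosing:

```haskell
-- The high-school function from the text, written in Haskell.
f :: Int -> Int -> Int -> Int
f x y z = x + y * z

-- Applying f to concrete numbers recovers the original sum.
result :: Int
result = f 10 20 30  -- 610
```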

What we have shown with the function example are the three forms constituting the lambda calculus:

• variable
• abstraction
• application

The notation is slightly different from what you are used to in maths. We use a lambda (in our case a \) to bind our variables and an arrow -> instead of an equals sign. Here is the formal syntax definition:

t ::= x | \ x -> t | t t
v ::= \ x -> t

where t indicates terms and v indicates values. The vertical bar | means that we can choose any one of the syntactic alternatives. The only values in our language are abstractions (functions). Terms are all the forms we can construct with this syntax. Values are terms that have finished computing. Not all terms that have finished computing are values though; we will come back to this fact later. Notice that there is no notion of arithmetic operations anywhere. How can we ever use this to model our initial sum?
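As a side note, the grammar transcribes almost directly into a Haskell data type; the constructor names below are my own choice, not part of the calculus:

```haskell
-- t ::= x | \ x -> t | t t
data Term
  = Var String        -- variable: x
  | Lam String Term   -- abstraction: \ x -> t
  | App Term Term     -- application: t t
  deriving Show

-- v ::= \ x -> t : the only values are abstractions.
isValue :: Term -> Bool
isValue (Lam _ _) = True
isValue _         = False
```

Here isValue (Lam "x" (Var "x")) is True, while isValue (Var "x") is False: a lone variable cannot be reduced any further, yet it is not a value.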

Getting sum back

We can recover our sum if we want, and that is what we will do here. First, we will gain some insight by using Haskell to produce some terms. Haskell has built-in support for lambdas. We can even mix them with our other functions, but we won’t do that here. Let’s take a look at one of my personal favorites, the identity function:

id' = \ x -> x

As we can see from the definition, this abstraction returns the argument it is passed. If we applied the identity function to the number ten, we would get the number ten back. When you apply a function to its arguments, the arguments are substituted into the function’s body. For example, we can apply our new friend the identity function to the argument 42:

fortytwo = id' 42

We could continue with some more interesting terms, but for those I recommend a textbook like the one by Benjamin Pierce (Types and Programming Languages). We will now consider how to represent numbers with our three basic forms. We will be concerned with just the natural numbers [0..]; negative numbers only complicate things.

To define all natural numbers we need two things: the constant zero, and the successor operator. We abstract over both of these in the lambda calculus. We do this because we don’t have a ‘real’ representation of them; we only have our three forms. Without further ado, here is zero:

zero = \ s z -> z

Here s is our successor operator and z is the zero abstraction. It is kind of weird that the definition of zero has zero in it. You can think of our zero as the abstraction that, when applied to instances of the successor operator and zero, makes zero by ignoring the successor operator. Let’s do one more:

one = \ s z -> s z
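If the definitions still feel abstract, we can interpret these numerals back into Haskell’s Int by picking a concrete successor and zero; toInt is a helper of my own, not part of the calculus:

```haskell
zero = \ s z -> z
one  = \ s z -> s z

-- Interpret a numeral by instantiating s with (+ 1) and z with 0.
toInt n = n (+ 1) (0 :: Int)

-- toInt zero is 0, and toInt one is 1.
```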

Does that make more sense? Our original goal was to add two numbers. One added to one should be two, so we want something like one + one = \ s z -> s (s z). Do you see what happens? One of the ones gets plugged into the z place of the other. This is something we can do!

add = \ m n s z -> m s (n s z)

We are almost there. We now use our addition abstraction to create our sum:

sum3 = \ x y z -> add (add x y) z
sum3' = \ x y z -> x `add` y `add` z

The second version uses Haskell’s backtick syntax to write add infix; both compute the same sum.
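To check that everything fits together, here is a self-contained sketch using the toInt helper of my own from before (it interprets a numeral by choosing (+ 1) as successor and 0 as zero):

```haskell
-- Repeating the definitions so the snippet is self-contained.
zero = \ s z -> z
one  = \ s z -> s z
add  = \ m n s z -> m s (n s z)

sum3  = \ x y z -> add (add x y) z
sum3' = \ x y z -> x `add` y `add` z

-- Interpret a Church numeral as an ordinary Int (helper of my own).
toInt n = n (+ 1) (0 :: Int)
```

Here toInt (add one one) is 2, and both sum3 and sum3' applied to three ones give 3.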