{-# LANGUAGE FlexibleInstances #-}

Having An Effect

This blog post will be on denotational semantics, the sister of operational semantics, and how to make it more modular.

Denotational semantics is a compositional mapping from expressions in a language to something else. (Oleg Kiselyov, 2017)

What that something else is, the author of the denotational semantics may decide. It is frequently some collection of mathematical objects. We often call this something else ‘values’. The other part of the definition that needs explanation is the compositional mapping: the denotation of an expression composed of other expressions relies only on the denotations of those subexpressions. We will see an example of this property shortly.

Now I will give a small example of a denotational semantics for a toy language. Its only use is explaining things. Here it is:

data Exp = EInt Int | EAdd Exp Exp deriving Show

In order to give these toy expressions a denotational semantics we will need a domain of values to map them to. In this case the domain will be a Haskell data type.

data Dom = DInt Int deriving Show

We want to create a mapping from Exp to Dom. A noisy functional programmer will immediately start yelling “function!”, but this is not always the case, as we will see later on. This compositional mapping function often bears the name eval, but for clarity we will call it denote. It can also be called an interpreter. I will use the words denoter and interpreter interchangeably.

denote :: Exp -> Dom
denote (EInt i) = DInt i
denote (EAdd e1 e2) =
  case denote e1 of
    DInt i1 ->
      case denote e2 of
        DInt i2 ->
          DInt (i1 + i2)

The first definition is trivial: integer expressions map to integer values. What this tells us is that expressions that cannot be reduced (these are, confusingly, also sometimes called values) need a value in the domain of the denotational semantics. The second definition shows how the compositional property works in practice: we can define denote recursively. Our denotational semantics has no types, and that is fine for now, but if we were to add boolean expressions to our language we would also need a DError constructor in our domain. This is because we do not allow booleans to be added to each other (some languages, for example C, do allow this).

There is another benefit of denotational semantics: we can prove that two expressions are denotationally equivalent. When two expressions are denotationally equivalent, we have grounds for replacing one with the other. This can be used by compilers to optimize code. In our toy language we can interchange EInt 42, EAdd (EInt 13) (EInt 29), and EAdd (EInt 29) (EInt 13) without losing any ‘meaning’.
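Since Dom can be given an Eq instance, we can check such equivalences directly. Here is a small sketch; the definitions from above are repeated so the snippet stands alone, with Eq added to Dom:

```haskell
data Exp = EInt Int | EAdd Exp Exp deriving Show
data Dom = DInt Int deriving (Show, Eq)  -- Eq added so denotations can be compared

denote :: Exp -> Dom
denote (EInt i) = DInt i
denote (EAdd e1 e2) =
  case (denote e1, denote e2) of
    (DInt i1, DInt i2) -> DInt (i1 + i2)

-- All three expressions denote DInt 42, so they are interchangeable.
equivalent :: Bool
equivalent = all (== denote (EInt 42))
  [ denote (EAdd (EInt 13) (EInt 29))
  , denote (EAdd (EInt 29) (EInt 13)) ]
```

Evaluating equivalent gives True, which is exactly the license the compiler needs to swap one expression for another.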

If we want to extend our language with more expressions, we will need to write a new expression data-type, or add these expressions to our existing data-type. In either case we will need to recompile our code. For now this is just an artifact of the Haskell embedding I proposed. If we wrote our semantics on paper, we could just refer to the page of our previous definition and put some ellipses somewhere. This holds for now, but there are semantics which break even the ellipses. We will solve this unstable denotation problem in Haskell by using typeclasses.

class EArith d where
  int :: Int -> d
  add :: d -> d -> d

We have defined an interface EArith that depends on the type variable d which represents the type of our domain. We can now give our compositional mapping by declaring an instance with our previous Dom data-type (NOT a function):

instance EArith Dom where
  int = DInt
  add (DInt i1) (DInt i2) = DInt (i1 + i2)

The compositionality of the denotational semantics now becomes even more apparent. Let’s prove our point by extending our expressions and semantics. We first define the typeclass:

class EMul d where
  mul  :: d -> d -> d
instance EMul Dom where
  mul (DInt i1) (DInt i2) = DInt (i1 * i2)

We can now write expressions which use int, add, and mul; they will all have type Dom.
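To make this concrete, here is a sketch of an expression written only against the EArith and EMul interfaces (classes and instances repeated so the snippet stands alone; expr is my own name). Instantiating it at Dom computes its denotation:

```haskell
data Dom = DInt Int deriving Show

class EArith d where
  int :: Int -> d
  add :: d -> d -> d

class EMul d where
  mul :: d -> d -> d

instance EArith Dom where
  int = DInt
  add (DInt i1) (DInt i2) = DInt (i1 + i2)

instance EMul Dom where
  mul (DInt i1) (DInt i2) = DInt (i1 * i2)

-- The expression is polymorphic in the domain: any pair of instances will do.
expr :: (EArith d, EMul d) => d
expr = add (int 10) (mul (int 20) (int 30))
```

Asking for expr :: Dom evaluates it to DInt 610; a different instance would interpret the very same expression into a different domain.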

If we were to add boolean values to our expressions, we would also have to extend our domain, but this is a trivial change, and more importantly we don’t have to rewrite our denotations. A more fundamental change arises when we add state to our expression language: it requires changing all of our denoters, which was exactly what we were trying to avoid. Here is the culprit:

class GlobalIntState d where
  getG :: d
  putG :: d -> d

If we try to declare an instance for these expressions, we get stuck. We need some state variable to decide what integer to get for getG. Represented in a type:

type DomIntState = Int -> (Dom,Int)

Now we can declare our instance:

instance GlobalIntState DomIntState where
  getG   = \ s -> (DInt s,s)
  putG e = \ s -> case e s of
    (DInt s',_) -> (DInt s',s')

To solve our domain problems we will pick a new domain. To be more precise, we put a nice layer on top. Our new domain will consist of computations, which are made up of two parts: values and requests. Kiselyov uses a specific abstraction for this, which nicely shows the relation between our previous work and the improved version. The function with type DomC -> Comp is called a continuation.

data Comp = Done DomC | Req ReqT (DomC -> Comp)

data DomC = DCInt Int
data ReqT = ReqState StateReq
data StateReq = Get | Put Int

We then find that Comp is a monad:

bind :: Comp -> (DomC -> Comp) -> Comp
bind (Done x) k    = k x
bind (Req r k1) k2 = Req r (\ x -> bind (k1 x) k2)

We can finally write our denotations:

instance EArith Comp where
  int = Done . DCInt
  add e1 e2 = bind e1 $ \ x1 -> case x1 of
    DCInt a -> bind e2 $ \ x2 -> case x2 of
      DCInt b -> Done (DCInt (a + b))
instance EMul Comp where
  mul e1 e2 = bind e1 $ \ x1 -> case x1 of
    DCInt a -> bind e2 $ \ x2 -> case x2 of
      DCInt b -> Done (DCInt (a * b))
instance GlobalIntState Comp where
  getG   = Req (ReqState Get) Done
  putG e = bind e $ \ x -> case x of
    DCInt x -> Req (ReqState (Put x)) Done

We will still have to write a handler for state. The drawback of this approach is that we cannot use do-notation, because Comp is not parameterized over a type variable: it can only use DomC as instructions. There is also a relation between ReqT and the continuation supplied to Req. Currently we can pair any ReqT with any function, which is more than we want to allow.
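Here is a sketch of what such a state handler could look like (handleState is my own name; the definitions from above are repeated so the snippet stands alone). It threads an integer state through the computation, answering Get and Put requests:

```haskell
data DomC = DCInt Int deriving Show
data StateReq = Get | Put Int
data ReqT = ReqState StateReq
data Comp = Done DomC | Req ReqT (DomC -> Comp)

bind :: Comp -> (DomC -> Comp) -> Comp
bind (Done x) k    = k x
bind (Req r k1) k2 = Req r (\x -> bind (k1 x) k2)

-- Interpret state requests, returning the final value and final state.
handleState :: Int -> Comp -> (DomC, Int)
handleState s (Done x)                    = (x, s)
handleState s (Req (ReqState Get) k)      = handleState s (k (DCInt s))
handleState _ (Req (ReqState (Put s')) k) = handleState s' (k (DCInt s'))
```

For example, handleState 0 (Req (ReqState (Put 41)) (\_ -> Req (ReqState Get) Done)) yields (DCInt 41, 41): the Put updates the state, and the Get reads it back.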


This blog post is mostly a transcription of the following lecture by Kiselyov. I highly recommend watching it. He also goes into the history of the subject, and there are a lot of golden nuggets of knowledge to be found.

[In English] Functional Thursday #Special – Having an Effect by Oleg Kiselyov, https://www.youtube.com/watch?v=GhERMBT7u4w

I also used some concepts from this web page. The whole blog post is essentially a freer-monad view.

The Operational Monad Tutorial https://apfelmus.nfshost.com/articles/operational-monad.html

Lambda for all

Lambda calculus (1)

My previous blog post on functional programming was about continuations. That is a fairly advanced topic, and some readers may be missing some of the basics, such as the lambda calculus. When I first encountered the lambda calculus I didn’t quite get it. It all seemed kind of magical, and I couldn’t make much use of it; it was far from practical. This was my mistake. The lambda calculus is not intended as a ‘real’ programming language, in the sense that you would in most cases rather use something else.

A model for computation

When should you use the lambda calculus? We use the lambda calculus to formally reason about computation, because it is one of the simpler models of it. This leads us to the next question: what is computation? My first intuition is to think of a mathematical sum like 10 + 20 * 30. The ‘computation’ is performed by first multiplying twenty and thirty and then adding ten. We get a result in some canonical form, in this case 610. This is already close to the essence of computation: reducing to canonical form. When we see such an arithmetic sum, we immediately get the sense that it is not ‘done’. It can be reduced.

A sum is not yet a good model for computation. It still has too much in it. Multiplication and addition are very concrete operations. We want something more abstract. This is where we go from elementary to high school. We get a cool new toy called a function. We can now write things like f(x, y, z) = x + y * z. When we want to use a function we call it on its arguments, in our case numbers: f(10, 20, 30) = 10 + 20 * 30. We have recovered our previous sum!

What we have shown with the function example are the three forms constituting the lambda calculus:

  • variable
  • abstraction
  • application

The notation is slightly different from what you are used to in maths. We use a lambda (in our case a \) to capture our variables and an arrow -> instead of an equals sign. Here is the formal syntax definition:

t ::= x | \ x -> t | t t
v ::= \ x -> t

where t indicates terms and v indicates values. The upright line | means that we can choose any one of the syntactic definitions. The only values in our language are abstractions (functions). Terms indicate all the forms we can construct with this syntax. Values are terms that have finished computing. Not all terms that have finished computing are values though; we get back to this fact later. Notice that there is no notion of arithmetic operations anywhere. How can we ever use this to model our initial sum?

Getting sum back

We can recover our sum if we want, and that is what we will do here. First, we will gain some insight by using Haskell to produce some terms. Haskell has built in support for lambdas. We can even mix them with our other functions, but we won’t do that here. Let’s take a look at one of my personal favorites, the identity function:

id' = \ x -> x

As we can see from the definition, this abstraction returns the argument it is passed. If we applied the identity function to the number ten, we would get the number ten back. When you apply a function to its arguments, the arguments are substituted in the function’s body. For example, we can apply our new friend the identity function to the argument 42:

fortytwo = id' 42

We can continue with some more interesting terms, but I recommend reading a textbook like the one by Benjamin Pierce (Types and Programming Languages). We will now consider how to represent numbers with our three basic forms. We will be concerned with just the natural numbers [0..]. Negative numbers only complicate things.

To define all natural numbers we need two things: the constant zero, and the succession operator. We abstract over both these things in the lambda calculus. We do this because we don’t have a ‘real’ representation of these things, we only have our three forms. Without further ado, here is zero:

zero = \ s z -> z

Here s is our succession operator and z is the zero abstraction. It is kind of weird that the definition of zero has zero in it. You can think of our zero as the abstraction that, when applied to instances of the succession operator and zero, makes zero by ignoring the succession operator. Let’s do one more:

one = \ s z -> s z

Does that make more sense? Our original goal was to add two numbers. One added to one should be two, we get something like one + one = \ s z -> s (s z). You see what happens? The one gets plugged into the z place of the other one. This is something we can do!

add = \ m n s z -> m s (n s z)

We are almost there. We now use our addition abstraction to create our sum:

sum3 = \ x y z -> add (add x y) z
sum3' = \ x y z -> x `add` y `add` z
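We can check these definitions by instantiating s with the successor function (+ 1) and z with 0, turning a Church numeral back into an ordinary Int. A small sketch (toInt is my own helper name; signatures are omitted because the inferred Church-numeral types are noisy):

```haskell
zero = \ s z -> z
one  = \ s z -> s z
add  = \ m n s z -> m s (n s z)

-- Interpret a Church numeral by plugging in a concrete successor and zero.
toInt n = n (+ 1) (0 :: Int)
```

Now toInt (add one one) gives 2, and toInt (add one (add one one)) gives 3, confirming that plugging one numeral into the z place of another really does add them.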

Functional Programming 1

Continuation-passing style (CPS)

Do you want to program in style? Well, you probably already do if you write programs at all. The most common style of programming is called ‘direct style’. In this style functions are called with some arguments and return some values (or nothing). When you want to combine functions, you can do so by just calling them individually or inside other functions. You can also exploit the composition operator.

So is there any other way to go about returning from a function? How can a function not just return a value that it has computed? How can you use such a function at all? A function written in continuation-passing style returns a continuation instead. So what is this ‘continuation’ thing? Let’s look at its type first:

type C r a = a -> r

This is just a function from type a to type r. What makes it special is that it will be used as a higher-order function, and it gains importance when used in the continuation monad. If you don’t know what a monad is, it doesn’t matter. Here is its type:

type Cont r a = (a -> r) -> r

where r stands for return value and a for intermediate value. The monad type is a higher-order function: it takes a function from a to r and returns a value of type r. You could also say it is a function that takes a function and returns the result of that function. This sounds like it does nothing at all! Well, it does do something, namely apply that function to a value of type a.

I suggest you now think really hard about functions that have this Cont type. We know such a function takes a function as its argument and applies it to something. Here is a simple example:

foo :: (Int -> r) -> r
foo c = c 42

foo' :: Cont r Int
foo' c = c 42

foo'' :: C r Int -> r
foo'' = \ c -> c 42

I have written the function with three different notations; they do exactly the same thing. In this blog post I will mostly stick with the first way of writing these functions. The types will get a bit messy later on, but I want to be as explicit as possible. These three functions take an argument c, which represents the current continuation, and apply it to the integer value 42.

How do we get our number back? By simply applying foo to the identity function like so:

fortyTwo :: Int
fortyTwo = foo id

fortyTwo' :: Int
fortyTwo' = foo (\ fortyTwo -> fortyTwo)

We call the identity function with the value 42, returning 42. Can we use other functions with foo? Yes, we can. Can we use all functions with foo? No, we can’t. We can only use functions that accept an integer as their first argument. Luckily, there are many of these functions:

fortyThree :: Int
fortyThree = foo (+1)

fortyTwoStr :: String
fortyTwoStr = foo show

So far we have called foo with functions written in direct style, causing it to return a value. However, we can call it with other functions written in CPS. You may now ask yourself: “Can I call foo with itself?”. Yes, you can, but we have to make a slight modification. Remember what foo does: apply the continuation to the value 42. If foo is that continuation, this is analogous to applying foo to 42. We can use a lambda abstraction to deal with this:

foofoo :: (Int -> r) -> r
foofoo = foo (\ fortyTwo -> foo)

foofoo' :: (Int -> r) -> r
foofoo' = (\ fortyTwo -> (\ c -> c 42)) 42

and create an immensely useless function, because exactly nothing has happened and we get our original function back. You can also come to the conclusion that foo can not be applied to foo by the reasoning that foo does not support integers as its first argument, but functions from integers to r. It would look like the illegal expression (\ c -> c 42) 42.

How to play the CPS game

A continuation can be seen as a todo: this part of our code may be completed later. This occurs frequently in direct style as well: when writing a program, a certain function can be left undefined or given a placeholder, to be completed later, while it is already called by some other function. This is what our c is: a placeholder for someone to complete later. We completed our foo function by calling it with the id function.

The person writing the implementation of the placeholder only needs to consider the intermediate value a passed to the continuation. She can remain agnostic about the rest of the original function. If she is lazy herself, she can even insert some continuations (todos) in her code. She then passes the code on to someone else, and this person continues… You get the idea. We can keep using continuations inside our continuations. CPS is just a long list of todos.

When writing in CPS we will have two new rules:

  • Functions take an extra continuation argument; c in our case.
  • Functions are applied to variables or other functions (i.e., lambda expressions).

This has the consequence that the innermost part of an expression must be evaluated first, making the order of evaluation explicit.
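To see this, here is the earlier sum 10 + 20 * 30 written in CPS (a sketch; addC, mulC, and sumCPS are my own names). The multiplication happens first, and its result is explicitly passed on to the addition:

```haskell
-- Direct-style (+) and (*) lifted into CPS: each takes an extra continuation.
addC, mulC :: Int -> Int -> (Int -> r) -> r
addC a b c = c (a + b)
mulC a b c = c (a * b)

-- The nesting spells out the evaluation order: multiply, then add.
sumCPS :: (Int -> r) -> r
sumCPS c = mulC 20 30 (\ p -> addC 10 p c)
```

Applying sumCPS to id yields 610, just like the direct-style sum.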

We will encounter two main usages of continuations:

  • Apply the continuation to something.
  • Apply something to the continuation.

If you want you may think about the possible types of these two somethings. The first one restricts the type of the continuation, as it needs to accept the thing it is applied to. In the foo example this is an integer, making the intermediate value an integer as well. The second one restricts the type of the something applied to the continuation, namely it needs to be a function that accepts continuations as arguments. When applying a function to a continuation, we are actually applying the continuation to the result of that function. It is like performing the function first and after it has finished just continuing on.

We will show some idioms of CPS. They will make more sense once you have read the entirety of this blog post.

  • bind to result of continuation
  • bind to continuation
  • modify continuation

We saw an example of the first idiom with our foo function, where we bound the value 42. More generally, we can bind the result of any continuation.

bindResult :: ((a -> r) -> r) -> (a -> r) -> r
bindResult m c = m (\ a -> c a)

This can be read as: perform computation m and bind a to the result. What it does is pass the intermediate result to the current continuation. This becomes useful later, when we want to do some computation first and then continue with the result of that computation. We can also bind continuations to other functions. Like so:

bindContinuation :: ((a -> r) -> (a -> r) -> (a -> r) -> r) -> (a -> r) -> r
bindContinuation k c = k c c c

The type of the modification function looks like a combination of the two other idioms:

withc :: ((b -> r) -> r) -> ((a -> r) -> b -> r) -> (a -> r) -> r
withc m f c = m (f c)

Chains that bind

With this knowledge we can now start chaining functions like no other. Let’s first define a useful function which helps us do this:

unit :: a -> (a -> r) -> r
unit a c = c a

fortyTwo'' :: Int
fortyTwo'' = unit 42 id

With this function we can now make a continuation from any value of type a. Multiple units can be chained together.

unitChain :: a -> (a -> r) -> r
unitChain a = (unit a) unit unit

Note that unit does not need a lambda abstraction to chain it together, because it takes a first argument of type a. We can also create an at first weird looking expression:

uu :: ((a -> (a -> r1) -> r1) -> r2) -> r2
uu = unit unit

This can be interpreted as a function that takes a function and applies unit to it. We apply id to get our unit function back. You can do the same thing with unitChain.

uuid :: a -> (a -> r) -> r
uuid = uu id

We can use the intermediate value of a CPS function and keep passing it on. What we cannot do is use all of the previous intermediate values. For this reason we define bind to paste continuations together. We saw something similar when defining foofoo. What will be the type of this function? It will take a continuation and a function from the intermediate value of that continuation to a new continuation, and return this new continuation.

bind :: ((a -> r) -> r) -> (a -> ((b -> r) -> r)) -> ((b -> r) -> r)
bind m f c = m (\ a -> f a c)

This function is a bit magical, but can be translated to English: perform computation m and bind a to the result; apply f to the result a and apply that to the continuation c. The main difference in this function is that the continuation c is not applied to something; instead, f is applied to c. We do this to pass our current continuation along. Consider functions that just return their passed continuation:

continueCont :: ((a -> r) -> r) -> (a -> r) -> r
continueCont m c = m c

continueCont' :: ((a -> r) -> r) -> (a -> r) -> r
continueCont' m c = m (\ a -> c a)

This can also be implemented with the id function. Here we immediately see the resemblance with our bind function.

Implementation example

I will give some functions to illustrate unit and bind. Since we are in the habit of making useless functions…

csqrd :: Floating a => a -> (a -> r) -> r
csqrd a c = c (a * a)

csqrt :: Floating a => a -> (a -> r) -> r
csqrt a c = c (sqrt a)

bar :: Floating a => a -> (a -> r) -> r
bar a = csqrd a csqrt

bar' :: Floating a => a -> (a -> r) -> r
bar' a = csqrt a csqrd

Now let’s say we want both the square and the square root:

bothResults :: Floating a => a -> ((a, a) -> r) -> r
bothResults a = csqrt a `bind` \ x ->
                csqrd a `bind` \ y ->
                unit (x, y)

Here bind binds our intermediate results and unit helps us return the result. Note that we cannot use a plain tuple as the final expression here (why not?).

Control Flow

The examples in the previous section work, but are mostly clearer in direct style. What is something you can do in continuation style that you can not do in direct style? We can use CPS for most control flow statements! A few examples:

  • jmp
  • break
  • continue
  • try, throw, catch
  • while
  • if then else
  • yield
  • threading
  • lazy evaluation

We may now pause and think about the reason why these things are possible. How can we just go to any part of our function from our function? This is where we must realize the power of higher-order functions. Calling functions with other functions gives us the possibility to interpret the passed functions as continuations (and call them).

Forward jumps

We will implement the cjmp, lbl pair to skip certain parts of our code. This is a forward jump. Backward jumps are also possible, but a bit more elaborate. Let’s first think about the type of the conditional jump instruction. It should have a condition, which can be represented by a Bool, a continuation to jump to, and the current continuation, like all CPS functions.

cjmp :: Bool -> a -> (a -> r) -> (a -> r) -> r
cjmp True  a c' _ = c' a
cjmp False a _  c = c  a

The True branch ignores the current continuation c and jumps to c'. The other branch does the same thing the other way around. Notice the similarity with the unit function.

The lbl function should take a label and return a continuation. What is a label? A label is a function from two continuations to a result. We get the following:

lbl :: ((a -> r) -> (a -> r) -> r) -> (a -> r) -> r
lbl k c = k c c

With our fancy new function we can now do some flow control:

flow :: (String -> r) -> r
flow c = lbl (\ label ->
              unit "hello" `bind` \ _ ->
              unit "no jump"
           ) c

this returns "no jump" when applied to id, since the label is never actually jumped to.
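For contrast, here is a sketch where cjmp actually transfers control to the label, skipping the rest of the chain (flowJump is my own name; the definitions are repeated so the snippet stands alone):

```haskell
unit :: a -> (a -> r) -> r
unit a c = c a

bind :: ((a -> r) -> r) -> (a -> (b -> r) -> r) -> (b -> r) -> r
bind m f c = m (\ a -> f a c)

cjmp :: Bool -> a -> (a -> r) -> (a -> r) -> r
cjmp True  a c' _ = c' a
cjmp False a _  c = c  a

lbl :: ((a -> r) -> (a -> r) -> r) -> (a -> r) -> r
lbl k c = k c c

-- The conditional jump is taken, so "no jump" is never reached.
flowJump :: (String -> r) -> r
flowJump = lbl (\ label ->
             unit "hello"           `bind` \ _ ->
             cjmp True "jump" label `bind` \ _ ->
             unit "no jump")
```

Applying flowJump to id now yields "jump": the label is the continuation waiting after lbl, so invoking it discards everything that remains in the chain.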

Continuation assignment

We will implement the function callcc, which stands for call with current continuation. It hands us a function that, when invoked, ignores its local continuation and resumes the captured one instead. We see the use of all idioms coming together.

callcc :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
callcc h c = h (\ a _ -> c a) c

fix :: (a -> a) -> a
fix f = let x = f x in x -- Y-combinator

getcc :: ((((a -> r) -> r) -> r) -> r)
getcc = callcc (unit . fix)

getcc' :: a -> (((a, a -> ((b -> r) -> r)) -> r) -> r)
getcc' x0 = callcc (\ c -> let f x = c (x, f) in unit (x0, f))
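A small sketch of what callcc buys us (escape is my own example name; callcc is repeated so the snippet stands alone). Invoking the captured continuation throws away whatever continuation it was given, which is what makes early exits possible:

```haskell
callcc :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
callcc h c = h (\ a _ -> c a) c

-- Invoking 'exit' discards the local continuation, so the error is never run.
escape :: Int
escape = callcc (\ exit _ -> exit 2 (\ _ -> error "never run")) id
```

escape evaluates to 2: the captured continuation (here id) is resumed with 2, and the error-raising continuation passed to exit is simply dropped.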


We have shown that CPS is a powerful tool for creating (sometimes hard to understand) functions with extra flow control, and we have shown a couple of implementation examples. The takeaway is that higher-order functions have a lot of power to them. Monads and CPS are actually interchangeable, if CPS is expressed as a monad like the one at the start of this post. This is exactly what we did in this post without calling attention to it: we defined the unit and bind functions, and together they give us a triple constituting a monad.
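To make the “CPS is a monad” claim concrete, here is a sketch of the standard newtype wrapper (essentially Control.Monad.Cont from the mtl/transformers ecosystem): unit becomes pure and bind becomes >>=.

```haskell
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\ c -> m (c . f))

instance Applicative (Cont r) where
  pure a = Cont (\ c -> c a)                              -- our unit
  Cont mf <*> Cont ma = Cont (\ c -> mf (\ f -> ma (c . f)))

instance Monad (Cont r) where
  Cont m >>= f = Cont (\ c -> m (\ a -> runCont (f a) c)) -- our bind
```

With this wrapper do-notation works: runCont (do { x <- pure 40; y <- pure 2; pure (x + y) }) id yields 42.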


Wadler, Philip. “The essence of functional programming.” Proceedings of the 19th ACM SIGPLAN-SIGACT symposium on Principles of programming languages. 1992.






Chess programing 1

It may come as a surprise to you, but I like chess. I am a member of my local chess club where I play occasionally. I am not very good at chess. I would call myself a beginner that knows all the rules. This will be the first in a series of posts (jinx) on chess. The main objective will be to implement a chess engine that I can not beat. Most of the information in these posts can be found in some way or another on https://www.chessprogramming.org/Main_Page.

Before we do anything we have to define the game of chess. This is like explaining chess to someone who has never played. A computer is not a person, so in order to explain chess to it we need to map chess concepts to structures in the programming language of our choice (C). Each chess concept will be explained with its accompanying structure.

I think the best place to start is the chessboard. Chess is played on an 8 by 8 grid. The rows are called ranks and the columns files. The ranks are given the numbers 1 to 8 and the files the letters a to h. A cell of the grid is called a square. By convention we name a square by its file first and then its rank, for example: e4, e5, f4. The squares on the chessboard also have a color, either black or white. The black and white pattern of the board is called a checkerboard pattern, after another much-played game. Remember that the a1 square is black.

What is the best computer representation for the chessboard? A grid screams to be represented as an array. We then should link indices with square names, such that a1 is equal to element 0 of the array. Indices, and more generally numbers, have arithmetic operations defined on them. We can add, subtract, multiply, etc. This could come in handy later.

We should take a step back (or up) to mathematics. What a board really is, is a function (or a map) from the domain of squares to the range of pieces. More formally: \[ b : S \to P \] where \(b\) is the board function, \(S\) is the set of chessboard squares, and \(P\) is the set of chess pieces.

Since we are working in C, we have access to native, beefy, 64-bit unsigned integers. These bad boys will fit all of our squares exactly. However, per integer we can only encode a one or a zero on each square. We will therefore store multiple ‘bitboards’ and an array of pieces.

my way or the Conway

Another day, another implementation. Today I have implemented Conway’s game of life. It is a very simple model that can show very complicated results. It is also a lot of fun. What are the rules of this game? Well, I will tell you. First, let us start with the components of the system. We have

  • a 2-dimensional grid
  • a set of rules
    • alive and not exactly two or three alive neighbours -> dead
    • dead and three alive neighbours -> alive

Now we only have to define neighbours. In a 2D grid these are the 8 squares around a particular square. I have drawn a little ASCII graphic.

| O | O | O |
| O | X | O |
| O | O | O |

The O’s mark the neighbours and the X the particular square in question. Now all we have to do is implement it.
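My implementation isn't shown here, but the rules above fit in a few lines of Haskell. A sketch, representing the board as a set of live cells (step, around, and neighbours are my own names):

```haskell
import qualified Data.Set as Set

type Cell = (Int, Int)

-- One generation: a live cell survives with 2 or 3 live neighbours,
-- a dead cell becomes alive with exactly 3.
step :: Set.Set Cell -> Set.Set Cell
step live = Set.fromList [ c | c <- candidates, rule c ]
  where
    around (x, y) = [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                                       , (dx, dy) /= (0, 0) ]
    candidates    = Set.toList live ++ concatMap around (Set.toList live)
    neighbours c  = length (filter (`Set.member` live) (around c))
    rule c
      | Set.member c live = neighbours c `elem` [2, 3]
      | otherwise         = neighbours c == 3
```

For example, the vertical blinker {(0,-1), (0,0), (0,1)} steps to the horizontal {(-1,0), (0,0), (1,0)} and back again.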

it’s complicated

I have been busy with complex numbers in JavaScript. I wanted to implement the Durand-Kerner root-finding algorithm. This algorithm requires complex arithmetic in order to work. So, in order to implement something, you need to implement something else. You just have to hope that you have arrived at the lowest level.

What is a complex number? It is a number of the type: \[ z = a + bi \] this is called rectangular form, and it is the form most people are familiar with. There is another form, called polar form, which looks like this: \[ z = re^{i\theta} \]

To translate from polar form to rectangular form we use Euler’s formula: \[ re^{i\theta} = r(\cos \theta + i \sin \theta). \] To translate from rectangular to polar coordinates, we use the two-argument arctangent (atan2): \[ a + bi = \sqrt{a^2 + b^2}e^{i \arctan(b, a)}. \]

Why do we need these two forms? Well, some of the arithmetic operations make more sense in one form than in the other and, moreover, are easier to compute. To implement most of the functions we are familiar with we need to start with the three basic operators:

  • addition
  • multiplication
  • exponentiation

My implementation assumes the rectangular form as canonical. All operators are written for this form. For exponentiation we translate to polar, perform the operations, and translate back to rectangular.

Now for the mathematical definitions: \[ (a + bi) + (c + di) = (a + c) + (b + d)i \] \[ (a + bi) \cdot (c + di) = (a \cdot c - b \cdot d) + (a \cdot d + b \cdot c)i \] To define exponentiation we will first define the logarithm and the exponent. Notice how these operations are very similar to the translation operations defined before. \[ \log(a + bi) = \log(\sqrt{a^2 + b^2}e^{i \arctan(b, a)}) = \frac{1}{2}\log(a^2 + b^2) + \arctan(b, a)i \] \[ e^{a + bi} = e^a e^{bi} = e^a(\cos b + i\sin b) \] Now we can establish the general complex power: \[ (a + bi)^{c + di} = e^{(c + di)\log(a + bi)} \] which is defined in terms of our previous operations \(\cdot,\log,\exp\).
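The definitions above translate almost one-to-one into code. My implementation is in JavaScript; here is a sketch in Haskell with my own names (Haskell also ships Data.Complex, but spelling it out mirrors the formulas):

```haskell
data Complex = Complex Double Double deriving Show  -- a + bi, rectangular form

cadd, cmul, cpow :: Complex -> Complex -> Complex
cadd (Complex a b) (Complex c d) = Complex (a + c) (b + d)
cmul (Complex a b) (Complex c d) = Complex (a*c - b*d) (a*d + b*c)

-- log and exp go through the polar identities above.
clog, cexp :: Complex -> Complex
clog (Complex a b) = Complex (0.5 * log (a*a + b*b)) (atan2 b a)
cexp (Complex a b) = Complex (exp a * cos b) (exp a * sin b)

-- General power: z^w = exp (w * log z).
cpow z w = cexp (w `cmul` clog z)
```

As a sanity check, cpow (Complex 0 1) (Complex 2 0) computes i² and comes out as approximately -1 (up to floating-point noise in the imaginary part).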

And that is most of it. To approximate trigonometric functions we use the identity (also found by using Euler’s formula): \[ \cos z = \frac{e^{iz} + e^{-iz}}{2} \] Calculating the inverse is left as an exercise to the reader. My code can be found here.


I wrote my own little brainfuck interpreter. You can read all about brainfuck on Esolang, the esoteric programming languages website. They have a whole list of weird programming languages. But I’m going to tell you about it anyway.

Brainfuck is a programming language designed to be as small as possible. This does not mean that programs written in brainfuck are necessarily small. No, it means that the interpreter/compiler can be written in a small number of bytes.

There are only 8 commands in the language:

  • > move pointer right
  • < move pointer left
  • + increase value at pointer
  • - decrease value at pointer
  • . output character at pointer
  • , input character at pointer
  • [ jump past ] if value at pointer is 0
  • ] jump to [ if value at pointer is not 0.

This language is Turing complete, which means that any computable program may be written in it. Doing so is not recommended, however.

I implemented my interpreter in the C programming language, because of its native pointer support. Let’s look at the code:

#include <stdio.h>

void eval(char *s, char *p) {
  int i;

  for (; *s; s++)
         if (*s == '>') ++p;
    else if (*s == '<') --p;
    else if (*s == '+') ++*p;
    else if (*s == '-') --*p;
    else if (*s == '.') printf("%c", *p);
    else if (*s == ',') scanf("%c", p);
    else if (*s == '[' && !*p) {
      for (i = 1, s++; i; s++)
        i += (*s == '[') - (*s == ']');
      s--; /* land on the ]; the outer s++ moves past it */
    }
    else if (*s == ']' && *p) {
      for (i = 1, s--; i; s--)
        i += (*s == ']') - (*s == '[');
    }
}
Doesn’t that look nice and neat? I was very pleased with myself. Until I found out that a lot of people have written brainfuck interpreters in brainfuck itself, that are shorter and of course more elegant. The *s pointer points to the program text. The *p pointer points to the actual data array.

The structure of the program should be familiar if you have ever written any interpreter code: we try to match one of the 8 commands and execute logic accordingly. The s-- decrements in the bracket cases are there to ensure that, together with the outer loop’s s++, the string pointer does not end up too far past the matching bracket.

To actually make this code do something, some additional I/O code needs to be written. Try making an interpreter for yourself and see what happens!

Double trouble

Following up on the post about Lorenz systems I have made a visualization of the double pendulum. Both of these systems are prime examples of systems studied in chaos theory. That is: systems that are highly sensitive to initial conditions.

Many of these systems can also be seen in nature, for example the weather. One of the problems is that most numbers in nature are non-computable: they simply cannot be approximated to within arbitrary precision by any program. This is kind of a scary thought.

Back to the system at hand. I used the wiki, the explanation there is probably better. But here is my take anyway. The double pendulum can be described by 7 variables:

  • gravity constant
  • length of both rods
  • mass of both rods
  • angle of the first rod
  • angle of the second rod
  • momentum of the first rod
  • momentum of the second rod

The Lagrangian of the system (kinetic minus potential energy) is built from:

  • linear kinetic energy \(E_k = \frac{1}{2} m v^2\)
  • rotational kinetic energy \(E_r = \frac{1}{2} I \omega^2\)
  • potential energy \(E_p = mgh\)

We use some other identities to fill in these formulas:

  • moment of inertia thin rod \(I = \frac{1}{12} m l^2\)
  • x-center of mass rod \(x = \frac{l}{2} \sin \theta\)
  • y-center of mass rod \(y = \frac{l}{2} \cos \theta\)
  • 2d velocity \(v^2 = \dot{x}^2 + \dot{y}^2\)

We throw all this together to get our pendulum applet. Have a happy new year!

Lorenz system

I have been doing some stuff with differential equations. They provide some unexpected results and show seemingly random behavior. I made a little visualization with the HTML canvas of a Lorenz (not Lorentz) system: https://en.wikipedia.org/wiki/Lorenz_system

  c = document.getElementsByTagName("canvas")[0];
  (ctx = c.getContext("2d")), (c.width = 500), (c.height = 500);
  // note: c is reused below as a Lorenz parameter; the canvas is no longer needed
  (x = y = z = 1), (a = 10), (b = 28), (c = 8 / 3), (dt = 0.005);
  setInterval(() => {
      [x, y, z] = [
	  x + a * (y - x) * dt,
	  y + (x * (b - z) - y) * dt,
	  z + (x * y - c * z) * dt
      ];
      ctx.fillRect(200 + 5 * x, 200 + 5 * y, 1, 1);
  }, 10);

You can see it live here: https://05dd8515-5617-4479-92db-df9de14327bc.htmlpasta.com/

My mandelbrot

I have tried implementing a zoomable Mandelbrot set for a while. Today I created a satisfactory implementation. There is no native support for complex numbers in JavaScript, so we keep the real and imaginary parts of each complex number separate. The Mandelbrot recurrence relation is:

z_{n+1} = z_{n}^2 + c

The value of z_0 is c. We iterate this recurrence. The Mandelbrot set is the set of all complex numbers c for which the sequence remains bounded. We will write a function for this relation and then we are basically done. The rest of the code will simply draw the set and handle zoom events.

function f(a0, b0) {
      [a, b] = [a0, b0]
      for (i = 0; i < N && a*a + b*b < 4; i++)
	  [a, b] = [a*a - b*b + a0, 2*a*b + b0]
      return i === N  // true when the point has not escaped after N iterations
}
This is our function. The implementation can be seen live at https://e835fe86-83d5-4143-83c7-6e3fef8c6704.htmlpasta.com/. Use the spacebar to zoom and the arrow keys to move.