Tangle

Haskell and F# have a peculiar way of turning a quite generic mathematical idea into something more tangible (and more usable, too) for programming purposes. This article highlights how monads (in F#) are just one example in a zoo of algebraically related structures. In essence, monads are closely related to categories (cf. the Kleisli triple in the previous article), and a category is a doorway to, well, a whole universe of well-known stuff. By ‘well known’ I don’t necessarily mean easy stuff, rather that it’s an established field of research (cobordism, braids, the theory of abstract languages, DNA topology, quantum fields, whatnot), and I want to give you a taste of what things look like when viewed through the category microscope.

It’s indeed quite remarkable that an abstract algebraic construct (category and functor) is now suddenly not that far away from (everyday) programming. What makes categories fascinating is not so much that they help to solve concrete problems (no, they’re not gonna make your Black-Scholes algorithm any faster) but their potential to map regions; it’s a cartographer’s tool, if you prefer. Consider for example the tangle (strictly speaking a 2-tangle) below, which transforms a state X to another state Y via a tangle f. The collection of all these tangles can be made into a category Tang2.


which at first sight doesn’t seem to have much to do with an F# algorithm in the category Type (.NET data types), but it does! There is a so-called natural transformation (isofunctor) α such that (see [4] for details)



Isomorphisms (and anything ‘iso’ in fact: isofunctors, isotopies, …) are just a way of saying that things behave similarly despite their difference in appearance. The idea of ‘representations’ (as in group representations) is not far away here, but on a different abstraction level.

Another category where things seem to be different (but are not) is the space of boundaries and the morphisms interpolating between these boundaries, creating so-called cobordisms:


If you have ever picked up an article on string theory (no, not ‘string’ in a programming context but in the context of elementary particles) you will immediately recognize that this closely resembles a Feynman diagram of a string process. The collection of cobordisms can be made into a category Cob, which again has features that can be mapped onto Tang2 or Type.

Feynman diagram

These isomorphisms are what I mean by a cartographer’s tool; they make concrete the relationships between seemingly very different domains.

Look closely at an F# snippet like


type Result =
    | Success of float
    | Infinite

type SafeDivision() =
    member _.Bind(x: Result, rest: float -> Result) =
        match x with
        | Success v -> rest v
        | Infinite -> Infinite
    member _.Return(x) = Success x

and the pictures above; intuitively there is an obvious pattern (a transformation). Category theory and functors make this pattern concrete. It’s a mathematically precise way of saying ‘something transforms into something else’ in different contexts.
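To see the pattern actually run, here is a minimal, self-contained sketch of such a builder in use. The names `Result`, `SafeDivision` and `divide` are my own choices, and the definitions are repeated so the snippet compiles on its own:

```fsharp
// A result that is either a number or the 'division by zero' state.
type Result =
    | Success of float
    | Infinite

// The builder: Bind threads a Result into the next computation,
// short-circuiting as soon as an Infinite shows up.
type SafeDivision() =
    member _.Bind(x: Result, rest: float -> Result) =
        match x with
        | Success v -> rest v
        | Infinite -> Infinite
    member _.Return(x) = Success x

let safeDivision = SafeDivision()

let divide x y = if y = 0.0 then Infinite else Success (x / y)

// Chain divisions; any Infinite along the way poisons the whole block.
let ok  = safeDivision { let! a = divide 10.0 2.0
                         let! b = divide a 5.0
                         return b }          // Success 1.0

let bad = safeDivision { let! a = divide 10.0 2.0
                         let! b = divide a 0.0
                         return b }          // Infinite
```

Note that `Bind` is exactly the glue that composes two ‘effectful’ arrows, i.e. composition in the Kleisli category mentioned earlier.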

Another way of looking at the F# code above is to say that it transforms an instance of one data type into an instance of another data type. Actually, any procedure (whether in VB.NET or Objective-C) fits the description, and there is indeed a functorial transcription which makes the analogy precise. One can go beyond this by looking at purely logical statements, say something like

True and true result in true (if a is true and b is true, then a∧b is also true)

And another statement like

If a∧b is false then either a is false or b is false

A path from the first statement to the second is called a ‘proof’ and is literally a path through the Boolean category (the space of logical statements, if you wish). A proof perceived in this way is a morphism, and one can describe a functorial relationship between logical proofs and programming procedures in the category Type. So, there is for sure something magical in looking at all these spaces as variations (representations) of the very same simple structure. One could potentially go even further and attempt to see ‘paths’ (proofs, procedures) as the basis for a differential or integral structure, and in this way define something like a path integral in the space of proofs. The functorial translation of this into the Type category would be something like combining procedures, thus leading to… programs and applications? Who knows. In any case, there are ample ways to define a geometric structure on discrete structures (see [3]), notably through what is called non-commutative geometry.
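This proofs-as-morphisms reading can even be typed out. Under the standard Curry–Howard reading, a proposition is a type and a proof is a program of that type; the function names below are my own illustration:

```fsharp
// 'a ∧ b' corresponds to the pair type 'A * 'B; the proof that
// a ∧ b implies a is simply the first projection.
let andElimLeft ((a, _): 'A * 'B) : 'A = a

// 'a ∨ b' corresponds to Choice<'A,'B>; proving 'a implies a ∨ b'
// is injecting into the left branch.
let orIntroLeft (a: 'A) : Choice<'A, 'B> = Choice1Of2 a

// Composing the two proofs is composing two morphisms in Type:
// from 'a ∧ b' we can always reach 'a ∨ c'.
let composed : int * string -> Choice<int, bool> =
    andElimLeft >> orIntroLeft
```

Chaining proofs with `>>` is exactly chaining morphisms; a multi-step proof is a composite path through the category.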

These similarities go wide and deep. For a long time, path integrals and tools from quantum field theory have been used in biochemistry to model topological aspects of DNA, which in turn is closely linked to evolution (mutation, recombination of bits of information). Now it looks like the similarities are getting more and more of an algebraic basis; see [9] [10].

In fact, just think of some bits and strings of long molecules (like RNA or DNA) and imagine they are topologically intertwined. They get knotted, and by playing with some strands you will easily discover that the following picture is ‘true’.

Reidemeister moves

This picture is (besides yet another transformation from one state to another) one of the so-called Reidemeister moves and is closely related to knot theory. It’s also a statement about various categories (the so-called cartesian closed categories) and a stepping stone towards a whole exciting field of research. Can you work out what this might mean in the Type category? Maybe something about reordering type parameters in a procedure?
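One very loose reading of such a crossing in the Type category (my own toy interpretation, not a claim from the literature) is swapping the components of a pair; crossing twice then undoes the crossing:

```fsharp
// The 'crossing' of two strands: swap the components of a pair.
let swap ((a, b): 'A * 'B) : 'B * 'A = (b, a)

// Crossing twice is the identity, echoing the second Reidemeister move.
let doubleCross = swap >> swap

let roundTrip = doubleCross (1, "knot")   // (1, "knot")
```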

To some extent, the reason why category theory works so well in many domains is that graphs are an efficient way of representing things, from traffic problems to protein folding, from NoSQL systems to food recipes. Graphs are in their own right a category, but graphs are also a good generic way of representing other categories. For example, the following graph represents a simple programming language based on a subset of the F# language. It’s complete (closed under operations) and forms in fact a very simple category. You can easily invent your own closed programming flavor (in JavaScript, Eiffel or any existing language) and draw a graph of its operations. If the graph you end up with ‘looks’ like the one below, then there is a high chance you can define an isofunctor and thus conclude that they are the same thing on a higher (algebraic) level. There is more to all this than my hand-waving here, but I don’t want to go into too much mathematical detail; you can find more enchanting stuff in [6].
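As a toy illustration of a graph-as-category (the node and edge names are invented for the example), here is a tiny ‘language’ of three types and three primitive operations, together with the composites the graph allows:

```fsharp
// Objects (nodes) are type names; morphisms (edges) carry
// a name, a source node and a target node.
let edges =
    [ "length", "string", "int"        // string -> int
      "isEven", "int",    "bool"       // int -> bool
      "show",   "int",    "string" ]   // int -> string

// Composition: g after f exists exactly when f's target is g's source.
let composites =
    [ for (f, a, b) in edges do
        for (g, b', c) in edges do
          if b = b' then yield (g + " << " + f, a, c) ]
// e.g. ("isEven << length", "string", "bool") is one of them
```

Checking that every such composite is again an edge of the graph is precisely the ‘closed under operations’ property mentioned above.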



You can also take the space of regular expressions and play with it on a more abstract level. It’s known, for example, that it admits a differential structure (yes, you can define a derivative of a regular expression) and that it satisfies all the things you would expect of a derivative. Bet you never thought of regular expressions as a way to do differential geometry!
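That derivative is Brzozowski’s derivative: differentiating a regular expression with respect to a character yields the expression matching whatever may follow that character. A minimal sketch (constructor names are mine):

```fsharp
type Regex =
    | Empty                  // matches nothing
    | Eps                    // matches only the empty string
    | Chr  of char
    | Alt  of Regex * Regex  // r | s
    | Cat  of Regex * Regex  // r s
    | Star of Regex          // r*

// Does r accept the empty string?
let rec nullable r =
    match r with
    | Empty | Chr _ -> false
    | Eps | Star _  -> true
    | Alt (a, b)    -> nullable a || nullable b
    | Cat (a, b)    -> nullable a && nullable b

// Brzozowski derivative: the language of what may follow after c.
let rec deriv c r =
    match r with
    | Empty | Eps -> Empty
    | Chr x       -> if x = c then Eps else Empty
    | Alt (a, b)  -> Alt (deriv c a, deriv c b)
    | Cat (a, b)  ->
        let d = Cat (deriv c a, b)
        if nullable a then Alt (d, deriv c b) else d
    | Star a      -> Cat (deriv c a, Star a)

// Matching = repeated differentiation, then a nullability check.
let matches r (s: string) =
    nullable (Seq.fold (fun r c -> deriv c r) r s)
```

So `matches (Star (Chr 'a')) "aaa"` holds while `matches (Chr 'a') "b"` does not, and the familiar sum and product rules of calculus have direct analogues in the `Alt` and `Cat` cases above.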

Now, I could modulate and describe so many other variations on this theme (did I mention the thermodynamics of data types and the entropy of .NET applications? [2]) but I hope you have by now caught a glimpse of this wide categorical landscape. As I said above, there is no direct benefit for a developer in seeing these abstract relationships (although no one can deny that abstract programming patterns are invaluable), but it offers you a bird’s-eye view of a landscape in which programming (and F# in particular) does not sit unconnected from other developments and endeavours.