Okay, so stuff on logical relations still isn't sinking in today and I'm feeling pretty foolish. So, let's get back onto my home territory: category theory. Specifically, a paper from a few years ago by Hyland & Power about Lawvere theories and their connection to computer science.

Okay, so first off, what's a Lawvere theory? Well, basically, it's a category with only one "base" object, whose other objects are exactly the finite products of that base object with itself, one for each arity. Each such category, up to isomorphism obvs, corresponds to a single-sorted algebraic theory. There's a good explanation of them here that I recommend reading. In order to "get it" I had to work through their example of the theory of groups and sketch out the equivalence.
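To make the groups example concrete, here's a minimal sketch of the Lawvere theory of groups in Haskell. This is entirely my own toy encoding, not anything from the paper: objects are natural numbers n (standing for the n-fold product of the base object), a morphism n → m is an m-tuple of group terms in n variables, and composition is substitution.

```haskell
-- Terms of the theory of groups in n variables.
data Term
  = Var Int        -- a projection out of the n-fold product (a variable)
  | Unit           -- the nullary operation e : 0 -> 1
  | Mult Term Term -- the binary operation (*) : 2 -> 1
  | Inv Term       -- the unary operation (-)^-1 : 1 -> 1
  deriving (Show, Eq)

-- A morphism n -> m is an m-tuple of terms in n variables.
type Hom = [Term]

-- Composition (g . f) with f : n -> m and g : m -> k is substitution:
-- plug f's terms in for g's variables.
compose :: Hom -> Hom -> Hom
compose g f = map (subst f) g
  where
    subst env (Var i)    = env !! i
    subst _   Unit       = Unit
    subst env (Mult s t) = Mult (subst env s) (subst env t)
    subst env (Inv t)    = Inv (subst env t)

-- The identity on n is the tuple of projections.
identity :: Int -> Hom
identity n = [Var i | i <- [0 .. n - 1]]
```

(To be a real Lawvere theory you'd also quotient terms by the group axioms; Haskell won't do that for us, so this only captures the free part of the structure.)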

...so, what's the connection to computer science? Well, first let's go back to monads. Err...real monads, not the burritos that appear in the over 9000 Haskell monad tutorials. So, in Moggi's work, monads were useful because they were, basically, generalized let statements. These let statements allow us to sequence effects in our computations, such as state, I/O, etc., which is really useful for describing the semantics of a language! Okay, but what about the actual morphisms that perform the side-effects? Well, they're left out in the cold, so to speak. They're called "non-proper" morphisms and are subject only to some naturality conditions. Now, to be fair, that's nothing to sneeze at, but it doesn't capture most of the equations you'd want: just the bare minimum to make sense.
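To see the "generalized let" in action, here's a hand-rolled state monad (written out rather than imported from `Control.Monad.State`, just so the example is self-contained; `tick` is my own toy example). Bind is exactly the let: run the first computation, name its value, thread the state through to the rest.

```haskell
-- A minimal State monad; runState peels off the state-threading function.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g =
    State (\s -> let (h, s') = f s; (a, s'') = g s' in (h a, s''))

instance Monad (State s) where
  -- Bind is Moggi's generalized let: run m, feed its value to k,
  -- and thread the state through in order.
  State m >>= k = State (\s -> let (a, s') = m s in runState (k a) s')

-- The "non-proper" morphisms: the operations that actually touch the state.
get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s = State (\_ -> ((), s))

-- A tiny effectful program: return the counter, then increment it.
tick :: State Int Int
tick = do
  n <- get      -- "let n = current state"
  put (n + 1)   -- the side-effecting operation
  return n
```

Running `runState (do { a <- tick; b <- tick; return (a, b) }) 0` gives `((0, 1), 2)`: the monad structure handles the sequencing, while `get` and `put` do the actual work.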

Well, the neat thing is that if you describe side-effects by their operations as some kind of Lawvere theory, you can recover the monad of side-effects as a coend that kinda "rolls up" all the operations into the structure they operate on. That description wasn't very good, was it? I'll have to think of a better picture at some point. Now, since monads are a more general structure than Lawvere theories, you get a bit of a loss of information, i.e. the less stringent equations on operations; however, it does mean that from the more rigid description we can recover the "generalized let" we want for sequencing.

Now here's where I get murky: I haven't done any serious examples, so I don't see entirely how one would directly use this to describe the semantics of a programming language with side-effects. I get conceptually that it's more specific, but I'm not entirely clear on *how much* it buys you and whether it's worth it. I probably need to read the half-dozen papers written by Plotkin et al. on the topic. :-p

Now, okay, so I've told a little bit of a lie so far: if one does these constructions in "ordinary" category theory, this whole trick of recovering the monad from the Lawvere theory only really works over the category Set, which isn't terribly useful for programming languages that allow general recursion...instead, we have to move over to enriched category theory, which I'm still wrapping my head around.

The basic point of enriched category theory is that rather than having sets of morphisms, one has hom-objects drawn from another monoidal closed category. The normal rules about composition still, basically, hold the way you'd expect, just stated on the hom-objects. Again, the nitty-gritty of what changes in enriched category theory is lost on me at the moment; however, I know that there has been work to rebuild the theory of Lawvere theories in the enriched setting, and that this is what's actually used in the Plotkin et al. papers.
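Concretely (this is just the textbook definition, nothing specific to the paper): for a category C enriched over a monoidal closed category (V, ⊗, I), composition and identities are themselves morphisms of V,

```latex
\circ_{a,b,c} \colon \mathcal{C}(b,c) \otimes \mathcal{C}(a,b) \longrightarrow \mathcal{C}(a,c),
\qquad
j_a \colon I \longrightarrow \mathcal{C}(a,a)
```

subject to associativity and unit axioms stated as commuting diagrams in V. Ordinary category theory is the special case V = Set with ⊗ the cartesian product.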

So, yeah, this was a bit longer than I intended but I had a lot to talk out with this paper.