Wednesday, August 31, 2011


So for the past couple of months I've been reading more about directed type theory/homotopy type theory and trying to find a manageable sub-problem that I can honestly tackle at the moment.

I've been interested in what the general induction story should be for higher-dimensional type theory, particularly 2-dimensional type theory as presented either here or here. On the surface, writing down the rules for W-types and how they should transform seems rather straightforward: an iso between shapes should induce an appropriate iso between W-types, as should an iso between contents. What I'm not sure about is how to fit W-types into Garner's 2-dimensional type theory semantics. Part of the problem, I think, is that I'm not really familiar with the general story of models for W-types in type theory.
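To fix intuitions, here's a sketch of the 1-dimensional situation in Haskell: without dependent types we can't index positions by shapes, but W-types collapse to initial algebras of container functors. All the names here (W, Sup, fold, Nat) are my own for illustration, not from any of the papers.

```haskell
-- W-types, approximated as initial algebras of container (polynomial)
-- functors. A dependent W-type would let the positions vary with the
-- shape; here the shape functor f bundles both.
newtype W f = Sup (f (W f))

-- Natural numbers as a W-type: the shape functor is Maybe, where
-- Nothing is "zero" (no subtrees) and Just is "successor" (one subtree).
type Nat = W Maybe

zero :: Nat
zero = Sup Nothing

suc :: Nat -> Nat
suc n = Sup (Just n)

-- The recursor: fold an algebra over the well-founded tree. This is the
-- non-dependent shadow of the induction principle.
fold :: Functor f => (f a -> a) -> W f -> a
fold alg (Sup t) = alg (fmap (fold alg) t)

toInt :: Nat -> Int
toInt = fold (maybe 0 (+ 1))
```

The 2-dimensional question is then what `fold` should do to the isos: an iso of shape functors should give an iso of the corresponding W-types, compatibly with `Sup`.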

Other than that, I helped organize and teach the very first Portland intro to programming for women workshop. Feedback from students has been extremely positive so far; I'm really glad to have been involved and look forward to continuing to organize these events.

Thursday, June 2, 2011

One of those days

My update for the day is fairly skimpy. I skimmed "A View From The Left", started a paper on the omega-groupoid interpretation of ITT, and subsequently tripped over my lack of intuition about operads and landed flat on my fibration. I think I sprained a syntactic category, or something equivalent to it.

In all seriousness, I clearly need to read some relevant chapters both from Jacobs's Giant Yellow Book and from Leinster's book on operads and higher categories. I was skimming along fine with my current level of mathematical knowledge, but I need to brush up a bit now. I've read a bit of the machinery for the syntactic category of a dependent theory, but it was long enough ago that I can barely hum the tune. I probably could have faked it slightly better today had I slept well, but it was one of those days when I woke up before 6 for no apparent reason.

I did, though, see a talk about relativistic programming and chatted briefly with one of the grad students working on it. It turns out that their last submission outlines a kind of induction principle for reasoning about concurrent programs that use their primitives for blocking. I was told that they hoped someone could pick up the note and mechanize the reasoning in some fashion, but I was good and didn't offer up my services. :-) Maybe it'd be a cute thing to do at some point during my PhD, but at the moment I'm too excited about digging into OTT/HTT/etc.

Wednesday, June 1, 2011

Papers of the Day: Moar Types

This is the second day of my type theory binge. Today I read the first couple of chapters of Dan Licata's thesis, Hofmann & Streicher's paper on groupoid models of type theory, and Altenkirch's original paper on OTT.

As far as the thesis went, I only read the introduction and the background chapters, so I haven't gotten into the heart of it yet. Funnily enough, the background chapter did have some food for thought in the distinction between derivability and admissibility: a derivable rule is built directly from the primitive rules and so survives any extension of the theory, while an admissible rule (cut elimination is the classic example) is merely true of the theory as it stands and can break when new rules are added. I don't know if this is a common way of looking at things that I would have seen had I ever really taken courses in logic, but in any case I think it makes a lot of sense. I'll probably try to read chapter three tomorrow.

Alright, so now we come to the things I'm still chewing on. First, there's the Hofmann & Streicher paper. The basic point of the paper was to prove that uniqueness of identity proofs doesn't follow from intensional type theory, by constructing a model where it fails. But I can see the deeper insight in using groupoids to model types, setting the stage for the later omega-groupoid models which I intend to read about tomorrow. It's also a good illustration of the general technique of constructing models to help determine whether a property of the logical system is necessarily true. I'll probably review some of the constructions again tomorrow, though, before I read the papers on omega-groupoid models.

Speaking of models, that's the main thing I've gotten so far out of the Altenkirch paper: the idea that one can construct a model of a proposed theory within another type theory, to determine whether it's even possible to build. I think there's more I should be getting out of it, but honestly it was the last thing I read today and I was a bit burned out. I'll put it back on the pile for another attack tomorrow.

Tuesday, May 31, 2011

Paper(s) of the day: Type Theory Frenzy

Today was my first real day back in graduate school, starting my "second"* year. I had a meeting with my advisor where we discussed what my focus for the next few years will be, so that I can start publishing and graduate.

The idea I had over the weekend was to really dig into the cutting edge of type theory. It's got math, it's foundational, and it's relevant to mechanizing all the mathematics I love so much. It makes a lot of sense as a direction to go in. Of course, the next task was to figure out where the edge is so that I can start pushing. 

I started with two papers on observational type theory and a survey on homotopy type theory. 

I'm still very much processing what I read today, but I'll try to summarize things to the extent that I understand them.

First, let's discuss observational type theory. As background, consider the difference between intensional and extensional type theory. In both, you have a propositional equality, defined by a constructor representing reflexivity, and a definitional equality, used by the typechecker, that compares the normal forms of the objects in question. The difference between intensional and extensional type theory is whether you are allowed to use the propositional equality as a witness for the definitional equality, making it relevant to typechecking! This has the well-known effect of making typechecking undecidable. It also makes execution hairier business: in a context containing falsity, ill-formed terms can typecheck and thus be evaluated. I know some people have found ways around the scary bits of extensional type theory in order to exploit its good properties, but I think it's uncontroversial to say that neither intensional nor extensional type theory seems terribly satisfying as a stopping point.

Now we come to observational type theory, which attempts to hit a sweeter spot: more of the power of the extensional theory (e.g. functional extensionality) without losing the computational properties of the intensional theory. How? Well, I don't have a better way to explain it than this: observational type theory is built on a heterogeneous equality, based on coercions, that takes into account the actual type structure of what is being compared - allowing a term to be safely "cast" between two types exactly when the two types are "equal" computationally. Basically, it respects a notion of iso between types instead of a more evil (in the technical sense, mind you) equality. They don't quite put it that way in the two papers I read, but it seems like the right way to take things. Having this less evil notion of equality lets us have our cake and typecheck it too.
The actual details of some of the coercions seem both a bit over my head and not terribly exciting. I hope to scrutinize the actual Agda implementation a bit in order to get a feel for the way the isos are constructed.
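The coercion idea can at least be gestured at in Haskell using GHC's type-equality GADT. This is a toy homogeneous version - OTT's equality is heterogeneous and computed from the structure of the two types - so take it as my own illustrative sketch, not OTT itself:

```haskell
import Data.Type.Equality ((:~:) (Refl))

-- Coercion: given evidence that type a equals type b, transport a term
-- of type a to type b. In OTT the evidence is computed by recursion on
-- the structure of the two types; here it's just the Refl constructor.
coe :: a :~: b -> a -> b
coe Refl x = x

-- A degenerate use: Int equals Int, so we may "cast" along reflexivity.
example :: Int
example = coe Refl (3 :: Int)
```

The interesting part of OTT, which this sketch can't show, is that `coe` between, say, two function types unfolds into coercions on the domains and codomains - that's the "respects the type structure" bit.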

Alright, speaking of isos, now we come to homotopy type theory. From what I can tell, homotopy type theory is, at the moment, an exploration of the connections between intensional type theory and higher category theory. For example, the tower of identity proofs over a type - identity proofs on the type, identity proofs on the type of identity proofs, and so on - has the structure of a weak omega-groupoid: a "group-like" structure in which each level obeys its laws only up to transformations at the next level up, very much like the omega-groupoid generated by the tower of homotopies on a space. I'm still really shaky on what's going on here, beyond the fact that work over the past 2-3 years has started to solidify the idea that generalized "homotopy-like" spaces are (probably?) sound and complete models for intensional type theory, meaning that type theory is the internal language of homotopy theory. The main thing that excites me about this is that it puts words to a gut feeling I had when I first started to learn type theory as I was getting into CS. My background at that point was a mish-mash of diff geo and category theory, and the relationship between operational and denotational semantics (for example) reminded me very much of 2-categories - leading me to an old paper by R.A.G. Seely that maybe I'll ramble about another time - and I wondered what higher-order transformations might mean. In my head it looked a lot like, well, homotopies! I know that's a little different from the situation here with homotopy type theory, but I'm excited to think that my first loves in mathematics have deep connections to CS & logic.
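The first level of that groupoid structure is easy to exhibit concretely. Using GHC's equality GADT again (a toy stand-in for propositional equality, my own rendering):

```haskell
import Data.Type.Equality ((:~:) (Refl))

-- Identity proofs carry groupoid structure: Refl is the unit, trans is
-- composition, and sym provides inverses.
sym :: a :~: b -> b :~: a
sym Refl = Refl

trans :: a :~: b -> b :~: c -> a :~: c
trans Refl Refl = Refl
```

In type theory proper, these operations themselves satisfy the groupoid laws (associativity of `trans`, `sym` being an inverse, etc.) only up to *higher* identity proofs, and those up to still higher ones - that's exactly where the weak omega-groupoid comes from.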

What about models of observational type theory? Do they also have rich higher-categorical structure? I don't know! I'd like to know, whether that means investigating myself or reading someone else's work! Someone directed me to a recent dissertation on directed type theory, which, I guess, is an approach to type theory based on higher categories without the restriction to groupoids.

* Technically, third, but I lost roughly a year total due to medical stuff & some heinousness and thankfully my department is respecting that these events were pretty much outside of my control.

Friday, May 27, 2011

Paper of The Day: Monads

Okay, so since I was travelling for five hours for Memorial Day weekend, I wanted to do something kind of easy in the car on the way here.

It's a paper that everyone should have (and probably has) read: Moggi's paper on monads and the computational lambda calculus.

The most basic point of this paper is a pretty well known story: strong monads act as generalized let-statements in the internal language of a category. Why do they need to be strong? Well, the strength essentially acts as a way of ensuring that you can "copy" the context from the binding part of the let-statement to be available in the final expression.
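In Haskell terms, the strength is just a `fmap` that smuggles the context in, and Moggi's "let" is bind. A quick sketch (the names `strength` and `letM` are mine; Haskell's laziness means every functor here is automatically strong, which is exactly why this point is invisible in day-to-day Haskell):

```haskell
-- The tensorial strength for any Haskell functor: it threads an ambient
-- value (the "context") into the computation so later code can use it.
strength :: Functor m => (a, m b) -> m (a, b)
strength (ctx, mb) = fmap (\b -> (ctx, b)) mb

-- Moggi's "let x <= e1 in e2" is exactly monadic bind; the strength is
-- what justifies e2 mentioning variables bound outside the computation.
letM :: Monad m => m a -> (a -> m b) -> m b
letM = (>>=)

-- Example in the Maybe monad: the context 40 is carried alongside the
-- computed value x.
example :: Maybe Int
example = letM (Just 2) (\x -> strength (40, Just x) >>= \(c, y) -> Just (c + y))
```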

Okay, yay, monads!

Well, no, there's more to it than that. One thing that really jumped out at me when I was rereading this paper in the car was the methodical, elegant development that built up the internal language alongside the models by adding richer categorical constructions. It was really cute, but it's also an approach I very rarely see in the CS literature, even though to me it feels constructive and enlightening. I feel like that's the kind of approach I want to take with my own research once, y'know, I actually start publishing (*cough cough*). So, anyway, even if you already know the whole story of using monads in semantics, I think there's something to be said for sitting down and working through this paper. It's really well structured, which is probably a good part of why it was so influential.

Thursday, May 26, 2011

Paper of the day: Category Theoretic Solution of Recursive Domain Equations

Okay, this paper - Smyth & Plotkin's classic - was a reread, but it was something I needed to digest better than I did the first time. So what's the point of this paper?

Okay, so Dana Scott had this classic result where he built - as a colimit - a domain D∞ (think partially ordered set with some nice properties) that is isomorphic to the domain of continuous functions from D∞ to D∞, thus giving a domain-theoretic technique for building models of the untyped lambda calculus. Another way to put this is that D∞ is a fixed point of the functor F(D) = D -> D.
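Amusingly, since Haskell permits arbitrary recursive types, you can just write the fixed point of F(D) = D -> D down. Here's a miniature reflexive object; the `Base` case is my own addition so results can be observed - Scott's D∞ is the pure D ≅ (D -> D) story:

```haskell
-- A type (almost) isomorphic to its own function space. Base atoms are
-- added purely for observation and aren't part of Scott's construction.
data D = Fun (D -> D) | Base Int

app :: D -> D -> D
app (Fun f) x = f x
app b _ = b -- applying an atom just returns it, for illustration

-- Untyped lambda terms land in D:
identity :: D
identity = Fun (\x -> x)

selfApply :: D
selfApply = Fun (\x -> app x x)

unBase :: D -> Int
unBase (Base n) = n
unBase _ = error "not an atom"
```

Of course, Haskell gets to cheat: its types already live in a category of domains, which is exactly the rigid setting the Smyth & Plotkin paper wants to escape.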

Well, the problem is that the original construction is fairly rigid, taking place only in a particular category of domains. So the point of this paper is to generalize it to all categories and functors satisfying certain properties. What properties do we require?

Well, first we can generalize the notion of an omega chain from domain theory to category theory. It's basically the same thing if you read an arrow from object A to object B as saying "A ≤ B": a countable diagram of objects with an arrow leading from each object in the sequence to the next.

So then what's an omega-continuous functor? Well, it's a functor that maps colimits of omega chains to colimits of the omega chains induced by the action of the functor. It's basically the interchange of function and limit we're used to from domain theory. Yay!

So the paper establishes one thing right off the bat: if you have an omega-continuous functor on a category with an initial object and colimits of omega chains, then that functor has a fixed point. So the problem becomes figuring out when you have an omega-complete category.
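Spelled out, this is just the categorical version of Kleene iteration: start at the initial object 0 and iterate F along the unique arrow out of 0,

```latex
0 \xrightarrow{\;!\;} F0 \xrightarrow{\;F!\;} F^{2}0 \xrightarrow{\;F^{2}!\;} \cdots,
\qquad
D \;=\; \operatorname*{colim}_{n} F^{n}0 ,
```

and then omega-continuity of F does all the work:

```latex
FD \;=\; F\Bigl(\operatorname*{colim}_{n} F^{n}0\Bigr)
\;\cong\; \operatorname*{colim}_{n} F^{n+1}0
\;\cong\; D .
```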

However, requiring that the category we work in be omega-complete is actually a bit more restrictive than we want. Instead, we work in what are called O-categories, which are categories enriched in CPO. The bulk of the paper shows how most O-categories allow us to find the colimits we need in the subcategory of embeddings - which has nicer properties and is more readily omega-complete - and then pull those colimits back to the original category.
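The embeddings in question come paired with projections. Here's a little Haskell rendering of embedding-projection pairs (names and example mine; the laws are comments, not enforced):

```haskell
-- An embedding-projection pair between (pointed) domains a and b.
-- Laws (not checked by the types): project . embed = id, and
-- embed . project approximates the identity on b from below.
data EP a b = EP { embed :: a -> b, project :: b -> a }

-- e-p pairs compose, which is what gives the subcategory of embeddings:
compEP :: EP a b -> EP b c -> EP a c
compEP (EP e p) (EP e' p') = EP (e' . e) (p . p')

-- Example: a embeds into Maybe a. Projecting Nothing is bottom, which
-- is exactly what the domain-theoretic projection does to the new point.
epMaybe :: EP a (Maybe a)
epMaybe = EP Just (maybe (error "bottom") id)
```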

It's actually really cool to work through, although I feel like I haven't done the best job of explaining it! The math isn't terribly deep - mostly diagram chasing - so if you're at all familiar with category theory you can follow it with a bit of work. I'll confess, though, that the first time I tried to read it I sat scratching my head wondering what the point was. The point seems to be that by paring the D∞ construction down to its minimal requirements, you can get away with using considerably weaker categories than were considered before this paper.

Wednesday, May 25, 2011

Paper of the day: Lawvere Theories and Monads

Okay, so the stuff on logical relations still isn't sinking in today and I'm feeling pretty foolish. So let's get back onto my home territory: category theory. Specifically, a paper from a few years ago by Hyland & Power about Lawvere theories and their connection to computer science.

Okay, so first off, what's a Lawvere theory? Well, basically, it's a category with only one "base" object that generates all other objects via finite products of the base object with itself. Each such category (up to iso, obvs) corresponds to a single-sorted algebraic theory. There's a good explanation of them here that I recommend reading. In order to "get it" I had to work through their example of the theory of groups and sketch out the equivalence.

So, what's the connection to computer science? Well, first let's go back to monads. Err...real monads, not the burritos that appear in the over 9000 Haskell monad tutorials. In Moggi's work, monads were useful because they were - basically - generalized let statements. These let statements let us sequence effects in our computations, such as state, IO, etc., which is really useful for describing the semantics of a language! Okay, but what about the actual morphisms that perform the side effects? Well, they're left out in the cold, so to speak. They're called "non-proper" morphisms and are subject only to some naturality conditions. Now, to be fair, that's nothing to sneeze at, but it doesn't capture most of the equations you'd want: just the bare minimum to make sense.

Well, the neat thing is that if you describe side effects by their operations as some kind of Lawvere theory, you can recover the monad of side effects as a coend that kinda "rolls up" all the operations into the structure they operate on. That description wasn't very good, was it? I'll have to think of a better picture at some point. Now, since monads are a more general structure than Lawvere theories, you lose a bit of information - i.e. you're back to the less stringent equations on the operations - but it does mean that from the more rigid description we can recover the "generalized let" we want for sequencing.
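Here's the "operations determine the monad" story in miniature. Caveat lector: Hyland & Power's actual construction is a coend over the Lawvere theory, and the theory of state also imposes equations on get/put that this sketch ignores - the free monad on the bare signature is my cheap stand-in, with all names my own:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The signature of one mutable cell holding an s: a lookup operation
-- (with an s-indexed family of continuations) and an update operation.
data StateOp s k = Get (s -> k) | Put s k
  deriving Functor

-- Trees of operations: the free monad on the signature.
data Free f a = Pure a | Op (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Op t)   = Op (fmap (fmap g) t)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op t   <*> x = Op (fmap (<*> x) t)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op t   >>= k = Op (fmap (>>= k) t)

getOp :: Free (StateOp s) s
getOp = Op (Get Pure)

putOp :: s -> Free (StateOp s) ()
putOp s = Op (Put s (Pure ()))

-- Interpreting the operation trees as state transformers recovers the
-- familiar state monad - the "rolled up" structure.
run :: Free (StateOp s) a -> s -> (a, s)
run (Pure a)        s = (a, s)
run (Op (Get k))    s = run (k s) s
run (Op (Put s' k)) _ = run k s'
```

The "generalized let" falls out for free: `>>=` just grafts one operation tree onto the leaves of another, and `run` is where the sequencing of effects actually happens.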

Now here's where I get murky - I haven't worked any serious examples, so I don't entirely see how one would directly use this to describe the semantics of a programming language with side effects. I get conceptually that it's more specific, but I'm not entirely clear on how much it buys you and whether it's worth it. I probably need to read the half-dozen papers by Plotkin et al. on the topic. :-p

Now, okay, so I've told a little bit of a lie so far - if one does these constructions in "ordinary" category theory, this whole trick of recovering the monad from the Lawvere theory only really works over the category Set, which isn't terribly useful for programming languages that allow general recursion. Instead, we have to move over to enriched category theory, which I'm still wrapping my head around.

The basic point of enriched category theory is that rather than having sets of morphisms, one has hom-objects drawn from another monoidal closed category. The normal rules about composition still, basically, hold the way you'd expect on the hom-objects. Again, the nitty-gritty of what changes in enriched category theory is lost on me at the moment; however, I know there has been work rebuilding the Lawvere theory story in the enriched setting, and that this is what's actually used in the Plotkin et al. papers.

So, yeah, this was a bit longer than I intended but I had a lot to talk out with this paper.