Formal methods, denotational semantics, category theory, dependent types, rants about kyriarchy, and anything else relevant to my PhD.

Tuesday, May 24, 2011

Haven't posted for a few days because Barcamp was, while good, exhausting. Crowds are, honestly, not the easiest thing for me and it's taken a couple of days to feel rested and able to think creatively again.

Back to reading papers and working on my Agda conversion of Software Foundations! I may not have a paper to post about tonight, but starting tomorrow I'll have at least one.

Wednesday, May 18, 2011
Paper of the day: Ad-hoc!!
Okay, I decided to go easy on the writing today because I'm trying to digest other papers. So, yeah, I decided to read and write about the classic paper How To Make Ad-hoc Polymorphism Less Ad-hoc. There's not a ton of material to sink your teeth into these days, but I enjoyed the parts about explicitly translating typeclasses to dictionaries. What's neat to me is that for the original proposal of typeclasses, this really did describe a suitable, entirely equivalent semantics: you could think of overloading as syntactic sugar for dictionary building/passing. Haskell's typeclasses have evolved since then, though, and there are also the extensions my colleagues in Hasp have created for Habit, and I don't think a simple dictionary translation within Haskell/Habit would be good enough to capture what qualified types really can do. Maybe I'm wrong about that? I suppose I haven't really tried to write down such a translation, but I'm not particularly hopeful about the endeavor.
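The simplest case of the translation is easy enough to sketch, at least. Here's a minimal Haskell rendering of the dictionary-passing idea, with my own made-up names (EqDict, elem'), not Habit's actual scheme or the paper's notation:

```haskell
-- A sketch of the classic dictionary translation: the class becomes a
-- record of methods, and each overloaded function takes the record as
-- an explicit argument. (Toy example only.)

-- class Eq a where eq :: a -> a -> Bool
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- Instance declarations become dictionary values...
eqInt :: EqDict Int
eqInt = EqDict (==)

-- ...and instances with contexts become dictionary transformers.
eqList :: EqDict a -> EqDict [a]
eqList d = EqDict go
  where
    go []       []       = True
    go (x : xs) (y : ys) = eq d x y && go xs ys
    go _        _        = False

-- A qualified type `Eq a => a -> [a] -> Bool` becomes a plain function
-- type with an extra dictionary parameter.
elem' :: EqDict a -> a -> [a] -> Bool
elem' d x = any (eq d x)
```

The interesting question is exactly which of the fancier qualified-type features survive this kind of naive translation.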
A proper semantic framework for qualified types in general is something that's been bothering me for a while, though I don't think it's a question I'll be answering but rather a fellow grad student in Hasp. I look forward to seeing what he comes up with (and mechanizing it...)
Tuesday, May 17, 2011
Talking Things Out
Okay, so I'm starting what is essentially my second year of my PhD. Err...there's kinda personal reasons why I was only enrolled for one quarter this past academic year, but suffice it to say that things are resolved enough that I'm starting back up again.
I don't think I've explicitly said where I am and what I'm doing, though. So, to get things out of the way: I'm a student at Portland State University. You may not have heard of it. That's okay, I hadn't either, but I came here because of the HASP group, formerly the Programatica team that brought you things like House. Since my interests are in logic, formal methods, type theory, and all that, my piece in the research team is to work on defining the semantics of Habit, the language our team has designed and is building a certified compiler for. It's a Haskell-ish language that's strict, has a more interesting typeclass system, and is missing a number of type system extensions that, while interesting in terms of language design in Haskell, aren't really needed for Habit and would just make the semantics and compilation more annoying.
My background really isn't in computer science, it's mostly math and physics, and I didn't start programming until I was in my early 20s. So, I'm not the most typical candidate for a PhD in CS but that's how things go!
I used to be a bit intimidated by technical blogging, but after the past couple of years the idea of someone randomly showing up in my personal writing space and picking a fight with me just isn't scary anymore.
Paper(s) of the day: Building and Translating
Okay, so I'm going to write about two papers today. The first is Build, Augment, Destroy Universally and the second is Verifying Haskell Programs Using Constructive Type Theory, which I didn't even remember downloading. Slightly odd!
So the first paper is actually really cute. Most functional programming peeps know that initial algebras give you the basic semantics of folds for datatypes in a language where types are defined inductively*. This initial algebra semantics also gives you the convenient fusion laws that a compiler can use to improve performance. Now, the major contribution of the paper is to show that for (co)inductive types you can consider the initial algebra (terminal coalgebra) to also be a (co)limit of a forgetful functor, and that this gives you the build/destroy operators, which let you build an entirely different kind of fusion law! I think it's really cute and the math is pretty simple. I think I'd like to try writing some small interpreter with explicit build/fold/etc. keywords built in for the datatypes.
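To make the build side concrete, here's a sketch of the familiar foldr/build fusion for lists, the list-specific ancestor of what the paper generalizes. The names (build, upTo, the sum functions) are my own, and the fusion step is applied by hand rather than by a compiler rule:

```haskell
{-# LANGUAGE RankNTypes #-}

-- `build` abstracts a list over its constructors; the fusion law
--   foldr k z (build g) = g k z
-- lets a compiler remove the intermediate list entirely.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- A producer written against abstract constructors.
upTo :: Int -> [Int]
upTo n = build (\cons nil ->
  let go i | i > n     = nil
           | otherwise = cons i (go (i + 1))
  in go 1)

-- Without fusion: construct [1..n], then fold it away.
sumUpTo :: Int -> Int
sumUpTo n = foldr (+) 0 (upTo n)

-- With the fusion law applied by hand: no intermediate list is built.
sumUpToFused :: Int -> Int
sumUpToFused n =
  (\cons nil -> let go i | i > n     = nil
                         | otherwise = cons i (go (i + 1))
                in go 1) (+) 0
```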
The second paper is from back in 2005 and uses oldskool Agda, not Agda2, and came out pretty close to the paper I mentioned last night that uses coinduction to embed general recursive functions. Something I'm still turning around in my head is how it's okay for them to use Maybe as a monad for partiality when, as discussed in Capretta's paper, that's not really sound: it allows you to "decide" whether or not programs terminate, which means you can't really be talking about the computable functions. Maybe I'm just missing something? In any case, I need to look and see if people have tried to extend/update this translation for Agda2, since it actually has coinduction. One thing that bothers me a little is that I can't find where they explain their restrictions on which Haskell datatypes are allowed. Clearly they can't shallowly embed all Haskell types, because then you'd have a copy of the untyped lambda calc as a type in Agda, which is just a little unsound logically.
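To illustrate what bothers me (my own toy example, not the paper's): if every partial computation lands in Maybe, definedness becomes a decidable yes/no question, which real partiality is not:

```haskell
-- A partial function rendered with Maybe: Nothing marks undefinedness.
divPartial :: Int -> Int -> Maybe Int
divPartial _ 0 = Nothing
divPartial x y = Just (x `div` y)

-- But then this is a total "termination oracle" on such computations,
-- something no computable-functions model can actually provide.
isDefined :: Maybe a -> Bool
isDefined (Just _) = True
isDefined Nothing  = False
```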
There's some other things that I was thinking about and working on today - mostly picking up some pieces about the idea of a Pointed typeclass and thinking about how one would automatically translate a set of definitions for Haskell/Habit typeclasses into inductive universes in type theory. It seems a bit better than System F dictionaries...but that's just a gut feeling. I actually need to read about such translations a bit more.
Tomorrow will, hopefully, really be about logical relations. :-p
* Yes, in Haskell things are a bit conflated between induction and coinduction because initial algebras and terminal coalgebras are the same thing in most models of Haskell-ish systems in categories of complete partial orders.
Monday, May 16, 2011
Paper of the day: General Recursion Via Coinductive Types
Okay, so I read bits and pieces of a number of papers today - a bit of a draft by Conor McBride, the recent arrows papers by Lindley et al (just cuz...not because they're relevant to my research at all), one of the first containers papers by Abbott et al, reread Build Augment Destroy - but actually read all of this cool-sounding paper by Venanzio Capretta, General Recursion Via Coinductive Types. So, yeah, I think I'll talk about that instead as my Paper O' Teh Day.
So, basically, this paper describes in great length the trick for actually describing the lifting monad constructively in type theory. Basically, the idea is that you use a coinductive type to represent partial computations. You can then show that you can embed the partial recursive functions into your type theory as functions on the lifted type. I feel like that was a really crappy way to describe that, so let me try again.
So in order to have a logically useful type theory you probably want to restrict yourself to only normalizing terms (...you don't have to, by the way, but implementation gets interesting). General recursive functions, naively encoded, aren't necessarily normalizing. What to do!? Well, you use coinduction to describe a potentially non-terminating function by the explicit process of stepping through the computation...and since this is coinduction, there can be an infinite number of steps!
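In Haskell, where laziness stands in for coinduction, the type Capretta uses (these days usually called the Delay monad) can be sketched like this; the names runFor and never are mine:

```haskell
-- A coinductive partiality type: a computation either finishes Now or
-- takes one more observable step. In a total type theory this would be
-- a codatatype; Haskell's laziness gives us the same infinite values.
data Delay a = Now a | Later (Delay a)

-- Run for at most n steps; Nothing means "didn't terminate yet",
-- which is all you can ever decide about a partial computation.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now x)   = Just x
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d

-- A general recursive function as an explicit stepping process.
collatz :: Integer -> Delay Integer
collatz 1 = Now 1
collatz n
  | even n    = Later (collatz (n `div` 2))
  | otherwise = Later (collatz (3 * n + 1))

-- Genuine non-termination is a perfectly legal value of the type.
never :: Delay a
never = Later never
```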
So, basically, that's how you encode general recursion into it. There's more details related to how the Y-combinator works that I feel like I haven't entirely grokked, but I want to try and implement some of this structure in Agda. What's nice is that once you have your encoding of the computable functions this way, you can start building up some actual domain theory. Which I have just a tiny bit of interest in, coincidentally!
Okay, so for tomorrow I should finish the build augment destroy paper and the initial semantics is enough paper. I kinda want to write a little implementation of a small language that has actual fold/build/etc. keywords as a proof of concept for something I want to propose for the Habit language.
Sunday, May 15, 2011
Paper(s) of the day: Bananas
Okay, so as I said I would, I ended up reading Bananas, Envelopes, and Barbed Wire as well as Bananas In Space.
Wow, so I don't think the original paper has aged well in some ways. The notation and equations are very different than any other way I've seen the material presented, and it's no wonder I feel like I didn't get much out of this paper five years ago when I was just learning category theory and knew essentially no computer science ("Oh, so you know computer science now?" "Stop that, Caylee!"). So, I read Freyd's papers on fixed points of functors a lot more recently and I think they're clearer if much harder to find. I've never found them available on the Interwebz, but I grabbed copies from my university library. There are no category theorists at Portland State, but we have copies of the category theory journals. Go figure!
So, anyway, the original paper is cute from a historic perspective as it shows how natural fusion laws from the category of sets and functions can be extended to the category of CPOs with some strictness side conditions. It basically says, but doesn't show with commuting diagrams, that these laws come from universal mapping properties of limits and colimits.
The Bananas in Space paper is really cute though because it shows how to describe these constructions as Haskell (well, Gofer...) code. The punchline is giving a demonstration of Freyd's difunctors as code. Which, honestly, makes sense if you think of Haskell programs as existing in some subcategory of CPO that's clearly going to be enriched in the ways that Freyd's work needs. I mean, clearly, that's not really rigorous math but it's a really cute example of the construction.
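For flavor, here's roughly what that looks like in modern Haskell rather than Gofer; the class name Difunctor, the method dimap, and the surrounding names are my own rendering, not the paper's code:

```haskell
-- A sketch of Freyd-style mixed-variance functors: contravariant in
-- the first argument, covariant in the second. This is what you need
-- to take fixed points of type expressions with embedded function
-- spaces, where the recursive type occurs both negatively and
-- positively.
class Difunctor f where
  dimap :: (a' -> a) -> (b -> b') -> f a b -> f a' b'

-- The signature of a reflexive domain D ~ (D -> D), with the negative
-- and positive occurrences of D split into separate parameters.
newtype FunF a b = FunF (a -> b)

instance Difunctor FunF where
  dimap pre post (FunF f) = FunF (post . f . pre)

-- Tying the knot: the fixed point identifies the two arguments.
newtype Fix f = In { out :: f (Fix f) (Fix f) }

-- D ~ D -> D: untyped-lambda-calculus-ish values as Haskell values.
type D = Fix FunF

apply :: D -> D -> D
apply (In (FunF f)) = f

identityD :: D
identityD = In (FunF id)
```

This compiles because Haskell happily takes fixed points of negative occurrences; in a total type theory that's exactly the step you'd have to justify with Freyd's machinery.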
So what did I really glean from these papers? Well, I guess I didn't pick up stuff that I hadn't already known about. I suppose one thing I'm wondering about is why I haven't seen more systems for reasoning about programming languages using equational methods like these. I have seen papers talking about how you can do it, but I haven't seen many mechanized systems for doing the work and proving serious properties of programs. I've wondered before if that's because most of the languages we'd want to reason about don't have a formal semantics.
I think I just made an argument for why I need to get back to working on a semantics for Habit...*scurry scurry*
Tomorrow will be reviewing Plotkin's generalization of the D-infinity construction as well as Pitts's original paper on logical relations.
Computation & Cardinality
So about a month ago I spoke at code'n'splode, a women-focused tech group. Yes, women-focused as in "primarily for women" but men can attend when given the capability. Of course, like any proper capability system revocation is supported.
My goal was to start with the cardinality of sets and get through Church encodings of common data types in the untyped lambda calculus.
I think the talk was, for the most part, a success. I think I lost about half the room by the end, but given that I didn't get to give a practice talk and that my dying laptop prevented me from using the interpreters I wrote for examples I think it went fairly well.
One of the main things I hope everyone understood right from the beginning is that the "set" of all mathematical functions is absurdly, ridiculously, mind-bogglingly bigger than the set of computable functions. There are only countably many computable functions, but the set of all functions from the naturals to the naturals is already the size of the real numbers.
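The Church-encoding half of the talk is easy to sketch in Haskell, even though the talk itself worked in the untyped lambda calculus; the names here (Church, succC, toInt) are mine:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral is just "apply a function n times".
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ z = z

succC :: Church -> Church
succC n s z = s (n s z)

-- Addition: apply s "m times", then "n times" more.
add :: Church -> Church -> Church
add m n s z = m s (n s z)

-- Multiplication: "m times" the function that applies s "n times".
mul :: Church -> Church -> Church
mul m n s = m (n s)

-- Church booleans just pick one of two alternatives.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- Decode back to an ordinary Int for display.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```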
In that light, computer science seems like an awesome - in the old sense of the word - pursuit!