There is a tension between being too strongly (statically) typed and too weakly (statically) typed. In this post, I show how to use existential types to “derive” untyped syntax from typed syntax. This allows you to use strongly typed syntax and selectively escape to an untyped world, e.g. to do parsing or non-local transformations, without needing to maintain two different syntax definitions. This solution arose from discussions with Matthew Farkas-Dyck, Daniel Winograd-Cort, and Moritz Drexl; the problem was originally posed to me by Daniel King.
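A minimal sketch of the idea (the names here are illustrative, not taken from the post): a typed expression GADT, with the “untyped” syntax derived by existentially hiding the type index alongside a runtime type representation.

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE ExistentialQuantification #-}

-- A runtime representation of the types that can index expressions.
data Ty a where
  TInt  :: Ty Int
  TBool :: Ty Bool

-- Strongly typed syntax: ill-typed terms are unrepresentable.
data Expr a where
  Lit :: Int -> Expr Int
  Add :: Expr Int -> Expr Int -> Expr Int
  IsZ :: Expr Int -> Expr Bool

-- The untyped syntax is derived rather than duplicated: an
-- expression of *some* type, packaged with evidence of that type.
data UExpr = forall a. UExpr (Ty a) (Expr a)

-- A parser or non-local transformation can work in UExpr and
-- recover typed syntax by matching on the type representation.
asInt :: UExpr -> Maybe (Expr Int)
asInt (UExpr TInt e) = Just e
asInt _              = Nothing
```

Matching on the `Ty a` witness is what lets us re-enter the typed world after escaping it.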
In this post I’m going to discuss a problem that I ran into while working with shape-indexed types in Haskell, and an elegant solution that is inspired by dependent type theory. The problem is implementing functions that produce values of shape-indexed types without accepting a value to do case analysis on in order to refine the index. A simple example arises when implementing an Applicative instance for length-indexed vectors.
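A minimal sketch of the problem (illustrative names, not the post’s code): `pure` must build a vector of length `n`, but there is no vector argument to pattern-match on, so the length must be recovered from the type itself via a singleton-style class.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

data Nat = Z | S Nat

-- Length-indexed vectors.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Reify the type-level length as term-level recursion: each
-- instance tells us how to build a vector of that length.
class KnownLength n where
  replicateV :: a -> Vec n a

instance KnownLength 'Z where
  replicateV _ = Nil

instance KnownLength n => KnownLength ('S n) where
  replicateV x = Cons x (replicateV x)
```

With `replicateV` in hand, `pure = replicateV` for the `Applicative` instance, and `<*>` follows by zipping the two vectors pointwise.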
One of the biggest complaints about proof assistants such as Coq is performance. Large proofs can be slow both to construct and to check, which reduces the “interactive” nature of interactive proof assistants and makes maintenance painful. However, while the richness of dependent type theory is partially to blame for this slowness, it also allows us to use computational reflection to extend the proof checker to build and check proofs quickly. In computational reflection, we convert proof objects into computations and run them on Coq’s bytecode virtual machine to achieve orders-of-magnitude improvements in proof construction and checking. In this talk I walk through how computational reflection works using a few small examples. Time permitting, I will discuss how these ideas can be used to build larger-scale reflective automation (the MirrorCore project), including an Ltac-like DSL for reflective automation that provides some of the speed benefits of computational reflection without some of the drawbacks.
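A small illustration of the idea (assumed here, not taken from the talk): instead of building a large proof term for a concrete fact, we prove a decision procedure sound once, and then each instance is discharged by running the procedure on the VM.

```coq
(* A boolean decision procedure for evenness. *)
Fixpoint evenb (n : nat) : bool :=
  match n with
  | 0 => true
  | 1 => false
  | S (S n') => evenb n'
  end.

(* The soundness lemma is proved once, by strong induction
   (proof omitted in this sketch). *)
Lemma evenb_sound : forall n, evenb n = true -> exists k, n = 2 * k.
Admitted.

(* Now a concrete instance costs one VM computation, not a proof
   term whose size grows with the number. *)
Goal exists k, 1000 = 2 * k.
Proof. apply evenb_sound. vm_compute. reflexivity. Qed.
```

The key trade: the checker evaluates `evenb 1000` on the bytecode VM rather than checking a term built from 500 lemma applications.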
Coq provides two ways to finish definitions: Qed and Defined. The former is meant for “proofs” and makes the definition opaque, while the latter is meant for “definitions” and leaves the definition transparent. The “proof” versus “definition” dichotomy initially makes a lot of sense to users, but as they start to use dependent types more and more, they realize that the distinction is quite arbitrary. In that case, should we abandon Qed? In this post, I argue that yes, abandoning Qed is a perfectly sensible thing to do.
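A minimal illustration of the difference (assumed for this sketch, not from the post): two identical definitions, finished differently, behave very differently under computation.

```coq
(* The same term, ended with Defined (transparent) vs Qed (opaque). *)
Definition two_t : nat. Proof. exact 2. Defined.
Definition two_o : nat. Proof. exact 2. Qed.

(* The transparent one computes... *)
Goal two_t + 2 = 4.
Proof. reflexivity. Qed.

(* ...the opaque one does not: [two_o] will not unfold, so
   conversion fails. With dependent types, such terms routinely
   appear inside other types, so opacity can block type checking
   itself, not merely convenience. *)
Goal two_o + 2 = 4.
Proof. Fail reflexivity. Abort.
```

This is why the proof/definition distinction breaks down: once “proofs” start appearing in types, making them opaque has real consequences.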
gmalecha at gmail
Mullings on computer science and math: programming languages, compilers, and program analysis.