Just What Is It That Makes Martin-Löf's Type Theory So Different, So Appealing?

Neil Leslie

(With apologies to both Per Martin-Löf and Richard Hamilton.)
Class. Here come our friends Form and Letter. Boys, we are having a most
interesting discussion on intuitionism.
Letter. Could you speak about anything else with good old Int? He is
completely submerged in it.
Int. Once you have been struck by the beauty of a subject devote your life
to it!
Form. Quite so! Only I wonder how there can be beauty in so indefinite
a thing as intuitionism. None of your terms are well-defined, nor do you
give exact rules of derivation. Thus one for ever remains in doubt as to
which reasonings are correct and which are not.
A. Heyting, Intuitionism: An Introduction.
1 Introduction
Martin-Löf’s type theory (M-LTT) was developed by Per Martin-Löf in the
early 1970s, as a constructive foundation for mathematics.
The theory is foundational.
Because the theory is foundational we should not expect a formal explanation of it. If we were to give a formal explanation of the theory then we would be appealing to another theory, and that theory would then be the better candidate to use as a foundation.
Other key features are:
The theory is open-ended, in the sense that we are free to add new
types.
The theory is clear and simple. Because of this it has been used for
everything from program derivation to handling donkey sentences.
We will write
a:A
for the judgement that a is a proof (witness) of the proposition A.
We can give alternative readings of this judgement. The two most obvi-
ous are:
a has type A;
a is a program which meets the specification A.
Having made this very strong identification of types, propositions and
specifications we shall not worry about, for example, what proposition the
type of natural numbers is.
An important point to note (for computational purposes anyway) is that
proofs are programs. These programs are expressed in a lazy functional
language. The semantics we give for the theory is an operational one (it's a constructive theory, so we assume that we know what computation is about before we start!) and laziness is quite important.
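Since the proofs live in a lazy functional language, a small Haskell sketch may make the reading of a : A concrete. This is only an illustration under the propositions-as-types reading; the name proofK and the use of Haskell itself are assumptions of this note, not part of the theory.

  -- Under propositions-as-types, "A implies (B implies A)" is read as the
  -- type a -> (b -> a), and any program of that type is a proof of it.
  proofK :: a -> (b -> a)
  proofK x _ = x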
4.1 Aritied expressions
We need to be able to write expressions down formally. We use a syntactic
theory of aritied expressions. The arities just tell us how to plug bits of
syntax together. Each variable and (primitive) constant has an arity.
0 is an arity, and if α and β are arities then so are α⊗β and α→β.
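The following Haskell datatype is only a hedged sketch of the idea that arities say how bits of syntax plug together; the constructor and function names are invented for illustration.

  -- A sketch of arities: O for saturated expressions, Times for combined
  -- expressions, Arrow for expressions awaiting an argument.
  data Arity
    = O
    | Times Arity Arity
    | Arrow Arity Arity
    deriving (Eq, Show)

  -- Application only "plugs together" when the arities fit.
  applyArity :: Arity -> Arity -> Maybe Arity
  applyArity (Arrow a b) a' | a == a' = Just b
  applyArity _ _                      = Nothing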
For each type we give the following kinds of rules:

formation rule;
introduction rules;
computation rules;
elimination rule;
equality rules.
The rules are written in a natural deduction format, and we have skipped
a great deal of material to get here.
5 Digression
At this point we should have a long and detailed discussion about proof-
theoretic justification of logical laws. Sadly, however we do not have the
time to do this. We shall simply assert that we are using a Prawitz-Dummett style, proof-theoretic meaning theory, and make a few minor comments
here. It is worth noting that Dummett starts off by considering what an adequate theory of meaning might be; having constructed such a thing, we find that constructive logic comes out naturally. Dummett does not set out to justify claims for a special place for constructive logic. However, he seems quite happy with the revisionist aspect of his work. Somewhere or other he writes:
“The prejudices of mathematicians do not constitute an ar-
gument, however: the important question for us is whether con-
structive mathematics is adequate for applications.”
This is all very well, but we must avoid tonk. The rule for tonk intro-
duction is:
       A
  -----------  Tonk I
   A tonk B

Rule 1: Tonk Intro
and the rule for tonk elimination is:

   A tonk B
  -----------  Tonk E
       B

Rule 2: Tonk Elim
Combining the two rules gives a proof of A ⊃ B for arbitrary A and B:

     [A]¹
  -----------  Tonk I
   A tonk B
  -----------  Tonk E
       B
  -----------  ⊃ I, 1
     A ⊃ B

Proof 1: A ⊃ B
We are rescued at this point (I believe) by the computation rules. One
way to look at tonk is to consider it as allowing us to write non-normalisable
proofs. Since we know that proof normalisation and program evaluation
(computation) are intimately connected, we should not be surprised that
the computation rules are what rescues us.
Thus for each connective we will have computation rules as well as introduction and elimination rules.

6 The natural numbers

We illustrate the pattern with the type Nat of natural numbers. The formation rule is:
  -----------  Nat formation
   Nat type

Rule 3: Nat formation

  ------------  Nat Intro
   zero : Nat

Rule 4: Nat Intro
and:
      n : Nat
  ----------------  Nat Intro
   succ(n) : Nat

Rule 5: Nat Intro
Rules 4 and 5 tell us that zero is a canonical element of Nat, and that if n is an element of Nat then succ(n) is a canonical element of Nat. We are lazy here, as n does not have to be a canonical element of Nat.
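A Haskell sketch of this laziness (Haskell and the names below are assumptions of this note, not from the text): the outermost constructor alone makes a value canonical.

  data Nat = Zero | Succ Nat

  isZero :: Nat -> Bool
  isZero Zero     = True
  isZero (Succ _) = False

  -- Succ undefined is already a canonical (weak head normal form) value:
  -- we can see that it is a successor without evaluating its argument,
  -- just as Rule 5 only requires n : Nat, not that n be canonical.
  lazyExample :: Bool
  lazyExample = isZero (Succ undefined)   -- False, without forcing the argument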
The rules for introducing equal canonical elements are:
  --------------------  Nat Intro
   zero = zero : Nat

Rule 6: Nat Intro

and:

        n = m : Nat
  --------------------------  Nat Intro
   succ(n) = succ(m) : Nat

Rule 7: Nat Intro
Typically, when we present rules for a type we do not present the rules
for introducing equal canonical elements as these are usually obvious.
How do we compute with a natural number? We use structural recur-
sion, because this reflects the way that we have (inductively) defined the
type. It is worth noting that all functions in M-LTT are total, and that typ-
ing implies termination.
We call the structural recursion operator for natural numbers natrec,
and it has the following computation rules:
   n → zero      d → d′
  ------------------------  natrec comp
   natrec(d; e; n) → d′

and:

   n → succ(m)      e(m; natrec(d; e; m)) → e′
  -----------------------------------------------  natrec comp
              natrec(d; e; n) → e′
We call natrec the non-canonical constant associated with the natural num-
bers.
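A Haskell transcription of the two computation rules may help; it is only a sketch, with natrec's argument order following the text's natrec(d; e; n), and plus invented as an example.

  data Nat = Zero | Succ Nat   -- as in the earlier sketch

  -- If n evaluates to zero the result is d; if it evaluates to succ(m) the
  -- result is e applied to m and the recursive value natrec(d; e; m).
  natrec :: c -> (Nat -> c -> c) -> Nat -> c
  natrec d _ Zero     = d
  natrec d e (Succ m) = e m (natrec d e m)

  -- Addition by structural recursion on the first argument.
  plus :: Nat -> Nat -> Nat
  plus n k = natrec k (\_ rec -> Succ rec) n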
The elimination rule is now, effectively, a typing rule for natrec(d; e; n), and it is:

              [x : Nat]                   [y : Nat, z : C(y)]
   n : Nat    C(x) type    d : C(zero)    e(y; z) : C(succ(y))
  --------------------------------------------------------------  Nat elim
                    natrec(d; e; n) : C(n)

Rule 10: Nat elim
The equality rules for natrec are:

   [x : Nat]
   C(x) type    d : C(zero)
  -----------------------------------  natrec eq 1
   natrec(d; e; zero) = d : C(zero)

Rule 11: natrec eq 1
and:
              [x : Nat]                   [y : Nat, z : C(y)]
   n : Nat    C(x) type    d : C(zero)    e(y; z) : C(succ(y))
  ---------------------------------------------------------------  natrec eq 2
   natrec(d; e; succ(n)) = e(n; natrec(d; e; n)) : C(succ(n))

Rule 12: natrec eq 2
Again we typically don't give the rules for equality between expressions formed by the non-canonical constant because they are usually so straightforward.
For the natural numbers we have presented a formation rule, introduction rules and computation rules, and then presented an elimination rule, which we justified by reference to the introduction and computation rules. This is the pattern that we follow for all types.
7 “Logical” types
In this section we will present the rules for the types which we can interpret as logical connectives.
7.1 + types
The formation rule for the + types is:
   A type    B type
  -------------------  + Form
     A + B type

Rule 13: + Form

        a : A
  -----------------  + Intro L
   inl(a) : A + B

Rule 14: + Intro L

        b : B
  -----------------  + Intro R
   inr(b) : A + B

Rule 15: + Intro R
The non-canonical constant associated with + is called when, and its computation rules are:

   f → inl(l)      d(l) → d′
  -----------------------------  when Comp
       when(d; e; f) → d′

and:

   f → inr(r)      e(r) → e′
  -----------------------------  when Comp
       when(d; e; f) → e′
The elimination rule for + is:
                 [z : A + B]    [x : A]              [y : B]
   f : A + B     C(z) type      d(x) : C(inl(x))     e(y) : C(inr(y))
  ----------------------------------------------------------------------  + Elim
                        when(d; e; f) : C(f)

Rule 18: + elim
                           [x : A]       [y : B]
   f : A + B    C prop     d(x) : C      e(y) : C
  ---------------------------------------------------  + Elim Prop
                  when(d; e; f) : C

Rule 19: + elim prop
Apart from the premiss C prop, and the decoration with the formal proof objects, this is just the rule for ∨ elimination. Rules 14 and 15 are just the introduction rules for ∨. Hence we make the definition:

   A ∨ B =def A + B

The other connectives are handled in a similar fashion.
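As a rough Haskell analogue (an assumption of this note, not part of the theory): the + type behaves like Either, inl and inr like Left and Right, and when like the standard function either.

  -- when(d; e; f), up to argument order, is the library function `either`.
  when' :: (a -> c) -> (b -> c) -> Either a b -> c
  when' d _ (Left l)  = d l
  when' _ e (Right r) = e r

  -- Reading A ∨ B as Either a b, a proof of (A ∨ B) ⊃ (B ∨ A):
  orComm :: Either a b -> Either b a
  orComm = when' Right Left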
7.2 Σ types

If B is a family of types over A we can form Σ(A; B):

              [x : A]
   A type    B(x) type
  ------------------------  Σ Form
      Σ(A; B) type

Rule 20: Σ Form

The values in Σ(A; B) are pairs:

             [x : A]
   a : A    b(x) : B(x)
  ------------------------  Σ I
   pair(a; b) : Σ(A; B)

Rule 21: Σ Intro
The non-canonical constant associated with the Σ types is called split and its computation rule is:

   c → pair(a; b)      d(a; b) → d′
  ------------------------------------  split Comp
          split(d; c) → d′
The elimination rule is:

                   [z : Σ(A; B)]    [x : A, y : B(x)]
   c : Σ(A; B)     C(z) type        d(x; y) : C(pair(x; y))
  ------------------------------------------------------------  Σ Elim
                      split(d; c) : C(c)

Rule 23: Σ elim
The elimination rule for the Σ types introduces a new feature we have not yet seen, the “hypothetical assumption”.
If we follow the same reasoning that allowed us to define ∨ we see that we can make these definitions:

   (∃x : A) B(x) =def Σ(A; B)

   A & B =def Σ(A; B)

(in the second definition B does not depend on x : A).
From the Σ elimination rule we can derive elimination rules for &:

                 [x : A, y : B]
   c : A & B     d(x; y) : A
  --------------------------------  & Elim l
         split(d; c) : A

Rule 25: & elim l

In particular:

       c : A & B
  --------------------------  & Elim l
   split((a; b)a; c) : A

Rule 26: & elim l
We can define:

   snd(p) =def split((a; b)b; p)

and recover the other “usual” elimination rule for &.
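In Haskell, ignoring dependency (so B is a constant family), Σ(A; B) is just the pair type and split is essentially uncurry; the primed names below are invented.

  -- split(d; c) for the non-dependent case: the canonical value is a pair,
  -- and split feeds its components to d.
  split' :: (a -> b -> c) -> (a, b) -> c
  split' d (a, b) = d a b

  -- The definitions from the text: fst(p) = split((a; b)a; p),
  -- snd(p) = split((a; b)b; p).
  fst' :: (a, b) -> a
  fst' = split' (\a _ -> a)

  snd' :: (a, b) -> b
  snd' = split' (\_ b -> b)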
7.3 Π types

If B is a family of types over A we can form Π(A; B):

              [x : A]
   A type    B(x) type
  ------------------------  Π Form
      Π(A; B) type

Rule 27: Π Form

The values in Π(A; B) are functions:

      [x : A]
    b(x) : B(x)
  ------------------  Π I
   λ(b) : Π(A; B)

Rule 28: Π Intro
Here we see function abstraction, λ, which allows us to form functions. This is a different notion from the syntactic abstraction that we have been using.
The rule for introducing equal values of Π(A; B) is:

         [x : A]
   b(x) = c(x) : B(x)
  -------------------------  Equal I
   λ(b) = λ(c) : Π(A; B)

Rule 29: Equal Intro
The non-canonical constant associated with the Π types is called funsplit, and its computation rule is:

   f → λ(b)      e(b) → e′
  ----------------------------  funsplit Comp
      funsplit(e; f) → e′

Notice that this computation rule is uniform with the other ones we have given. It is not the rule for using apply, which is this rule:

   f → λ(b)      b(x) → b′
  ----------------------------  apply Comp
       apply(f; x) → b′
The elimination rule is:

                   [x : Π(A; B)]    [y(z) : B(z) [z : A]]
   f : Π(A; B)     C(x) type        e(y) : C(λ(y))
  -----------------------------------------------------------  Π Elim
                     funsplit(e; f) : C(f)

Rule 32: Π elim
Once again a little thought will convince us that we can define:

   (∀x : A) B(x) =def Π(A; B)

   A ⊃ B =def Π(A; B)
Modus ponens comes out as a case of the elimination rule.
Again this is in contrast with the situation in classical logic, where ∀ is seen as a generalisation of &.
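Again ignoring dependency, a Haskell sketch of funsplit, apply and modus ponens (the names are invented; identifying Π(A; B) with the ordinary function type is an approximation).

  -- funsplit(e; f): the only canonical element of a function type is a
  -- lambda, so funsplit simply hands the function to e.
  funsplit' :: ((a -> b) -> c) -> (a -> b) -> c
  funsplit' e f = e f

  -- apply(f; x), and modus ponens as a special case of ⊃ elimination.
  apply' :: (a -> b) -> a -> b
  apply' f x = f x

  modusPonens :: a -> (a -> b) -> b
  modusPonens x f = apply' f x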
8 The empty type

The empty type {} has no introduction rules: it has no canonical elements. Its formation rule is:

  ------------  Empty form
    {} type

Rule 33: Empty form
The computation rules we have seen so far all follow a common pattern: one premise, a → xᵢ, gives the canonical value of the principal argument, and another, dᵢ(xᵢ) → d′, gives the result. The computation rule for case{} breaks this pattern slightly. The constant case{} takes one argument and has one computation rule. We need to suppose that ω is a hypothetical value of the empty type. Then the computation rule tells us that if we could evaluate a to ω then we could evaluate case{}(a) to anything:

   a → ω      d(ω) → d′
  -------------------------  case{} comp
       case{}(a) → d′
The elimination rule is:

              [x : {}]
   a : {}     C(x) type
  -------------------------  {} elim
      case{}(a) : C(a)

Rule 36: {} elim
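A Haskell analogue of the empty type is a datatype with no constructors (the names Empty and caseEmpty are invented; the standard library's Data.Void calls the eliminator absurd): with no canonical elements, the case analysis has no branches.

  {-# LANGUAGE EmptyCase #-}

  -- The empty type: no constructors, hence no canonical elements.
  data Empty

  -- case{}(a) : C(a): there is nothing to match, so no branches.
  caseEmpty :: Empty -> c
  caseEmpty a = case a of {}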
Now we can define:

   ⊥ =def {}

and then define negation:

   ¬A =def A ⊃ ⊥

We can now prove A ⊃ ¬¬A. The proof looks like this:
   [A]¹    [A ⊃ ⊥]²
  ---------------------  Modus Ponens
            ⊥
  ---------------------  ⊃ I, 2
      (A ⊃ ⊥) ⊃ ⊥
  ---------------------  ⊃ I, 1
        A ⊃ ¬¬A

Proof 2: A ⊃ ¬¬A
Decorating this proof with proof objects we obtain:

   [x : A]¹    [f : A ⊃ ⊥]²
  -----------------------------------------  Modus Ponens
            apply(f; x) : ⊥
  -----------------------------------------  ⊃ I, 2
   λ((f) apply(f; x)) : (A ⊃ ⊥) ⊃ ⊥
  -----------------------------------------  ⊃ I, 1
   λ((x) λ((f) apply(f; x))) : A ⊃ ¬¬A
Had we used the “basic” elimination rule (Rule 32) rather than apply, we would have obtained a proof object built from funsplit instead.
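For comparison, a Haskell rendering of Proof 2 under the reading ⊥ = Empty and ¬A = A -> Empty (a sketch; the names are invented): the proof object becomes an ordinary program.

  data Empty                   -- ⊥: a type with no canonical elements

  type Not a = a -> Empty      -- ¬A =def A ⊃ ⊥

  -- A ⊃ ¬¬A: the counterpart of λ((x) λ((f) apply(f; x))).
  doubleNegIntro :: a -> Not (Not a)
  doubleNegIntro x f = f x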
9 Summary
In this note we have attempted to present and give a flavour of Martin-
Löf’s Type Theory. We have tried to highlight the use of proof theoretical
justification of the rules, without explaining this in full detail. We presented
the rules for various types, trying to emphasise the uniformity of the pre-
sentation. The types that we presented included those needed to express
the logical constants ⊥, ∨, &, ⊃, ∃, and ∀.