Sets, Logic, Computation
An Open Logic Text

Remixed by Richard Zach

Winter 2017

The Open Logic Project

Instigator
Richard Zach, University of Calgary

Editorial Board
Aldo Antonelli,† University of California, Davis
Andrew Arana, Université Paris I Panthéon-Sorbonne
Jeremy Avigad, Carnegie Mellon University
Walter Dean, University of Warwick
Gillian Russell, University of North Carolina
Nicole Wyatt, University of Calgary
Audrey Yap, University of Victoria

Contributors
Samara Burns, University of Calgary
Dana Hägg, University of Calgary
The Open Logic Project would like to acknowledge the generous
support of the Faculty of Arts and the Taylor Institute of Teaching
and Learning of the University of Calgary.

This resource was funded by the Alberta Open Educational Resources
(ABOER) Initiative, which is made possible through an investment
from the Alberta government.

Illustrations by Matthew Leadbeater, used under a Creative Commons
Attribution-NonCommercial 4.0 International License.

Typeset in Baskervald X and Universalis ADF Standard by LaTeX.

Sets, Logic, Computation by Richard Zach is licensed under a
Creative Commons Attribution 4.0 International License. It is based
on The Open Logic Text by the Open Logic Project, used under a
Creative Commons Attribution 4.0 International License.
Contents

Preface

I Sets, Relations, Functions

1 Sets
1.1 Basics
1.2 Some Important Sets
1.3 Subsets
1.4 Unions and Intersections
1.5 Proofs about Sets
1.6 Pairs, Tuples, Cartesian Products
1.7 Russell's Paradox
Summary
Problems

2 Relations
2.1 Relations as Sets
2.2 Special Properties of Relations
2.3 Orders
2.4 Graphs
2.5 Operations on Relations
Summary
Problems

3 Functions
3.1 Basics
3.2 Kinds of Functions
3.3 Inverses of Functions
3.4 Composition of Functions
3.5 Isomorphism
3.6 Partial Functions
3.7 Functions and Relations
Summary
Problems

4 The Size of Sets
4.1 Introduction
4.2 Countable Sets
4.3 Uncountable Sets
4.4 Reduction
4.5 Equinumerous Sets
4.6 Comparing Sizes of Sets
Summary
Problems

II First-order Logic

5 Syntax and Semantics
5.1 Introduction
5.2 First-Order Languages
5.3 Terms and Formulas
5.4 Unique Readability
5.5 Main operator of a Formula
5.6 Subformulas
5.7 Free Variables and Sentences
5.8 Substitution
5.9 Structures for First-order Languages
5.10 Covered Structures for First-order Languages
5.11 Satisfaction of a Formula in a Structure
5.12 Extensionality
5.13 Semantic Notions
Summary
Problems

6 Theories and Their Models
6.1 Introduction
6.2 Expressing Properties of Structures
6.3 Examples of First-Order Theories
6.4 Expressing Relations in a Structure
6.5 The Theory of Sets
6.6 Expressing the Size of Structures
Summary
Problems

7 Natural Deduction
7.1 Introduction
7.2 Rules and Derivations
7.3 Examples of Derivations
7.4 Proof-Theoretic Notions
7.5 Properties of Derivability
7.6 Soundness
7.7 Derivations with Identity predicate
7.8 Soundness of Identity predicate Rules
Summary
Problems

8 The Completeness Theorem
8.1 Introduction
8.2 Outline of the Proof
8.3 Maximally Consistent Sets of Sentences
8.4 Henkin Expansion
8.5 Lindenbaum's Lemma
8.6 Construction of a Model
8.7 Identity
8.8 The Completeness Theorem
8.9 The Compactness Theorem
8.10 The Löwenheim-Skolem Theorem
Summary
Problems

9 Beyond First-order Logic
9.1 Overview
9.2 Many-Sorted Logic
9.3 Second-Order logic
9.4 Higher-Order logic
9.5 Intuitionistic Logic
9.6 Modal Logics
9.7 Other Logics

III Turing Machines

10 Turing Machine Computations
10.1 Introduction
10.2 Representing Turing Machines
10.3 Turing Machines
10.4 Configurations and Computations
10.5 Unary Representation of Numbers
10.6 Halting States
10.7 Combining Turing Machines
10.8 Variants of Turing Machines
10.9 The Church-Turing Thesis
Summary
Problems

11 Undecidability
11.1 Introduction
11.2 Enumerating Turing Machines
11.3 The Halting Problem
11.4 The Decision Problem
11.5 Representing Turing Machines
11.6 Verifying the Representation
11.7 The Decision Problem is Unsolvable
Summary
Problems

A Induction
A.1 Introduction
A.2 Induction on N
A.3 Strong Induction
A.4 Inductive Definitions
A.5 Structural Induction

B Biographies
B.1 Georg Cantor
B.2 Alonzo Church
B.3 Gerhard Gentzen
B.4 Kurt Gödel
B.5 Emmy Noether
B.6 Bertrand Russell
B.7 Alfred Tarski
B.8 Alan Turing
B.9 Ernst Zermelo

Glossary
Photo Credits
Bibliography
About the Open Logic Project


Preface
This book is an introduction to meta-logic, aimed especially at
students of computer science and philosophy. “Meta-logic” is so-
called because it is the discipline that studies logic itself. Logic
proper is concerned with canons of valid inference, and its sym-
bolic or formal version presents these canons using formal lan-
guages, such as those of propositional and predicate, a.k.a., first-
order logic. Meta-logic investigates the properties of these lan-
guages, and of the canons of correct inference that use them. It
studies topics such as how to give precise meaning to the ex-
pressions of these formal languages, how to justify the canons
of valid inference, what the properties of various proof systems
are, including their computational properties. These questions
are important and interesting in their own right, because the lan-
guages and proof systems investigated are applied in many differ-
ent areas—in mathematics, philosophy, computer science, and
linguistics, especially—but they also serve as examples of how
to study formal systems in general. The logical languages we
study here are not the only ones people are interested in. For
instance, linguists and philosophers are interested in languages
that are much more complicated than those of propositional and
first-order logic, and computer scientists are interested in other
kinds of languages altogether, such as programming languages.
And the methods we discuss here—how to give semantics for for-
mal languages, how to prove results about formal languages, how


to investigate the properties of formal languages—are applicable


in those cases as well.
Like any discipline, meta-logic has both a set of results or
facts and a store of methods and techniques, and this text cov-
ers both. Some students won’t need to know some of the results
we discuss outside of this course, but they will need and use the
methods we use to establish them. The Löwenheim-Skolem the-
orem, say, does not often make an appearance in computer sci-
ence, but the methods we use to prove it do. On the other hand,
many of the results we discuss do have relevance for certain de-
bates, say, in the philosophy of science and in metaphysics. Phi-
losophy students may not need to be able to prove these results
outside this course, but they do need to understand what the
results are—and you really only understand these results if you
have thought through the definitions and proofs needed to es-
tablish them. These are, in part, the reasons why the results
and the methods covered in this text are recommended study—in
some cases even required—for students of computer science and
philosophy.
The material is divided into three parts. Part 1 concerns it-
self with the theory of sets. Logic and meta-logic is historically
connected very closely to what’s called the “foundations of math-
ematics.” Mathematical foundations deal with how mathematical
objects such as integers, rational and real numbers, functions,
spaces, etc., should ultimately be understood. Set theory
provides one answer (there are others), and so set theory and
logic have long been studied side-by-side. Sets, relations, and
functions are also ubiquitous in any sort of formal investigation,
not just in mathematics but also in computer science and in some
of the more technical corners of philosophy. Certainly for the
purposes of formulating and proving results about the semantics
and proof theory of logic and the foundation of computability it
is essential to have a language in which to do this. For instance,
we will talk about sets of expressions, relations of consequence
and provability, interpretations of predicate symbols (which turn
out to be relations), computable functions, and various relations

between and constructions using these. It will be good to have


shorthand symbols for these, and to think through the general
properties of sets, relations, and functions in order to do that. If you
are not used to thinking mathematically and to formulating math-
ematical proofs, then think of the first part on set theory as a
training ground: all the basic definitions will be given, and we’ll
give increasingly complicated proofs using them. Note that un-
derstanding these proofs—and being able to find and formulate
them yourself—is perhaps more important than understanding
the results, and especially in the first part, and especially if you
are new to mathematical thinking, it is important that you think
through the examples and problems.
In the first part we will establish one important result, how-
ever. This result—Cantor’s theorem—relies on one of the most
striking examples of conceptual analysis to be found anywhere
in the sciences, namely, Cantor’s analysis of infinity. Infinity has
puzzled mathematicians and philosophers alike for centuries. No-
one knew how to properly think about it. Many people even
thought it was a mistake to think about it at all, that the notion
of an infinite object or infinite collection itself was incoherent.
Cantor made infinity into a subject we can coherently work with,
and developed an entire theory of infinite collections—and in-
finite numbers with which we can measure the sizes of infinite
collections—and showed that there are different levels of infinity.
This theory of “transfinite” numbers is beautiful and intricate,
and we won’t get very far into it; but we will be able to show
that there are different levels of infinity, specifically, that there
are “countable” and “uncountable” levels of infinity. This result
has important applications, but it is also really the kind of re-
sult that any self-respecting mathematician, computer scientist,
or philosopher should know.
In the second part we turn to first-order logic. We will define
the language of first-order logic and its semantics, i.e., what first-
order structures are and when a sentence of first-order logic is
true in a structure. This will enable us to do two important things:
(1) We can define, with mathematical precision, when a sentence

is a logical consequence of another. (2) We can also consider how


the relations that make up a first-order structure are described—
characterized—by the sentences that are true in them. This in
particular leads us to a discussion of the axiomatic method, in
which sentences of first-order languages are used to characterize
certain kinds of structures. Proof theory will occupy us next,
and we will consider the original version of natural deduction as
defined in the 1930s by Gerhard Gentzen. The semantic notion of
consequence and the syntactic notion of provability give us two
completely different ways to make precise the idea that a sentence
may follow from some others. The soundness and completeness
theorems link these two characterizations. In particular, we will
prove Gödel’s completeness theorem, which states that whenever
a sentence is a semantic consequence of some others, there is also
a deduction of said sentence from these others. An equivalent
formulation is: if a collection of sentences is consistent—in the
sense that nothing contradictory can be proved from them—then
there is a structure that makes all of them true.
The second formulation of the completeness theorem is per-
haps the more surprising. Around the time Gödel proved this
result (in 1929), the German mathematician David Hilbert fa-
mously held the view that consistency (i.e., freedom from con-
tradiction) is all that mathematical existence requires. In other
words, whenever a mathematician can coherently describe a struc-
ture or class of structures, then they should be entitled to be-
lieve in the existence of such structures. At the time, many found
this idea preposterous: just because you can describe a struc-
ture without contradicting yourself, it surely does not follow that
such a structure actually exists. But that is exactly what Gödel’s
completeness theorem says. In addition to this paradoxical—
and certainly philosophically intriguing—aspect, the complete-
ness theorem also has two important applications which allow us
to prove further results about the existence of structures which
make given sentences true. These are the compactness and the
Löwenheim-Skolem theorems.
In the third part, we connect logic with computability. Again,

there is a historical connection: David Hilbert had posed as a


fundamental problem of logic to find a mechanical method which
would decide, of a given sentence of logic, whether it has a proof.
Such a method exists, of course, for propositional logic: one just
has to check all truth tables, and since there are only finitely many
of them, the method eventually yields a correct answer. Such a
straightforward method is not possible for first-order logic, since
the number of possible structures is infinite (and structures them-
selves may be infinite). For years, logicians worked to find more
ingenious methods. Alonzo Church and Alan Turing
eventually established that there is no such method. In order to
do this, it was necessary to first provide a precise definition of
what a mechanical method is in general. If a decision procedure
had been proposed, presumably it would have been recognized
as an effective method. To prove that no effective method exists,
you have to define “effective method” first and give an impossi-
bility proof on the basis of that definition. This is what Turing
did: he proposed the idea of a Turing machine1 as a mathemati-
cal model of what a mechanical procedure can, in principle, do.
This is another example of a conceptual analysis of an informal
concept using mathematical machinery; and it is perhaps of the
same order of importance for computer science as Cantor’s anal-
ysis of infinity is for mathematics. Our last major undertaking
will be the proof of two impossibility theorems: we will show that
the so-called “halting problem” cannot be solved by Turing ma-
chines, and finally that Hilbert’s “decision problem” (for logic)
also cannot.
This text is mathematical, in the sense that we discuss math-
ematical definitions and prove our results mathematically. But it
is not mathematical in the sense that you need extensive math-
ematical background knowledge. Nothing in this text requires
knowledge of algebra, trigonometry, or calculus. We have made
a special effort to also not require any familiarity with the way
mathematics works: in fact, part of the point is to develop the kinds

1 Turing of course did not call it that himself.



of reasoning and proof skills required to understand and prove


our results. The organization of the text follows mathematical
convention, for one reason: these conventions have been devel-
oped because clarity and precision are especially important, and
so, e.g., it is critical to know when something is asserted as the
conclusion of an argument, is offered as a reason for something
else, or is intended to introduce new vocabulary. So we follow
mathematical convention and label passages as “definitions” if
they are used to introduce new terminology or symbols; and as
“theorems,” “propositions,” “lemmas,” or “corollaries” when we
record a result or finding.2 Other than these conventions, we will
only use the methods of logical proof as they should be familiar
from a first logic course, with one exception: we will make exten-
sive use of the method of induction to prove results. A chapter of
the appendix is devoted to this principle.

2 The difference between the latter four is not terribly important, but
roughly: A theorem is an important result. A proposition is a result worth
recording, but perhaps not as important as a theorem. A lemma is a result we
mainly record only because we want to break up a proof into smaller, easier to
manage chunks. A corollary is a result that follows easily from a theorem or
proposition, such as an interesting special case.
PART I

Sets, Relations, Functions
CHAPTER 1

Sets
1.1 Basics
Sets are the most fundamental building blocks of mathematical
objects. In fact, almost every mathematical object can be seen as
a set of some kind. In logic, as in other parts of mathematics,
sets and set theoretical talk is ubiquitous. So it will be important
to discuss what sets are, and introduce the notations necessary
to talk about sets and operations on sets in a standard way.

Definition 1.1 (Set). A set is a collection of objects, considered


independently of the way it is specified, of the order of the objects
in the set, or of their multiplicity. The objects making up the set
are called elements or members of the set. If a is an element of a
set X , we write a ∈ X (otherwise, a ∉ X ). The set which has no
elements is called the empty set and denoted by the symbol ∅.

Example 1.2. Whenever you have a bunch of objects, you can


collect them together in a set. The set of Richard’s siblings, for
instance, is a set that contains one person, and we could write
it as S = {Ruth}. In general, when we have some objects a1,
. . . , an, then the set consisting of exactly those objects is written
{a1, . . . , an}. Frequently we’ll specify a set by some property that
its elements share—as we just did, for instance, by specifying S
as the set of Richard’s siblings. We’ll use the following shorthand


notation for that: {x : . . . x . . .}, where the . . . x . . . stands for the


property that x has to have in order to be counted among the
elements of the set. In our example, we could have specified S
also as
S = {x : x is a sibling of Richard}.

When we say that sets are independent of the way they are
specified, we mean that the elements of a set are all that matters.
For instance, it so happens that

{Nicole, Jacob},
{x : x is a niece or nephew of Richard}, and
{x : x is a child of Ruth}

are three ways of specifying one and the same set.


Saying that sets are considered independently of the order of
their elements and their multiplicity is a fancy way of saying that

{Nicole, Jacob} and


{Jacob, Nicole}

are two ways of specifying the same set; and that

{Nicole, Jacob} and


{Jacob, Nicole, Nicole}

are also two ways of specifying the same set. In other words, all
that matters is which elements a set has. The elements of a set
are not ordered and each element occurs only once. When we
specify or describe a set, elements may occur multiple times and in
different orders, but any descriptions that only differ in the order
of elements or in how many times elements are listed describes
the same set.

Definition 1.3 (Extensionality). If X and Y are sets, then X and


Y are identical, X = Y , iff every element of X is also an element

of Y , and vice versa.

Extensionality gives us a way for showing that sets are iden-


tical: to show that X = Y , show that whenever x ∈ X then also
x ∈ Y , and whenever y ∈ Y then also y ∈ X .

1.2 Some Important Sets


Example 1.4. Mostly we’ll be dealing with sets that have mathe-
matical objects as members. You will remember the various sets
of numbers: N is the set of natural numbers {0, 1, 2, 3, . . . }; Z the
set of integers,

{. . . , −3, −2, −1, 0, 1, 2, 3, . . . };

Q the set of rational numbers (Q = {z/n : z ∈ Z, n ∈ N, n ≠ 0});


and R the set of real numbers. These are all infinite sets, that
is, they each have infinitely many elements. As it turns out, N,
Z, Q have the same number of elements, while R has a whole
bunch more—N, Z, Q are “countable and infinite” whereas R is
“uncountable”.
We’ll sometimes also use the set of positive integers Z⁺ =
{1, 2, 3, . . . } and the set containing just the first two natural num-
bers B = {0, 1}.

Example 1.5 (Strings). Another interesting example is the set


A∗ of finite strings over an alphabet A: any finite sequence of
elements of A is a string over A. We include the empty string Λ
among the strings over A, for every alphabet A. For instance,

B∗ = {Λ, 0, 1, 00, 01, 10, 11,


000, 001, 010, 011, 100, 101, 110, 111, 0000, . . .}.

If x = x1 . . . xn ∈ A∗ is a string consisting of n “letters” from A,
then we say the length of the string is n and write len(x) = n.

Example 1.6 (Infinite sequences). For any set A we may also


consider the set Aω of infinite sequences of elements of A. An
infinite sequence a1a2a3a4 . . . consists of a one-way infinite list of
objects, each one of which is an element of A.

1.3 Subsets
Sets are made up of their elements, and every element of a set is a
part of that set. But there is also a sense that some of the elements
of a set taken together are a “part of” that set. For instance, the
number 2 is part of the set of integers, but the set of even numbers
is also a part of the set of integers. It’s important to keep those
two senses of being part of a set separate.

Definition 1.7 (Subset). If every element of a set X is also an


element of Y , then we say that X is a subset of Y , and write
X ⊆Y.

Example 1.8. First of all, every set is a subset of itself, and ∅ is


a subset of every set. The set of even numbers is a subset of the
set of natural numbers. Also, {a, b } ⊆ {a, b, c }.
But {a, b, e } is not a subset of {a, b, c }.

Note that a set may contain other sets, not just as subsets but
as elements! In particular, a set may happen to both be an element
and a subset of another, e.g., {0} ∈ {0, {0}} and also {0} ⊆
{0, {0}}.
Extensionality gives a criterion of identity for sets: X = Y iff
every element of X is also an element of Y and vice versa. The
definition of “subset” defines X ⊆ Y precisely as the first half of
this criterion: every element of X is also an element of Y . Of
course the definition also applies if we switch X and Y : Y ⊆ X
iff every element of Y is also an element of X . And that, in turn,
is exactly the “vice versa” part of extensionality. In other words,
extensionality amounts to: X = Y iff X ⊆ Y and Y ⊆ X .

Definition 1.9 (Power Set). The set consisting of all subsets of


a set X is called the power set of X , written ℘(X ).

℘(X ) = {Y : Y ⊆ X }

Example 1.10. What are all the possible subsets of {a, b, c }?


They are: ∅, {a}, {b }, {c }, {a, b }, {a, c }, {b, c }, {a, b, c }. The
set of all these subsets is ℘({a, b, c }):

℘({a, b, c }) = {∅, {a}, {b }, {c }, {a, b }, {b, c }, {a, c }, {a, b, c }}
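For a finite set, the power set can be computed mechanically. Here is a minimal Python sketch (Python is not part of this text, and the name powerset is ours alone); since Python's sets cannot contain sets, we represent each subset as a frozenset:

```python
from itertools import combinations

def powerset(X):
    """Return the set of all subsets of X, each as a frozenset."""
    elems = list(X)
    return {frozenset(c)
            for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

P = powerset({"a", "b", "c"})
print(len(P))                      # 8 subsets, i.e., 2 to the power 3
print(frozenset() in P)            # True: the empty set is a subset of every set
print(frozenset({"a", "b"}) in P)  # True: {a, b} is a subset of {a, b, c}
```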

1.4 Unions and Intersections

Definition 1.11 (Union). The union of two sets X and Y , written


X ∪Y , is the set of all things which are elements of X , Y , or both.

X ∪ Y = {x : x ∈ X ∨ x ∈ Y }

Example 1.12. Since the multiplicity of elements doesn’t matter,


the union of two sets which have an element in common contains
that element only once, e.g., {a, b, c } ∪ {a, 0, 1} = {a, b, c, 0, 1}.
The union of a set and one of its subsets is just the bigger set:
{a, b, c } ∪ {a} = {a, b, c }.
The union of a set with the empty set is identical to the set:
{a, b, c } ∪ ∅ = {a, b, c }.

Definition 1.13 (Intersection). The intersection of two sets X and


Y , written X ∩ Y , is the set of all things which are elements of
both X and Y .

X ∩ Y = {x : x ∈ X ∧ x ∈ Y }

Two sets are called disjoint if their intersection is empty. This


means they have no elements in common.

Example 1.14. If two sets have no elements in common, their


intersection is empty: {a, b, c } ∩ {0, 1} = ∅.
If two sets do have elements in common, their intersection is
the set of all those: {a, b, c } ∩ {a, b, d } = {a, b }.
The intersection of a set with one of its subsets is just the
smaller set: {a, b, c } ∩ {a, b } = {a, b }.
The intersection of any set with the empty set is empty: {a, b, c }∩
∅ = ∅.

We can also form the union or intersection of more than two


sets. An elegant way of dealing with this in general is the follow-
ing: suppose you collect all the sets you want to form the union
(or intersection) of into a single set. Then we can define the union
of all our original sets as the set of all objects which belong to at
least one element of the set, and the intersection as the set of all
objects which belong to every element of the set.

Definition 1.15. If Z is a set of sets, then ⋃Z is the set of
elements of elements of Z :

⋃Z = {x : x belongs to an element of Z }, i.e.,
⋃Z = {x : there is a Y ∈ Z so that x ∈ Y }

Definition 1.16. If Z is a set of sets, then ⋂Z is the set of objects
which all elements of Z have in common:

⋂Z = {x : x belongs to every element of Z }, i.e.,
⋂Z = {x : for all Y ∈ Z, x ∈ Y }

Example 1.17. Suppose Z = {{a, b}, {a, d, e}, {a, d}}. Then
⋃Z = {a, b, d, e} and ⋂Z = {a}.

We could also do the same for a sequence of sets X1, X2, . . .

⋃i Xi = {x : x belongs to one of the Xi }
⋂i Xi = {x : x belongs to every Xi }.

Definition 1.18 (Difference). The difference X \ Y is the set of


all elements of X which are not also elements of Y , i.e.,

X \ Y = {x : x ∈ X and x ∉ Y }.
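These operations map directly onto Python's built-in set operators. The following sketch (ours, for illustration; Z is given as a list because Python sets cannot contain sets) computes the examples of this section, including the generalized union and intersection of Definitions 1.15 and 1.16 applied to the Z of Example 1.17:

```python
X = {"a", "b", "c"}
Y = {"a", 0, 1}

print(X | Y)   # union X ∪ Y: contains 'a', 'b', 'c', 0, 1
print(X & Y)   # intersection X ∩ Y: {'a'}
print(X - Y)   # difference X \ Y: {'b', 'c'}

# Generalized union and intersection over a set of sets Z.
Z = [{"a", "b"}, {"a", "d", "e"}, {"a", "d"}]
print(set().union(*Z))        # big union of Z: {'a', 'b', 'd', 'e'}
print(set.intersection(*Z))   # big intersection of Z: {'a'}
```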

1.5 Proofs about Sets


Sets and the notations we’ve introduced so far provide us with
convenient shorthands for specifying sets and expressing rela-
tionships between them. Often it will also be necessary to prove
claims about such relationships. If you’re not familiar with math-
ematical proofs, this may be new to you. So we’ll walk through
a simple example. We’ll prove that for any sets X and Y , it’s
always the case that X ∩ (X ∪ Y ) = X . How do you prove an
identity between sets like this? Recall that sets are determined
solely by their elements, i.e., sets are identical iff they have the
same elements. So in this case we have to prove that (a) every
element of X ∩ (X ∪ Y ) is also an element of X and, conversely,
that (b) every element of X is also an element of X ∩ (X ∪ Y ).
In other words, we show that both (a) X ∩ (X ∪ Y ) ⊆ X and (b)
X ⊆ X ∩ (X ∪ Y ).
A proof of a general claim like “every element z of X ∩(X ∪Y )
is also an element of X ” is proved by first assuming that an arbi-
trary z ∈ X ∩ (X ∪ Y ) is given, and proving from this assumption
that z ∈ X . You may know this pattern as “general conditional
proof.” In this proof we’ll also have to make use of the definitions
involved in the assumption and conclusion, e.g., in this case of
“∩” and “∪.” So case (a) would be argued as follows:

(a) We first want to show that X ∩ (X ∪ Y ) ⊆ X , i.e.,


by definition of ⊆, that if z ∈ X ∩ (X ∪Y ) then z ∈ X ,
for any z . So assume that z ∈ X ∩ (X ∪ Y ). Since
z is an element of the intersection of two sets iff it is
an element of both sets, we can conclude that z ∈ X
and also z ∈ X ∪ Y . In particular, z ∈ X , which is
what we wanted to show.

This completes the first half of the proof. Note that in the
last step we used the fact that if a conjunction (z ∈ X and z ∈
X ∪ Y ) follows from an assumption, each conjunct follows from
that same assumption. You may know this rule as “conjunction
elimination,” or ∧Elim. Now let’s prove (b):

(b) We now prove that X ⊆ X ∩ (X ∪ Y ), i.e., by


definition of ⊆, that if z ∈ X then also z ∈ X ∩(X ∪Y ),
for any z . Assume z ∈ X . To show that z ∈ X ∩ (X ∪
Y ), we have to show (by definition of “∩”) that (i)
z ∈ X and also (ii) z ∈ X ∪ Y . Here (i) is just our
assumption, so there is nothing further to prove. For
(ii), recall that z is an element of a union of sets iff
it is an element of at least one of those sets. Since
z ∈ X , and X ∪ Y is the union of X and Y , this is
the case here. So z ∈ X ∪ Y . We’ve shown both (i)
z ∈ X and (ii) z ∈ X ∪Y , hence, by definition of “∩,”
z ∈ X ∩ (X ∪ Y ).

This was somewhat long-winded, but it illustrates how we rea-


son about sets and their relationships. We usually aren’t this ex-
plicit; in particular, we might not repeat all the definitions. A
proof of our result in a more advanced text would be much more
compressed. It might look something like this.

Proposition 1.19 (Absorption). For all sets X , Y ,

X ∩ (X ∪ Y ) = X

Proof. (a) Suppose z ∈ X ∩ (X ∪ Y ). Then z ∈ X , so X ∩ (X ∪ Y ) ⊆ X .
(b) Now suppose z ∈ X . Then also z ∈ X ∪ Y , and therefore
also z ∈ X ∩ (X ∪ Y ). Thus, X ⊆ X ∩ (X ∪ Y ). □
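Results like this can also be spot-checked on finite examples. The following Python sketch (ours; passing it is evidence, not a proof: only the argument above covers all sets) tests the absorption law on randomly generated sets:

```python
import random

# Spot-check X ∩ (X ∪ Y) = X on random finite sets of small numbers.
for _ in range(1000):
    X = {random.randrange(10) for _ in range(random.randrange(6))}
    Y = {random.randrange(10) for _ in range(random.randrange(6))}
    assert X & (X | Y) == X
print("absorption law held in all sampled cases")
```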

1.6 Pairs, Tuples, Cartesian Products


Sets have no order to their elements. We just think of them as an
unordered collection. So if we want to represent order, we use
ordered pairs ⟨x, y⟩, or more generally, ordered n-tuples ⟨x1, . . . , xn⟩.

Definition 1.20 (Cartesian product). Given sets X and Y , their


Cartesian product X × Y is {⟨x, y⟩ : x ∈ X and y ∈ Y }.

Example 1.21. If X = {0, 1}, and Y = {1, a, b }, then their prod-


uct is

X × Y = {⟨0, 1⟩, ⟨0, a⟩, ⟨0, b⟩, ⟨1, 1⟩, ⟨1, a⟩, ⟨1, b⟩}.

Example 1.22. If X is a set, the product of X with itself, X × X ,


is also written X². It is the set of all pairs ⟨x, y⟩ with x, y ∈ X .
The set of all triples ⟨x, y, z⟩ is X³, and so on.
Example 1.23. If X is a set, a word over X is any sequence of
elements of X . A sequence can be thought of as an n-tuple of ele-
ments of X . For instance, if X = {a, b, c }, then the sequence “bac ”
can be thought of as the triple ⟨b, a, c⟩. Words, i.e., sequences
of symbols, are of crucial importance in computer science, of
course. By convention, we count elements of X as sequences of
length 1, and ∅ as the sequence of length 0. The set of all words
over X then is

X∗ = {∅} ∪ X ∪ X² ∪ X³ ∪ . . .
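Cartesian products of finite sets are easy to compute. A small Python sketch (ours), representing pairs and triples as tuples:

```python
from itertools import product

X = {0, 1}
Y = {1, "a", "b"}

XY = set(product(X, Y))          # X × Y as a set of pairs
print(len(XY))                   # 6 = 2 · 3 pairs, as in Example 1.21

X3 = set(product(X, repeat=3))   # X³: all triples over X
print(len(X3))                   # 8 = 2³ triples
```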

1.7 Russell’s Paradox


We said that one can define sets by specifying a property that its
elements share, e.g., defining the set of Richard’s siblings as

S = {x : x is a sibling of Richard}.

In the very general context of mathematics one must be careful,


however: not every property lends itself to comprehension. Some
properties do not define sets. If they did, we would run into
outright contradictions. One example of such a case is Russell’s
Paradox.
Sets may be elements of other sets—for instance, the power
set of a set X is made up of sets. And so it makes sense, of course,
to ask or investigate whether a set is an element of another set.
Can a set be a member of itself? Nothing about the idea of a
set seems to rule this out. For instance, surely all sets form a
collection of objects, so we should be able to collect them into
a single set—the set of all sets. And it, being a set, would be
an element of the set of all sets.
Russell’s Paradox arises when we consider the property of not
having itself as an element. The set of all sets does not have this
property, but all sets we have encountered so far have it. N is not
an element of N, since it is a set, not a natural number. ℘(X ) is
generally not an element of ℘(X ); e.g., ℘(R) ∉ ℘(R) since it is a
set of sets of real numbers, not a set of real numbers. What if we
suppose that there is a set of all sets that do not have themselves
as an element? Does

R = {x : x ∉ x }

exist?
If R exists, it makes sense to ask if R ∈ R or not—it must be
either ∈ R or ∉ R. Suppose the former is true, i.e., R ∈ R. R was
defined as the set of all sets that are not elements of themselves,
and so if R ∈ R, then R does not have this defining property of R.
But only sets that have this property are in R, hence, R cannot

be an element of R, i.e., R ∉ R. But R can’t both be and not be


an element of R, so we have a contradiction.
Since the assumption that R ∈ R leads to a contradiction, we
have R ∉ R. But this also leads to a contradiction! For if R ∉ R, it
does have the defining property of R, and so would be an element
of R just like all the other non-self-containing sets. And again, it
can’t both not be and be an element of R.

Summary
A set is a collection of objects, the elements of the set. We write
x ∈ X if x is an element of X . Sets are extensional—they are
completely determined by their elements. Sets are specified by
listing the elements explicitly or by giving a property the ele-
ments share (abstraction). Extensionality means that the order
or way of listing or specifying the elements of a set doesn’t mat-
ter. To prove that X and Y are the same set (X = Y ) one has to
prove that every element of X is an element of Y and vice versa.
Important sets include the natural (N), integer (Z), rational
(Q), and real (R) numbers, but also strings (X ∗ ) and infinite
sequences (X ω ) of objects. X is a subset of Y , X ⊆ Y , if every
element of X is also one of Y . The collection of all subsets of
a set Y is itself a set, the power set ℘(Y ) of Y . We can form
the union X ∪ Y and intersection X ∩ Y of sets. An ordered
pair ⟨x, y⟩ consists of two objects x and y, but in that specific
order. The pairs ⟨x, y⟩ and ⟨y, x⟩ are different pairs (unless x = y).
The set of all pairs ⟨x, y⟩ where x ∈ X and y ∈ Y is called the
Cartesian product X × Y of X and Y . We write X² for X × X ;
so for instance N² is the set of pairs of natural numbers.

Problems
Problem 1.1. Show that there is only one empty set, i.e., show
that if X and Y are sets without members, then X = Y .

Problem 1.2. List all subsets of {a, b, c, d }.

Problem 1.3. Show that if X has n elements, then ℘(X ) has 2ⁿ


elements.

Problem 1.4. Prove rigorously that if X ⊆ Y , then X ∪ Y = Y .

Problem 1.5. Prove rigorously that if X ⊆ Y , then X ∩ Y = X .

Problem 1.6. Prove in detail that X ∪ (X ∩ Y ) = X . Then give


a shortened, compressed proof. (Hint: for the X ∪ (X ∩ Y ) ⊆ X
direction you will need proof by cases, aka ∨Elim.)

Problem 1.7. List all elements of {1, 2, 3}³.

Problem 1.8. Show that if X has n elements, then Xᵏ has nᵏ


elements.
CHAPTER 2

Relations
2.1 Relations as Sets
You will no doubt remember some interesting relations between
objects of some of the sets we’ve mentioned. For instance, num-
bers come with an order relation < and from the theory of whole
numbers the relation of divisibility without remainder (usually writ-
ten n | m) may be familiar. There is also the relation is identical
with that every object bears to itself and to no other thing. But
there are many more interesting relations that we’ll encounter,
and even more possible relations. Before we review them, we’ll
just point out that we can look at relations as a special sort of
set. For this, first recall what a pair is: if a and b are two objects,
we can combine them into the ordered pair ⟨a, b⟩. Note that for
ordered pairs the order does matter, e.g., ⟨a, b⟩ ≠ ⟨b, a⟩, in contrast
to unordered pairs, i.e., 2-element sets, where {a, b } = {b, a}.
If X and Y are sets, then the Cartesian product X ×Y of X and
Y is the set of all pairs ⟨a, b⟩ with a ∈ X and b ∈ Y . In particular,
X² = X × X is the set of all pairs from X .
Now consider a relation on a set, e.g., the <-relation on the
set N of natural numbers, and consider the set of all pairs of
numbers ⟨n, m⟩ where n < m, i.e.,
R = {⟨n, m⟩ : n, m ∈ N and n < m}.
Then there is a close connection between the number n being


less than a number m and the corresponding pair ⟨n, m⟩ being a
member of R, namely, n < m if and only if ⟨n, m⟩ ∈ R. In a sense
we can consider the set R to be the <-relation on the set N. In the
same way we can construct a subset of N² for any relation between
numbers. Conversely, given any set of pairs of numbers S ⊆ N²,
there is a corresponding relation between numbers, namely, the
relationship n bears to m if and only if ⟨n, m⟩ ∈ S . This justifies
the following definition:

Definition 2.1 (Binary relation). A binary relation on a set X is


a subset of X². If R ⊆ X² is a binary relation on X and x, y ∈ X ,
we write Rxy (or xRy) for ⟨x, y⟩ ∈ R.

Example 2.2. The set N² of pairs of natural numbers can be


listed in a 2-dimensional matrix like this:

⟨0, 0⟩ ⟨0, 1⟩ ⟨0, 2⟩ ⟨0, 3⟩ . . .
⟨1, 0⟩ ⟨1, 1⟩ ⟨1, 2⟩ ⟨1, 3⟩ . . .
⟨2, 0⟩ ⟨2, 1⟩ ⟨2, 2⟩ ⟨2, 3⟩ . . .
⟨3, 0⟩ ⟨3, 1⟩ ⟨3, 2⟩ ⟨3, 3⟩ . . .
  ⋮      ⋮      ⋮      ⋮     ⋱

The subset consisting of the pairs lying on the diagonal, i.e.,

{⟨0, 0⟩, ⟨1, 1⟩, ⟨2, 2⟩, . . . },

is the identity relation on N. (Since the identity relation is popular,


let’s define IdX = {⟨x, x⟩ : x ∈ X } for any set X .) The subset of
all pairs lying above the diagonal, i.e.,

L = {⟨0, 1⟩, ⟨0, 2⟩, . . . , ⟨1, 2⟩, ⟨1, 3⟩, . . . , ⟨2, 3⟩, ⟨2, 4⟩, . . .},

is the less than relation, i.e., Lnm iff n < m. The subset of pairs
below the diagonal, i.e.,

G = {⟨1, 0⟩, ⟨2, 0⟩, ⟨2, 1⟩, ⟨3, 0⟩, ⟨3, 1⟩, ⟨3, 2⟩, . . . },

is the greater than relation, i.e., Gnm iff n > m. The union of L
with the identity relation IdN, K = L ∪ IdN, is the less than or
equal to relation: Knm iff n ≤ m. Similarly, H = G ∪ IdN is the
greater than or equal to relation. L, G, K, and H are special kinds
of relations called orders. L and G have the property that no
number bears L or G to itself (i.e., for all n, neither Lnn nor Gnn).
Relations with this property are
called irreflexive, and, if they also happen to be orders, they are
called strict orders.

Although orders and identity are important and natural rela-


tions, it should be emphasized that according to our definition
any subset of X² is a relation on X , regardless of how unnatural
or contrived it seems. In particular, ∅ is a relation on any set (the
empty relation, which no pair of elements bears), and X² itself
is a relation on X as well (one which every pair bears), called
the universal relation. But also something like E = {⟨n, m⟩ : n >
5 or m × n ≥ 34} counts as a relation.
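Viewing a relation as a set of pairs is something we can do quite literally in code. A minimal Python sketch (ours), building the <-relation on {0, . . . , 4} as a set of pairs and testing membership:

```python
# R = {⟨n, m⟩ : n, m ∈ {0, ..., 4} and n < m}, as a set of tuples.
N5 = range(5)
R = {(n, m) for n in N5 for m in N5 if n < m}

# Rnm holds iff the pair ⟨n, m⟩ is a member of R:
print((1, 3) in R)   # True:  1 < 3
print((3, 1) in R)   # False: not 3 < 1
```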

2.2 Special Properties of Relations


Some kinds of relations turn out to be so common that they have
been given special names. For instance, ≤ and ⊆ both relate their
respective domains (say, N in the case of ≤ and ℘(X ) in the case
of ⊆) in similar ways. To get at exactly how these relations are
similar, and how they differ, we categorize them according to
some special properties that relations can have. It turns out that
(combinations of) some of these special properties are especially
important: orders and equivalence relations.

Definition 2.3 (Reflexivity). A relation R ⊆ X² is reflexive iff,


for every x ∈ X , Rxx.

Definition 2.4 (Transitivity). A relation R ⊆ X² is transitive iff,


whenever Rxy and Ryz , then also Rxz .

Definition 2.5 (Symmetry). A relation R ⊆ X² is symmetric iff,


whenever Rxy, then also Ryx.

Definition 2.6 (Anti-symmetry). A relation R ⊆ X² is anti-
symmetric iff, whenever both Rxy and Ryx, then x = y (or, in
other words: if x ≠ y then either ¬Rxy or ¬Ryx).

In a symmetric relation, Rxy and Ryx always hold together,


or neither holds. In an anti-symmetric relation, the only way for
Rxy and Ryx to hold together is if x = y. Note that this does not
require that Rxy and Ryx hold when x = y, only that it isn’t ruled
out. So an anti-symmetric relation can be reflexive, but it is not
the case that every anti-symmetric relation is reflexive. Also note
that being anti-symmetric and merely not being symmetric are
different conditions. In fact, a relation can be both symmetric
and anti-symmetric at the same time (e.g., the identity relation
is).

Definition 2.7 (Connectivity). A relation R ⊆ X² is connected if
for all x, y ∈ X , if x ≠ y, then either Rxy or Ryx.

Definition 2.8 (Partial order). A relation R ⊆ X² that is reflex-


ive, transitive, and anti-symmetric is called a partial order.

Definition 2.9 (Linear order). A partial order that is also con-


nected is called a linear order.

Definition 2.10 (Equivalence relation). A relation R ⊆ X² that


is reflexive, symmetric, and transitive is called an equivalence re-
lation.
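For relations on finite sets, each of these properties can be checked directly from its definition. A Python sketch (the checker names are ours alone), with a relation given as a set of pairs:

```python
def reflexive(R, X):
    return all((x, x) in R for x in X)

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

X = {0, 1, 2}
Id = {(x, x) for x in X}   # the identity relation on X
# The identity relation passes all four checks; in particular it is
# both symmetric and anti-symmetric, as noted above.
print(reflexive(Id, X), symmetric(Id), antisymmetric(Id), transitive(Id))
```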

2.3 Orders
Very often we are interested in comparisons between objects,
where one object may be less or equal or greater than another
in a certain respect. Size is the most obvious example of such a
comparative relation, or order. But not all such relations are alike
in all their properties. For instance, some comparative relations
require any two objects to be comparable, others don’t. (If they
do, we call them linear or total.) Some include identity (like ≤)
and some exclude it (like <). Let’s get some order into all this.

Definition 2.11 (Preorder). A relation which is both reflexive


and transitive is called a preorder.

Definition 2.12 (Partial order). A preorder which is also anti-


symmetric is called a partial order.

Definition 2.13 (Linear order). A partial order which is also


connected is called a total order or linear order.

Example 2.14. Every linear order is also a partial order, and ev-
ery partial order is also a preorder, but the converses don’t hold.
For instance, the identity relation and the full relation on X are
preorders, but they are not partial orders, because they are not
anti-symmetric (if X has more than one element). For a some-
what less silly example, consider the no longer than relation ≼
on B∗: x ≼ y iff len(x) ≤ len(y). This is a preorder, even a con-
nected preorder, but not a partial order.
The relation of divisibility without remainder gives us an ex-
ample of a partial order which isn’t a linear order: for integers
n, m, we say n (evenly) divides m, in symbols: n | m, if there is
some k so that m = kn. On N, this is a partial order, but not a
linear order: for instance, 2 ∤ 3 and also 3 ∤ 2. Considered as a
relation on Z, divisibility is only a preorder since anti-symmetry
fails: 1 | −1 and −1 | 1 but 1 ≠ −1. Another important partial
order is the relation ⊆ on a set of sets.
Notice that the examples L and G from Example 2.2, although
we said there that they were called “strict orders” are not linear
orders even though they are connected (they are not reflexive).
But there is a close connection, as we will see momentarily.

Definition 2.15 (Irreflexivity). A relation R on X is called ir-


reflexive if, for all x ∈ X , ¬Rxx.

Definition 2.16 (Asymmetry). A relation R on X is called asym-


metric if for no pair x, y ∈ X we have Rxy and Ryx.

Definition 2.17 (Strict order). A strict order is a relation which


is irreflexive, asymmetric, and transitive.

Definition 2.18 (Strict linear order). A strict order which is also


connected is called a strict linear order.

A strict order on X can be turned into a partial order by


adding the diagonal IdX , i.e., adding all the pairs ⟨x, x⟩. (This
is called the reflexive closure of R.) Conversely, starting from a
partial order, one can get a strict order by removing IdX .

Proposition 2.19. 1. If R is a strict (linear) order on X , then
R⁺ = R ∪ IdX is a partial order (linear order).

2. If R is a partial order (linear order) on X , then R⁻ = R \ IdX
is a strict (linear) order.

Proof. 1. Suppose R is a strict order, i.e., R ⊆ X² and R is
irreflexive, asymmetric, and transitive. Let R⁺ = R ∪ IdX .
We have to show that R⁺ is reflexive, antisymmetric, and
transitive.
R⁺ is clearly reflexive, since for all x ∈ X , ⟨x, x⟩ ∈ IdX ⊆ R⁺.
To show R⁺ is antisymmetric, suppose R⁺xy and R⁺yx, i.e.,
⟨x, y⟩ and ⟨y, x⟩ ∈ R⁺, and x ≠ y. Since ⟨x, y⟩ ∈ R ∪ IdX , but
⟨x, y⟩ ∉ IdX , we must have ⟨x, y⟩ ∈ R, i.e., Rxy. Similarly
we get that Ryx. But this contradicts the assumption that
R is asymmetric.
Now suppose that R⁺xy and R⁺yz . If both ⟨x, y⟩ ∈ R and
⟨y, z⟩ ∈ R, it follows that ⟨x, z⟩ ∈ R since R is transitive.
Otherwise, either ⟨x, y⟩ ∈ IdX , i.e., x = y, or ⟨y, z⟩ ∈ IdX ,
i.e., y = z . In the first case, we have that R⁺yz by assump-
tion, x = y, hence R⁺xz . Similarly in the second case. In
either case, R⁺xz , thus, R⁺ is also transitive.

If R is connected, then for all x ≠ y, either Rxy or Ryx, i.e.,
either ⟨x, y⟩ ∈ R or ⟨y, x⟩ ∈ R. Since R ⊆ R⁺, this remains
true of R⁺, so R⁺ is connected as well.

2. Exercise.


Example 2.20. ≤ is the linear order corresponding to the strict
linear order <. ⊆ is the partial order corresponding to the strict
order ⊊.
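Proposition 2.19 can be illustrated on a small finite example. A Python sketch (ours) converts < on {0, 1, 2} into ≤ by adding the diagonal, and back by removing it:

```python
X = {0, 1, 2}
less = {(x, y) for x in X for y in X if x < y}   # a strict linear order
Id = {(x, x) for x in X}                         # the diagonal Id_X

leq = less | Id    # R⁺ = R ∪ Id_X: the corresponding linear order
back = leq - Id    # R⁻ = R \ Id_X: recovers the strict order

print(leq == {(x, y) for x in X for y in X if x <= y})   # True
print(back == less)                                      # True
```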

2.4 Graphs
A graph is a diagram in which points—called “nodes” or “ver-
tices” (plural of “vertex”)—are connected by edges. Graphs are
a ubiquitous tool in discrete mathematics and in computer sci-
ence. They are incredibly useful for representing, and visualizing,
relationships and structures, from concrete things like networks
of various kinds to abstract structures such as the possible out-
comes of decisions. There are many different kinds of graphs in
the literature which differ, e.g., according to whether the edges
are directed or not, have labels or not, whether there can be edges
from a node to the same node, multiple edges between the same
nodes, etc. Directed graphs have a special connection to relations.

Definition 2.21 (Directed graph). A directed graph G = ⟨V, E⟩ is
a set of vertices V and a set of edges E ⊆ V².

According to our definition, a graph just is a set together with


a relation on that set. Of course, when talking about graphs, it’s
only natural to expect that they are graphically represented: we
can draw a graph by connecting two vertices v 1 and v 2 by an
arrow iff ⟨v1, v2⟩ ∈ E. The only difference between a relation by
itself and a graph is that a graph specifies the set of vertices, i.e., a
graph may have isolated vertices. The important point, however,
is that every relation R on a set X can be seen as a directed graph

⟨X, R⟩, and conversely, a directed graph ⟨V, E⟩ can be seen as a
relation E ⊆ V² with the set V explicitly specified.

Example 2.22. The graph ⟨V, E⟩ with V = {1, 2, 3, 4} and E =
{⟨1, 1⟩, ⟨1, 2⟩, ⟨1, 3⟩, ⟨2, 3⟩} looks like this:

[Diagram: vertices 1, 2, 3, 4; a loop at 1, arrows from 1 to 2,
1 to 3, and 2 to 3; vertex 4 is isolated.]

This is a different graph than ⟨V′, E⟩ with V′ = {1, 2, 3}, which
looks like this:

[Diagram: vertices 1, 2, 3 with the same edges, but without the
isolated vertex 4.]
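In code, a directed graph is literally a pair of a vertex set and an edge relation. A sketch (ours) of why the two graphs of Example 2.22 differ: only the first has an isolated vertex.

```python
V = {1, 2, 3, 4}
E = {(1, 1), (1, 2), (1, 3), (2, 3)}
V_prime = {1, 2, 3}

def isolated(V, E):
    """Vertices of V that occur in no edge of E."""
    return {v for v in V if all(v not in edge for edge in E)}

print(isolated(V, E))        # {4}: vertex 4 is isolated
print(isolated(V_prime, E))  # set(): no isolated vertices
```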

2.5 Operations on Relations


It is often useful to modify or combine relations. We’ve already
used the union of relations above (which is just the union of two
relations considered as sets of pairs). Here are some other ways:

Definition 2.23. Let R, S ⊆ X² be relations and Y a set.

1. The inverse R⁻¹ of R is R⁻¹ = {⟨y, x⟩ : ⟨x, y⟩ ∈ R}.

2. The relative product R | S of R and S is

(R | S) = {⟨x, z⟩ : for some y, Rxy and Syz }

3. The restriction R↾Y of R to Y is R ∩ Y².

4. The application R[Y ] of R to Y is

R[Y ] = {y : for some x ∈ Y, Rxy }

Example 2.24. Let S ⊆ Z² be the successor relation on Z, i.e.,
the set of pairs ⟨x, y⟩ where x + 1 = y, for x, y ∈ Z. Sxy holds iff y
is the successor of x.

1. The inverse S⁻¹ of S is the predecessor relation, i.e., S⁻¹xy
iff x − 1 = y.

2. The relative product S | S is the relation x bears to y if


x + 2 = y.

3. The restriction of S to N is the successor relation on N.

4. The application of S to a set, e.g., S [{1, 2, 3}] is {2, 3, 4}.

Definition 2.25 (Transitive closure). The transitive closure R⁺ of
a relation R ⊆ X² is R⁺ = ⋃i≥1 Rⁱ, where R¹ = R and Rⁱ⁺¹ = Rⁱ | R.
The reflexive transitive closure of R is R∗ = R⁺ ∪ IdX .

Example 2.26. Take the successor relation S ⊆ Z². S²xy iff
x + 2 = y, S³xy iff x + 3 = y, etc. So S⁺xy iff for some i ≥ 1,
x + i = y. In other words, S⁺xy iff x < y (and S∗xy iff x ≤ y).
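For relations on finite sets, the inverse, the relative product, and the transitive closure (by iterating the relative product until nothing new appears) can all be computed directly. A sketch (ours), using the successor relation on {0, . . . , 4} as a finite stand-in for S ⊆ Z²:

```python
def inverse(R):
    return {(y, x) for (x, y) in R}

def rel_product(R, S):
    # (R | S) = {⟨x, z⟩ : for some y, Rxy and Syz}
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def transitive_closure(R):
    closure = set(R)
    while True:                   # accumulate R, R², R³, ... to a fixpoint
        new = closure | rel_product(closure, R)
        if new == closure:
            return closure
        closure = new

S = {(x, x + 1) for x in range(4)}   # successor on {0, ..., 4}
print(inverse(S))                    # the predecessor pairs
print(rel_product(S, S))             # S | S: the pairs with x + 2 = y
print(transitive_closure(S) ==
      {(x, y) for x in range(5) for y in range(5) if x < y})  # True: S⁺ is <
```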

Summary
A relation R on a set X is a way of relating elements of X . We
write Rxy if the relation holds between x and y. Formally, we can

consider R as the set of pairs ⟨x, y⟩ ∈ X² such that Rxy. Being


less than, greater than, equal to, evenly dividing, being the same
length as, a subset of, and the same size as are all important
examples of relations (on sets of numbers, strings, or of sets).
Graphs are a general way of visually representing relations. But
a graph can also be seen as a binary relation (the edge relation)
together with the underlying set of vertices.
Some relations share certain features which makes them espe-
cially interesting or useful. A relation R is reflexive if everything
is R-related to itself; symmetric, if with Rxy also Ryx holds for
any x and y; and transitive if Rxy and Ryz guarantee Rxz . Re-
lations that have all three of these properties are equivalence
relations. A relation is anti-symmetric if Rxy and Ryx guar-
antee x = y. Partial orders are those relations that are reflex-
ive, anti-symmetric, and transitive. A linear order is any partial
order which satisfies that for any x and y, either Rxy or Ryx.
(Generally, a relation with this property is connected).
Since relations are sets (of pairs), they can be operated on as
sets (e.g., we can form the union and intersection of relations).
We can also chain them together (relative product R | S ). If we
form the relative product of R with itself arbitrarily many times
we get the transitive closure R⁺ of R.

Problems
Problem 2.1. List the elements of the relation ⊆ on the set
℘({a, b, c }).

Problem 2.2. Give examples of relations that are (a) reflexive


and symmetric but not transitive, (b) reflexive and anti-symmetric,
(c) anti-symmetric, transitive, but not reflexive, and (d) reflexive,
symmetric, and transitive. Do not use relations on numbers or
sets.

Problem 2.3. Complete the proof of Proposition 2.19, i.e., prove


that if R is a partial order on X , then R⁻ = R \ IdX is a strict
order.

Problem 2.4. Consider the less-than-or-equal-to relation ≤ on the


set {1, 2, 3, 4} as a graph and draw the corresponding diagram.

Problem 2.5. Show that the transitive closure of R is in fact


transitive.
CHAPTER 3

Functions
3.1 Basics
A function is a mapping which pairs each object of a given set
with a single partner in another set. For instance, the operation
of adding 1 defines a function: each number n is paired with a
unique number n + 1. More generally, functions may take pairs,
triples, etc., of inputs and return some kind of output. Many
functions are familiar to us from basic arithmetic. For instance,
addition and multiplication are functions. They take in two num-
bers and return a third. In this mathematical, abstract sense, a
function is a black box: what matters is only what output is paired
with what input, not the method for calculating the output.

Definition 3.1 (Function). A function f : X → Y is a mapping


of each element of X to an element of Y . We call X the domain
of f and Y the codomain of f . The elements of X are called
inputs or arguments of f , and the element of Y that is paired
with an argument x by f is called the value of f for argument x,
written f (x).
The range ran(f ) of f is the subset of the codomain consisting
of the values of f for some argument; ran(f ) = {f (x) : x ∈ X }.

Example 3.2. Multiplication takes pairs of natural numbers as


inputs and maps them to natural numbers as outputs, so goes

from N × N (the domain) to N (the codomain). As it turns out,


the range is also N, since every n ∈ N is n × 1.

Multiplication is a function because it pairs each input—each


pair of natural numbers—with a single output: × : N2 → N. By
contrast, the square root operation applied to the domain N is
not functional, since each positive integer n has two square roots:
√n and −√n. We can make it functional by only returning the
positive square root: √ : N → R. The relation that pairs each
student in a class with their final grade is a function—no student
can get two different final grades in the same class. The rela-
tion that pairs each student in a class with their parents is not a
function—generally each student will have at least two parents.
We can define functions by specifying in some precise way
what the value of the function is for every possible argument. Different
ways of doing this are by giving a formula, describing a
method for computing the value, or listing the values for each
argument. However functions are defined, we must make sure
that for each argument we specify one, and only one, value.

Example 3.3. Let f : N → N be defined such that f (x) = x + 1.


This is a definition that specifies f as a function which takes in
natural numbers and outputs natural numbers. It tells us that,
given a natural number x, f will output its successor x + 1. In
this case, the codomain N is not the range of f , since the natural
number 0 is not the successor of any natural number. The range
of f is the set of all positive integers, Z+ .

Example 3.4. Let g : N → N be defined such that g (x) = x +2−1.


This tells us that g is a function which takes in natural numbers
and outputs natural numbers. Given a natural number x, g will
output the predecessor of the successor of the successor of x, i.e.,
x + 1. Despite their different definitions, g and f are the same
function.

Functions f and g defined above are the same because for


any natural number x, x + 2 − 1 = x + 1. f and g pair each
natural number with the same output. The definitions for f and
g specify the same mapping by means of different equations, and
so count as the same function.

Example 3.5. We can also define functions by cases. For in-


stance, we could define h : N → N by

h(x) = x/2          if x is even,
h(x) = (x + 1)/2    if x is odd.

Since every natural number is either even or odd, the output of


this function will always be a natural number. Just remember that
if you define a function by cases, every possible input must fall
into exactly one case. In some cases, this will require a proof
that the cases are exhaustive and exclusive.
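To illustrate (a sketch of ours, not part of the text), h can be written out directly; every natural number falls under exactly one of the two clauses:

    def h(x):
        # h : N -> N from Example 3.5, defined by cases.
        if x % 2 == 0:            # x is even
            return x // 2
        else:                     # x is odd
            return (x + 1) // 2

    assert [h(x) for x in range(6)] == [0, 1, 1, 2, 2, 3]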

3.2 Kinds of Functions

Definition 3.6 (Surjective function). A function f : X → Y is


surjective iff Y is also the range of f , i.e., for every y ∈ Y there is
at least one x ∈ X such that f (x) = y.

If you want to show that a function is surjective, then you


need to show that every object in the codomain is the output of
the function given some input or other.

Definition 3.7 (Injective function). A function f : X → Y is


injective iff for each y ∈ Y there is at most one x ∈ X such
that f (x) = y.

Any function pairs each possible input with a unique output.


An injective function has a unique input for each possible output.
If you want to show that a function f is injective, you need to
show that for any elements x and x ′ of the domain, if f (x) =
f (x ′), then x = x ′.
A function which is neither injective nor surjective is
the constant function f : N → N where f (x) = 1.
A function which is both injective and surjective is the identity
function f : N → N where f (x) = x.
The successor function f : N → N where f (x) = x + 1 is
injective, but not surjective.
The function

f (x) = x/2          if x is even,
f (x) = (x + 1)/2    if x is odd

is surjective, but not injective.

Definition 3.8 (Bijection). A function f : X → Y is bijective iff it


is both surjective and injective. We call such a function a bijection
from X to Y (or between X and Y ).
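For functions with finite domain and codomain, all three properties can be checked by brute force. A Python sketch (ours; the helper names are illustrative):

    def is_surjective(f, domain, codomain):
        # Every element of the codomain is a value of f.
        return {f(x) for x in domain} == set(codomain)

    def is_injective(f, domain):
        # No two distinct arguments are mapped to the same value.
        values = [f(x) for x in domain]
        return len(values) == len(set(values))

    def is_bijective(f, domain, codomain):
        return is_injective(f, domain) and is_surjective(f, domain, codomain)

    # f(x) = 7 - x is a bijection from {1, 2, 3} to {4, 5, 6}.
    assert is_bijective(lambda x: 7 - x, {1, 2, 3}, {4, 5, 6})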

3.3 Inverses of Functions


One obvious question about functions is whether a given map-
ping can be “reversed.” For instance, the successor function
f (x) = x + 1 can be reversed in the sense that the function
g (y) = y − 1 “undoes” what f does. But we must be careful: While
the definition of g defines a function Z → Z, it does not define
a function N → N (since g (0) ∉ N). So even in simple cases, whether
a function can be reversed is not quite obvious; it may depend
on the domain and codomain. Let’s give a precise definition.

Definition 3.9. A function g : Y → X is an inverse of a function


f : X → Y if f (g (y)) = y and g (f (x)) = x for all x ∈ X and y ∈ Y .

When do functions have inverses? A good candidate for an


inverse of f : X → Y is g : Y → X “defined by”

g (y) = “the” x such that f (x) = y.


The scare quotes around “defined by” suggest that this is not
a definition. At least, it is not in general. For in order for this
definition to specify a function, there has to be one and only one x
such that f (x) = y—the output of g has to be uniquely specified.
Moreover, it has to be specified for every y ∈ Y . If there are x 1
and x 2 ∈ X with x 1 ≠ x 2 but f (x 1 ) = f (x 2 ), then g (y) would not
be uniquely specified for y = f (x 1 ) = f (x 2 ). And if there is no x
at all such that f (x) = y, then g (y) is not specified at all. In other
words, for g to be defined, f has to be injective and surjective.

Proposition 3.10. If f : X → Y is bijective, f has a unique in-


verse f −1 : Y → X .

Proof. Exercise. 

3.4 Composition of Functions


We have already seen that the inverse f −1 of a bijective function f
is itself a function. It is also possible to compose functions f and
g to define a new function by first applying f and then g . Of
course, this is only possible if the domains and codomains match,
i.e., the codomain of f must be a subset of the domain of g .

Definition 3.11 (Composition). Let f : X → Y and g : Y → Z .


The composition of f with g is the function (g ◦ f ) : X → Z , where
(g ◦ f )(x) = g (f (x)).

The function (g ◦ f ) : X → Z pairs each member of X with


a member of Z . We specify which member of Z a member of X
is paired with as follows—given an input x ∈ X , first apply the
function f to x, which will output some y ∈ Y . Then apply the
function g to y, which will output some z ∈ Z .

Example 3.12. Consider the functions f (x) = x + 1, and g (x) =


2x. What function do you get when you compose these two?
(g ◦ f )(x) = g (f (x)). So that means for every natural number you
give this function, you first add one, and then you multiply the
result by two. So their composition is (g ◦ f )(x) = 2(x + 1).
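In code, composition is simply nested application; a sketch (ours) of Example 3.12:

    def compose(g, f):
        # (g o f)(x) = g(f(x)): apply f first, then g.
        return lambda x: g(f(x))

    f = lambda x: x + 1   # add one
    g = lambda x: 2 * x   # double
    assert compose(g, f)(3) == 2 * (3 + 1) == 8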

3.5 Isomorphism
An isomorphism is a bijection that preserves the structure of the
sets it relates, where structure is a matter of the relationships that
obtain between the elements of the sets. Consider the following
two sets X = {1, 2, 3} and Y = {4, 5, 6}. These sets are both struc-
tured by the relations successor, less than, and greater than. An
isomorphism between the two sets is a bijection that preserves
those structures. So a bijective function f : X → Y is an isomor-
phism if, i < j iff f (i ) < f ( j ), i > j iff f (i ) > f ( j ), and j is the
successor of i iff f ( j ) is the successor of f (i ).

Definition 3.13 (Isomorphism). Let U be the pair ⟨X, R⟩ and V


be the pair ⟨Y, S ⟩ such that X and Y are sets and R and S are
relations on X and Y respectively. A bijection f from X to Y is an
isomorphism from U to V iff it preserves the relational structure,
that is, for any x 1 and x 2 in X , ⟨x 1, x 2 ⟩ ∈ R iff ⟨f (x 1 ), f (x 2 )⟩ ∈ S .

Example 3.14. Consider the following two sets X = {1, 2, 3}


and Y = {4, 5, 6}, and the relations less than and greater than.
The function f : X → Y where f (x) = 7 − x is an isomorphism
between ⟨X, <⟩ and ⟨Y, >⟩.
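This can be verified by brute force over the finite structures involved; a sketch (ours), assuming f has already been checked to be a bijection:

    def is_isomorphism(f, X, R, S):
        # f preserves structure: (x1, x2) in R iff (f(x1), f(x2)) in S.
        return all(((x1, x2) in R) == ((f(x1), f(x2)) in S)
                   for x1 in X for x2 in X)

    X, Y = {1, 2, 3}, {4, 5, 6}
    less_on_X = {(a, b) for a in X for b in X if a < b}
    greater_on_Y = {(a, b) for a in Y for b in Y if a > b}
    assert is_isomorphism(lambda x: 7 - x, X, less_on_X, greater_on_Y)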

3.6 Partial Functions


It is sometimes useful to relax the definition of function so that
it is not required that the output of the function is defined for all
possible inputs. Such mappings are called partial functions.
Definition 3.15. A partial function f : X ⇸ Y is a mapping


which assigns to every element of X at most one element of Y .
If f assigns an element of Y to x ∈ X , we say f (x) is defined, and
otherwise undefined. If f (x) is defined, we write f (x) ↓, otherwise
f (x) ↑. The domain of a partial function f is the subset of X
where it is defined, i.e., dom(f ) = {x : f (x) ↓}.

Example 3.16. Every function f : X → Y is also a partial func-


tion. Partial functions that are defined everywhere on X —i.e.,
what we so far have simply called a function—are also called
total functions.

Example 3.17. The partial function f : R ⇸ R given by f (x) =


1/x is undefined for x = 0, and defined everywhere else.

3.7 Functions and Relations


A function which maps elements of X to elements of Y obviously
defines a relation between X and Y , namely the relation which
holds between x and y iff f (x) = y. In fact, we might even—if we
are interested in reducing the building blocks of mathematics for
instance—identify the function f with this relation, i.e., with a
set of pairs. This then raises the question: which relations define
functions in this way?

Definition 3.18 (Graph of a function). Let f : X ⇸ Y be a


partial function. The graph of f is the relation R f ⊆ X × Y
defined by
R f = {⟨x, y⟩ : f (x) = y }.
Proposition 3.19. Suppose R ⊆ X ×Y has the property that whenever


Rxy and Rxy ′ then y = y ′. Then R is the graph of the partial function
f : X ⇸ Y defined by: if there is a y such that Rxy, then f (x) = y,
otherwise f (x) ↑. If R is also serial, i.e., for each x ∈ X there is a
y ∈ Y such that Rxy, then f is total.

Proof. Suppose there is a y such that Rxy. If there were another


y ′ ≠ y such that Rxy ′, the condition on R would be violated.
Hence, if there is a y such that Rxy, that y is unique, and so f is
well-defined. Obviously, R f = R and f is total if R is serial. 
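Proposition 3.19 has a direct computational analogue: a functional relation, stored as a set of pairs, determines a (partial) function. In the Python sketch below (ours), a dictionary plays the role of f, and a missing key the role of f (x) being undefined:

    def function_from_graph(R):
        # Requires R to be functional: Rxy and Rxy' imply y = y'.
        f = {}
        for (x, y) in R:
            assert x not in f or f[x] == y, "R is not functional"
            f[x] = y
        return f

    R = {(1, 2), (2, 3), (3, 4)}
    f = function_from_graph(R)
    assert f[1] == 2 and 5 not in f   # f(5) is undefined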

Summary
A function f : X → Y maps every element of the domain X
to a unique element of the codomain Y . If x ∈ X , we call the y
that f maps x to the value f (x) of f for argument x. If X is a set
of pairs, we can think of the function f as taking two arguments.
The range ran(f ) of f is the subset of Y that consists of all the
values of f .
If ran(f ) = Y then f is called surjective. The value f (x) is
unique in that f maps x to only one f (x), never more than one.
If f (x) is also unique in the sense that no two different arguments
are mapped to the same value, f is called injective. Functions
which are both injective and surjective are called bijective.
Bijective functions have a unique inverse function f −1 . Func-
tions can also be chained together: the function (g ◦ f ) is the
composition of f with g . Compositions of injective functions are
injective, and of surjective functions are surjective, and (f −1 ◦ f )
is the identity function.
If we relax the requirement that f must have a value for every
x ∈ X , we get the notion of a partial function. If f : X ⇸ Y
is partial, we say f (x) is defined, f (x) ↓, if f has a value
for argument x. Any (partial) function f is associated with the
graph R f of f , the relation that holds between x and y iff f (x) = y.
Problems
Problem 3.1. Show that if f is bijective, an inverse g of f exists,
i.e., define such a g , show that it is a function, and show that it
is an inverse of f , i.e., f (g (y)) = y and g (f (x)) = x for all x ∈ X
and y ∈ Y .

Problem 3.2. Show that if f : X → Y has an inverse g , then f


is bijective.

Problem 3.3. Show that if g : Y → X and g ′ : Y → X are inverses
of f : X → Y , then g = g ′, i.e., for all y ∈ Y , g (y) = g ′(y).

Problem 3.4. Show that if f : X → Y and g : Y → Z are both


injective, then g ◦ f : X → Z is injective.

Problem 3.5. Show that if f : X → Y and g : Y → Z are both


surjective, then g ◦ f : X → Z is surjective.

Problem 3.6. Given f : X ⇸ Y , define the partial function
g : Y ⇸ X by: for any y ∈ Y , if there is a unique x ∈ X such
that f (x) = y, then g (y) = x; otherwise g (y) ↑. Show that if f is
injective, then g (f (x)) = x for all x ∈ dom(f ), and f (g (y)) = y
for all y ∈ ran(f ).

Problem 3.7. Suppose f : X → Y and g : Y → Z . Show that the


graph of (g ◦ f ) is R f | R g .
CHAPTER 4

The Size of Sets
4.1 Introduction
When Georg Cantor developed set theory in the 1870s, his inter-
est was in part to make palatable the idea of an infinite collection—
an actual infinity, as the medievals would say. Key to this reha-
bilitation of the notion of the infinite was a way to assign sizes—
“cardinalities”—to sets. The cardinality of a finite set is just a
natural number, e.g., ∅ has cardinality 0, and a set containing
five things has cardinality 5. But what about infinite sets? Do
they all have the same cardinality, ∞? It turns out, they do not.
The first important idea here is that of an enumeration. We
can list every finite set by listing all its elements. For some infinite
sets, we can also list all their elements if we allow the list itself
to be infinite. Such sets are called countable. Cantor’s surprising
result was that some infinite sets are not countable.

4.2 Countable Sets


One way of specifying a finite set is by listing its elements. But
conversely, since there are only finitely many elements in a set,

every finite set can be enumerated. By this we mean: its elements


can be put into a list (a list with a beginning, where each element
of the list other than the first has a unique predecessor). Some
infinite sets can also be enumerated, such as the set of positive
integers.

Definition 4.1 (Enumeration). Informally, an enumeration of a


set X is a list (possibly infinite) of elements of X such that every
element of X appears on the list at some finite position. If X has
an enumeration, then X is said to be countable. If X is countable
and infinite, we say X is countably infinite.

A couple of points about enumerations:

1. We count as enumerations only lists which have a beginning


and in which every element other than the first has a single
element immediately preceding it. In other words, there
are only finitely many elements between the first element
of the list and any other element. In particular, this means
that every element of an enumeration has a finite position:
the first element has position 1, the second position 2, etc.

2. We can have different enumerations of the same set X which


differ by the order in which the elements appear: 4, 1, 25,
16, 9 enumerates the (set of the) first five square numbers
just as well as 1, 4, 9, 16, 25 does.

3. Redundant enumerations are still enumerations: 1, 1, 2, 2,


3, 3, . . . enumerates the same set as 1, 2, 3, . . . does.

4. Order and redundancy do matter when we specify an enu-


meration: we can enumerate the positive integers beginning
with 1, 2, 3, 1, . . . , but the pattern is easier to see when enu-
merated in the standard way as 1, 2, 3, 4, . . .

5. Enumerations must have a beginning: . . . , 3, 2, 1 is not


an enumeration of the natural numbers because it has no
first element. To see how this follows from the informal


definition, ask yourself, “at what position in the list does
the number 76 appear?”

6. The following is not an enumeration of the positive inte-


gers: 1, 3, 5, . . . , 2, 4, 6, . . . The problem is that the even
numbers occur at places ∞ + 1, ∞ + 2, ∞ + 3, rather than
at finite positions.

7. Lists may be gappy: 2, −, 4, −, 6, −, . . . enumerates the


even positive integers.

8. The empty set is enumerable: it is enumerated by the empty


list!

Proposition 4.2. If X has an enumeration, it has an enumeration


without gaps or repetitions.

Proof. Suppose X has an enumeration x 1 , x 2 , . . . in which each


x i is an element of X or a gap. We can remove repetitions from
an enumeration by replacing repeated elements by gaps. For in-
stance, we can turn the enumeration into a new one in which x i ′
is x i if x i is an element of X that is not among x 1 , . . . , x i −1 , or
is − if it is.
the list. To make precise what “closing up” amounts to is a bit
difficult to describe. Roughly, it means that we can generate a
new enumeration x 1 ′′, x 2 ′′, . . . , where each x i ′′ is the first element
in the enumeration x 1 ′ , x 2 ′ , . . . after x ′′i−1 (if there is one). 
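Concretely, for enumerations given as finite lists (with gaps represented as None, a choice of ours), both steps can be done in a single pass:

    def remove_gaps_and_repetitions(enumeration):
        # Keep each element at its first occurrence; drop gaps (None)
        # and any later repetitions, closing up the rest of the list.
        seen, result = set(), []
        for x in enumeration:
            if x is not None and x not in seen:
                seen.add(x)
                result.append(x)
        return result

    assert remove_gaps_and_repetitions([1, 1, None, 2, 2, 3]) == [1, 2, 3]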

The last argument shows that in order to get a good handle


on enumerations and countable sets and to prove things about
them, we need a more precise definition. The following provides
it.
Definition 4.3 (Enumeration). An enumeration of a set X is any


surjective function f : Z+ → X .

Let’s convince ourselves that the formal definition and the


informal definition using a possibly gappy, possibly infinite list are
equivalent. A surjective function (partial or total) from Z+ to a
set X enumerates X . Such a function determines an enumeration
as defined informally above: the list f (1), f (2), f (3), . . . . Since
f is surjective, every element of X is guaranteed to be the value
of f (n) for some n ∈ Z+ . Hence, every element of X appears
at some finite position in the list. Since the function may not be
injective, the list may be redundant, but that is acceptable (as
noted above).
On the other hand, given a list that enumerates all elements
of X , we can define a surjective function f : Z+ → X by letting
f (n) be the nth element of the list that is not a gap, or the last
element of the list if there is no nth element. There is one case in
which this does not produce a surjective function: if X is empty,
and hence the list is empty. So, every non-empty list determines
a surjective function f : Z+ → X .

Definition 4.4. A set X is countable iff it is empty or has an


enumeration.

Example 4.5. A function enumerating the positive integers (Z+ )


is simply the identity function given by f (n) = n. A function
enumerating the natural numbers N is the function g (n) = n − 1.
Example 4.6. The functions f : Z+ → Z+ and g : Z+ → Z+ given
by
f (n) = 2n and
g (n) = 2n + 1
enumerate the even positive integers and the odd positive inte-
gers, respectively. However, neither function is an enumeration
of Z+ , since neither is surjective.
Example 4.7. The function f (n) = (−1)n ⌈(n − 1)/2⌉ (where ⌈x⌉ denotes
the ceiling function, which rounds x up to the nearest integer)
enumerates the set of integers Z. Notice how f generates
the values of Z by “hopping” back and forth between positive and
negative integers:

f (1)     f (2)    f (3)     f (4)    f (5)     f (6)    f (7)    . . .
−⌈0/2⌉   ⌈1/2⌉   −⌈2/2⌉   ⌈3/2⌉   −⌈4/2⌉   ⌈5/2⌉   −⌈6/2⌉   . . .
0         1        −1        2        −2        3        −3       . . .

You can also think of f as defined by cases as follows:

f (n) = 0             if n = 1,
f (n) = n/2           if n is even,
f (n) = −(n − 1)/2    if n is odd and > 1.
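The case-by-case definition translates directly into code; a sketch (ours) that reproduces the hopping pattern:

    def f(n):
        # Enumerates Z: f(1), f(2), f(3), ... is 0, 1, -1, 2, -2, ...
        if n == 1:
            return 0
        elif n % 2 == 0:
            return n // 2
        else:
            return -(n - 1) // 2

    assert [f(n) for n in range(1, 8)] == [0, 1, -1, 2, -2, 3, -3]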



That is fine for “easy” sets. What about the set of, say, pairs
of positive integers?

Z+ × Z+ = {⟨n, m⟩ : n, m ∈ Z+ }

We can organize the pairs of positive integers in an array, such


as the following:

        1        2        3        4      . . .
1    ⟨1, 1⟩   ⟨1, 2⟩   ⟨1, 3⟩   ⟨1, 4⟩   . . .
2    ⟨2, 1⟩   ⟨2, 2⟩   ⟨2, 3⟩   ⟨2, 4⟩   . . .
3    ⟨3, 1⟩   ⟨3, 2⟩   ⟨3, 3⟩   ⟨3, 4⟩   . . .
4    ⟨4, 1⟩   ⟨4, 2⟩   ⟨4, 3⟩   ⟨4, 4⟩   . . .
. . .

Clearly, every ordered pair in Z+ × Z+ will appear exactly
once in the array. In particular, ⟨n, m⟩ will appear in the nth
row and mth column. But how do we organize the elements of
such an array into a one-way list? The pattern in the array below
demonstrates one way to do this:

 1    2    4    7   . . .
 3    5    8   . . .
 6    9   . . .
10   . . .
. . .
This pattern is called Cantor’s zig-zag method. Other patterns are
perfectly permissible, as long as they “zig-zag” through every cell
of the array. By Cantor’s zig-zag method, the enumeration for
Z+ × Z+ according to this scheme would be:
⟨1, 1⟩, ⟨1, 2⟩, ⟨2, 1⟩, ⟨1, 3⟩, ⟨2, 2⟩, ⟨3, 1⟩, ⟨1, 4⟩, ⟨2, 3⟩, ⟨3, 2⟩, ⟨4, 1⟩, . . .
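One way to implement the zig-zag (a sketch of ours, not an official definition from the text) is to list, for each d = 2, 3, 4, . . . , all pairs whose coordinates sum to d:

    from itertools import islice

    def zigzag_pairs():
        # Yields pairs of positive integers in Cantor's zig-zag order:
        # first the pair summing to 2, then those summing to 3, etc.
        d = 2
        while True:
            for n in range(1, d):
                yield (n, d - n)
            d += 1

    assert list(islice(zigzag_pairs(), 6)) == \
        [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]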
What ought we do about enumerating, say, the set of ordered
triples of positive integers?
Z+ × Z+ × Z+ = {⟨n, m, k⟩ : n, m, k ∈ Z+ }
We can think of Z+ × Z+ × Z+ as the Cartesian product of Z+ × Z+
and Z+ , that is,
(Z+ )3 = (Z+ × Z+ ) × Z+ = {⟨⟨n, m⟩, k⟩ : ⟨n, m⟩ ∈ Z+ × Z+, k ∈ Z+ }
and thus we can enumerate (Z+ )3 with an array by labelling one
axis with the enumeration of Z+ , and the other axis with the
enumeration of (Z+ )2 :
            1            2            3            4          . . .
⟨1, 1⟩   ⟨1, 1, 1⟩   ⟨1, 1, 2⟩   ⟨1, 1, 3⟩   ⟨1, 1, 4⟩   . . .
⟨1, 2⟩   ⟨1, 2, 1⟩   ⟨1, 2, 2⟩   ⟨1, 2, 3⟩   ⟨1, 2, 4⟩   . . .
⟨2, 1⟩   ⟨2, 1, 1⟩   ⟨2, 1, 2⟩   ⟨2, 1, 3⟩   ⟨2, 1, 4⟩   . . .
⟨1, 3⟩   ⟨1, 3, 1⟩   ⟨1, 3, 2⟩   ⟨1, 3, 3⟩   ⟨1, 3, 4⟩   . . .
. . .
Thus, by using a method like Cantor’s zig-zag method, we may
similarly obtain an enumeration of (Z+ )3 .
4.3 Uncountable Sets


Some sets, such as the set Z+ of positive integers, are infinite.
So far we’ve seen examples of infinite sets which were all count-
able. However, there are also infinite sets which do not have this
property. Such sets are called uncountable.
First of all, it is perhaps already surprising that there are un-
countable sets. For any countable set X there is a surjective func-
tion f : Z+ → X . If a set is uncountable there is no such function.
That is, no function mapping the infinitely many elements of Z+
to X can exhaust all of X . So there are “more” elements of X
than the infinitely many positive integers.
How would one prove that a set is uncountable? You have
to show that no such surjective function can exist. Equivalently,
you have to show that the elements of X cannot be enumerated
in a one way infinite list. The best way to do this is to show that
every list of elements of X must leave at least one element out;
or that no function f : Z+ → X can be surjective. We can do
this using Cantor’s diagonal method. Given a list of elements of
X , say, x 1 , x 2 , . . . , we construct another element of X which, by
its construction, cannot possibly be on that list.
Our first example is the set Bω of all infinite, non-gappy se-
quences of 0’s and 1’s.
Theorem 4.8. Bω is uncountable.

Proof. We proceed by indirect proof. Suppose that Bω were count-


able, i.e., suppose that there is a list s 1 , s 2 , s 3 , s 4 , . . . of all elements
of Bω . Each of these si is itself an infinite sequence of 0’s and 1’s.
Let’s call the j -th element of the i -th sequence in this list si ( j ).
Then the i -th sequence si is

si (1), si (2), si (3), . . .

We may arrange this list, and the elements of each sequence


si in it, in an array:

       1        2        3        4      . . .
1   s1 (1)   s1 (2)   s1 (3)   s1 (4)   . . .
2   s2 (1)   s2 (2)   s2 (3)   s2 (4)   . . .
3   s3 (1)   s3 (2)   s3 (3)   s3 (4)   . . .
4   s4 (1)   s4 (2)   s4 (3)   s4 (4)   . . .
. . .

The labels down the side give the number of the sequence in the
list s 1 , s 2 , . . . ; the numbers across the top label the elements of the
individual sequences. For instance, s 1 (1) is a name for whatever
number, a 0 or a 1, is the first element in the sequence s 1 , and so
on.
Now we construct an infinite sequence, s , of 0’s and 1’s which
cannot possibly be on this list. The definition of s will depend on
the list s 1 , s 2 , . . . . Any infinite list of infinite sequences of 0’s and
1’s gives rise to an infinite sequence s which is guaranteed to not
appear on the list.
To define s , we specify what all its elements are, i.e., we spec-
ify s (n) for all n ∈ Z+ . We do this by reading down the diagonal
of the array above (hence the name “diagonal method”) and then
changing every 1 to a 0 and every 0 to a 1. More abstractly, we
define s (n) to be 0 or 1 according to whether the n-th element of
the diagonal, sn (n), is 1 or 0.

s (n) = 1    if sn (n) = 0,
s (n) = 0    if sn (n) = 1.

If you like formulas better than definitions by cases, you could
also define s (n) = 1 − sn (n).
Clearly s is a non-gappy infinite sequence of 0’s and 1’s, since
it is just the mirror sequence to the sequence of 0’s and 1’s that
appear on the diagonal of our array. So s is an element of Bω .
But it cannot be on the list s 1 , s 2 , . . . Why not?
It can’t be the first sequence in the list, s 1 , because it differs
from s1 in the first element. Whatever s 1 (1) is, we defined s (1)
to be the opposite. It can’t be the second sequence in the list,
because s differs from s 2 in the second element: if s 2 (2) is 0, s (2)
is 1, and vice versa. And so on.
More precisely: if s were on the list, there would be some k
so that s = sk . Two sequences are identical iff they agree at every
place, i.e., for any n, s (n) = sk (n). So in particular, taking n = k
as a special case, s (k ) = sk (k ) would have to hold. sk (k ) is either
0 or 1. If it is 0 then s (k ) must be 1—that’s how we defined s . But
if sk (k ) = 1 then, again because of the way we defined s , s (k ) = 0.
In either case s (k ) ≠ sk (k ).
We started by assuming that there is a list of elements of Bω ,
s 1 , s 2 , . . . From this list we constructed a sequence s which we
proved cannot be on the list. But it definitely is a sequence of
0’s and 1’s if all the si are sequences of 0’s and 1’s, i.e., s ∈ Bω .
This shows in particular that there can be no list of all elements
of Bω , since for any such list we could also construct a sequence s
guaranteed to not be on the list, so the assumption that there is
a list of all sequences in Bω leads to a contradiction. 
This proof method is called “diagonalization” because it uses
the diagonal of the array to define s . Diagonalization need not
involve the presence of an array: we can show that sets are not
countable by using a similar idea even when no array and no
actual diagonal is involved.
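The construction is completely effective. If the list is given as a function from indices to sequences, and sequences as functions from positions to bits, the flipped diagonal is one line of code; a sketch (ours):

    def diagonal(sequences):
        # sequences(i) is the i-th sequence on the list; the result s
        # satisfies s(n) = 1 - sequences(n)(n), so it differs from the
        # n-th sequence at position n.
        return lambda n: 1 - sequences(n)(n)

    # Sample list: the i-th sequence is constantly i mod 2.
    sample = lambda i: (lambda j: i % 2)
    s = diagonal(sample)
    assert all(s(n) != sample(n)(n) for n in range(1, 100))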
Theorem 4.9. ℘(Z+ ) is not countable.

Proof. We proceed in the same way, by showing that for every list
of subsets of Z+ there is a subset of Z+ which cannot be on the
list. Suppose the following is a given list of subsets of Z+ :
Z 1, Z 2, Z 3, . . .
We now define a set Z such that for any n ∈ Z+ , n ∈ Z iff n ∉ Z n :
Z = {n ∈ Z+ : n ∉ Z n }
Z is clearly a set of positive integers, since by assumption each Z n
is, and thus Z ∈ ℘(Z+ ). But Z cannot be on the list. To show
this, we’ll establish that for each k ∈ Z+ , Z ≠ Zk .
So let k ∈ Z+ be arbitrary. We’ve defined Z so that for any
n ∈ Z+ , n ∈ Z iff n ∉ Z n . In particular, taking n = k , k ∈ Z
iff k ∉ Zk . But this shows that Z ≠ Zk , since k is an element of
one but not the other, and so Z and Zk have different elements.
Since k was arbitrary, Z is not on the list Z 1 , Z 2 , . . . 
The preceding proof did not mention a diagonal, but you
can think of it as involving a diagonal if you picture it this way:
Imagine the sets Z 1 , Z 2 , . . . , written in an array, where each ele-
ment j ∈ Zi is listed in the j -th column. Say the first four sets on
that list are {1, 2, 3, . . . }, {2, 4, 6, . . . }, {1, 2, 5}, and {3, 4, 5, . . . }.
Then the array would begin with
Z1 = {1, 2, 3, 4, 5, 6, . . . }
Z2 = { 2, 4, 6, . . . }
Z3 = {1, 2, 5 }
Z4 = { 3, 4, 5, 6, . . . }
. . .

Then Z is the set obtained by going down the diagonal, leav-


ing out any numbers that appear along the diagonal and including
those j where the array has a gap in the j -th row/column. In the
above case, we would leave out 1 and 2, include 3, leave out 4,
etc.
4.4 Reduction
We showed ℘(Z+ ) to be uncountable by a diagonalization argu-
ment. We already had a proof that Bω , the set of all infinite
sequences of 0s and 1s, is uncountable. Here’s another way we
can prove that ℘(Z+ ) is uncountable: Show that if ℘(Z+ ) is count-
able then Bω is also countable. Since we know Bω is not count-
able, ℘(Z+ ) can’t be either. This is called reducing one problem
to another—in this case, we reduce the problem of enumerat-
ing Bω to the problem of enumerating ℘(Z+ ). A solution to the
latter—an enumeration of ℘(Z+ )—would yield a solution to the
former—an enumeration of Bω .
How do we reduce the problem of enumerating a set Y to
that of enumerating a set X ? We provide a way of turning an
enumeration of X into an enumeration of Y . The easiest way to
do that is to define a surjective function f : X → Y . If x 1 , x 2 , . . .
enumerates X , then f (x 1 ), f (x 2 ), . . . would enumerate Y . In our
case, we are looking for a surjective function f : ℘(Z+ ) → Bω .

Proof of Theorem 4.9 by reduction. Suppose that ℘(Z+ ) were count-


able, and thus that there is an enumeration of it, Z 1 , Z 2 , Z 3 , . . .
Define the function f : ℘(Z+ ) → Bω by letting f (Z ) be the
sequence s such that s (n) = 1 iff n ∈ Z , and s (n) = 0 other-
wise. This clearly defines a function, since whenever Z ⊆ Z+ , any
n ∈ Z+ either is an element of Z or isn’t. For instance, the set
2Z+ = {2, 4, 6, . . . } of positive even numbers gets mapped to the
sequence 010101 . . . , the empty set gets mapped to 0000 . . . and
the set Z+ itself to 1111 . . . .
It also is surjective: Every sequence of 0s and 1s corresponds
to some set of positive integers, namely the one which has as its
members those integers corresponding to the places where the
sequence has 1s. More precisely, suppose s ∈ Bω . Define Z ⊆ Z+
by:
Z = {n ∈ Z+ : s (n) = 1}
Then f (Z ) = s , as can be verified by consulting the definition
of f .
Now consider the list

f (Z 1 ), f (Z 2 ), f (Z 3 ), . . .

Since f is surjective, every member of Bω must appear as a value


of f for some argument, and so must appear on the list. This list
must therefore enumerate all of Bω .
So if ℘(Z+ ) were countable, Bω would be countable. But Bω
is uncountable (Theorem 4.8). Hence ℘(Z+ ) is uncountable. 
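Restricted to an initial segment of Z+ , the function f of this proof is just the familiar characteristic-function construction; a finite sketch (ours):

    def char_sequence(Z, length):
        # First `length` bits of f(Z): the n-th bit is 1 iff n is in Z.
        return [1 if n in Z else 0 for n in range(1, length + 1)]

    assert char_sequence({2, 4, 6}, 6) == [0, 1, 0, 1, 0, 1]  # evens
    assert char_sequence(set(), 4) == [0, 0, 0, 0]            # empty set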

It is easy to be confused about the direction the reduction


goes in. For instance, a surjective function g : Bω → X does not
establish that X is uncountable. (Consider g : Bω → B defined
by g (s ) = s (1), the function that maps a sequence of 0’s and 1’s
to its first element. It is surjective, because some sequences start
with 0 and some start with 1. But B is finite.) Note also that the
function f must be surjective, or otherwise the argument does
not go through: f (x 1 ), f (x 2 ), . . . would then not be guaranteed to
include all the elements of Y . For instance, h : Z+ → Bω defined
by
h(n) = 000 . . . 0 (n 0’s)

is a function, but Z+ is countable.

4.5 Equinumerous Sets


We have an intuitive notion of “size” of sets, which works fine for
finite sets. But what about infinite sets? If we want to come up
with a formal way of comparing the sizes of two sets of any size,
it is a good idea to start with defining when sets are the same
size. Let’s say sets of the same size are equinumerous. We want the
formal notion of equinumerosity to correspond with our intuitive
notion of “same size,” hence the formal notion ought to satisfy
the following properties:

Reflexivity: Every set is equinumerous with itself.


Symmetry: For any sets X and Y , if X is equinumerous with Y ,


then Y is equinumerous with X .

Transitivity: For any sets X,Y , and Z , if X is equinumerous


with Y and Y is equinumerous with Z , then X is equinu-
merous with Z .

In other words, we want equinumerosity to be an equivalence


relation.

Definition 4.10. A set X is equinumerous with a set Y , X ≈ Y , if


and only if there is a bijective f : X → Y .

Proposition 4.11. Equinumerosity defines an equivalence relation.

Proof. Let X,Y , and Z be sets.

Reflexivity: Using the identity map 1X : X → X , where 1X (x) =


x for all x ∈ X , we see that X is equinumerous with itself
(clearly, 1X is bijective).

Symmetry: Suppose that X is equinumerous with Y . Then there


is a bijective f : X → Y . Since f is bijective, its inverse f −1
exists and is also bijective. Hence, f −1 : Y → X is a bijective
function from Y to X , so Y is also equinumerous with X .

Transitivity: Suppose that X is equinumerous with Y via the


bijective function f : X → Y and that Y is equinumerous
with Z via the bijective function g : Y → Z . Then the
composition g ◦ f : X → Z is bijective, and X is thus
equinumerous with Z .

Therefore, equinumerosity is an equivalence relation. 


Theorem 4.12. Suppose X and Y are equinumerous. Then X is


countable if and only if Y is.

Proof. Let X and Y be equinumerous. Suppose that X is count-


able. Then either X = ∅ or there is a surjective function f : Z+ →
X . Since X and Y are equinumerous, there is a bijective g : X →
Y . If X = ∅, then Y = ∅ also (otherwise there would be an ele-
ment y ∈ Y but no x ∈ X with g (x) = y). If, on the other hand,
f : Z+ → X is surjective, then g ◦ f : Z+ → Y is surjective. To
see this, let y ∈ Y . Since g is surjective, there is an x ∈ X such
that g (x) = y. Since f is surjective, there is an n ∈ Z+ such that
f (n) = x. Hence,

(g ◦ f )(n) = g (f (n)) = g (x) = y

and thus g ◦ f is surjective. We have that g ◦ f is an enumeration


of Y , and so Y is countable. 

4.6 Comparing Sizes of Sets


Just like we were able to make precise when two sets have the same
size in a way that also accounts for the size of infinite sets, we can
also compare the sizes of sets in a precise way. Our definition
of “is smaller than (or equinumerous)” will require, instead of
a bijection between the sets, a total injective function from the
first set to the second. If such a function exists, the size of the
first set is less than or equal to the size of the second. Intuitively,
an injective function from one set to another guarantees that the
range of the function has at least as many elements as the domain,
since no two elements of the domain map to the same element
of the range.
Definition 4.13. X is no larger than Y , X ⪯ Y , if and only if


there is an injective function f : X → Y .

Theorem 4.14 (Schröder-Bernstein). Let X and Y be sets. If X ⪯
Y and Y ⪯ X , then X ≈ Y .

In other words, if there is a total injective function from X


to Y , and if there is a total injective function from Y back to X ,
then there is a total bijection from X to Y . Sometimes, it can be
difficult to think of a bijection between two equinumerous sets, so
the Schröder-Bernstein theorem allows us to break the compari-
son down into cases so we only have to think of an injection from
the first to the second, and vice versa. The Schröder-Bernstein
theorem, apart from being convenient, justifies the act of dis-
cussing the “sizes” of sets, for it tells us that set cardinalities have
the familiar anti-symmetric property that numbers have.

Definition 4.15. X is smaller than Y , X ≺ Y , if and only if there


is an injective function f : X → Y but no bijective g : X → Y .

Theorem 4.16 (Cantor). For all X , X ≺ ℘(X ).

Proof. The function f : X → ℘(X ) that maps any x ∈ X to its


singleton {x } is injective, since if x ≠ y then also f (x) = {x } ≠
{y } = f (y).
There cannot be a surjective function g : X → ℘(X ), let alone
a bijective one. For suppose that g : X → ℘(X ). Since g is total,
every x ∈ X is mapped to a subset g (x) ⊆ X . We show that g
cannot be surjective. To do this, we define a subset Y ⊆ X which
by definition cannot be in the range of g . Let

Y = {x ∈ X : x ∉ g (x)}.
Since g (x) is defined for all x ∈ X , Y is clearly a well-defined


subset of X . But it cannot be in the range of g . Let x ∈ X be
arbitrary; we show that Y ≠ g (x). If x ∈ g (x), then x does not
satisfy the defining property of Y , i.e., x ∉ g (x), and so x ∉ Y . If
x ∉ g (x), then x does satisfy the defining property of Y , and so x ∈ Y .
Since x was arbitrary, this shows that for each x ∈ X , x ∈ g (x)
iff x ∉ Y , and so g (x) ≠ Y . So Y cannot be in the range of g ,
contradicting the assumption that g is surjective. 
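On a finite set the same construction can be watched in action: whatever g : X → ℘(X ) we pick, the set Y is missed. A sketch (ours):

    def cantor_witness(X, g):
        # Y = {x in X : x not in g(x)} differs from g(x) for every x.
        return {x for x in X if x not in g(x)}

    X = {1, 2, 3}
    g = {1: {1, 2}, 2: set(), 3: {1, 3}}.__getitem__
    Y = cantor_witness(X, g)          # here Y == {2}
    assert all(Y != g(x) for x in X)  # Y is not in the range of g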

It’s instructive to compare the proof of Theorem 4.16 to that


of Theorem 4.9. There we showed that for any list Z 1 , Z 2 , . . . , of
subsets of Z+ one can construct a set Z of numbers guaranteed
not to be on the list. It was guaranteed not to be on the list
because, for every n ∈ Z+ , n ∈ Z n iff n ∉ Z . This way, there is
always some number that is an element of one of Z n and Z but
not the other. We follow the same idea here, except the indices n
are now elements of X instead of Z+ . The set Y is defined so that
it is different from g (x) for each x ∈ X , because x ∈ g (x) iff x ∉ Y .
Again, there is always an element of X which is an element of
one of g (x) and Y but not the other. And just as Z therefore
cannot be on the list Z 1 , Z 2 , . . . , Y cannot be in the range of g .

Summary
The size of a set X can be measured by a natural number if
the set is finite, and sizes can be compared by comparing these
numbers. If sets are infinite, things are more complicated. The
first level of infinity is that of countably infinite sets. A set X is
countable if its elements can be arranged in an enumeration, a
one-way infinite, possibly gappy list, i.e., when there is a surjective
function f : Z+ ⇸ X . It is countably infinite if it is countable but
not finite. Cantor’s zig-zag method shows that the sets of pairs
of elements of countably infinite sets is also countable; and this
can be used to show that even the set of rational numbers Q is
countable.
There are, however, infinite sets that are not countable: these
sets are called uncountable. There are two ways of showing that
a set is uncountable: directly, using a diagonal argument, or
by reduction. To give a diagonal argument, we assume that the
set X in question is countable, and use a hypothetical enumera-
tion to define an element of X which, by the very way we define
it, is guaranteed to be different from every element in the enu-
meration. So the enumeration can’t be an enumeration of all
of X after all, and we’ve shown that no enumeration of X can
exist. A reduction shows that X is uncountable by associating
every element of X with an element of some known uncountable
set Y in a surjective way. If this is possible, then a hypothetical
enumeration of X would yield an enumeration of Y . Since Y is
uncountable, no enumeration of X can exist.
In general, infinite sets can be compared sizewise: X and
Y are the same size, or equinumerous, if there is a bijection
between them. We can also define that X is no larger than Y
(|X | ≤ |Y |) if there is an injective function from X to Y . By
the Schröder-Bernstein Theorem, this in fact provides a sizewise
order of infinite sets. Finally, Cantor’s theorem says that for
any X , |X | < |℘(X )|. This is a generalization of our result that
℘(Z+ ) is uncountable, and shows that there are not just two, but
infinitely many levels of infinity.

Problems
Problem 4.1. According to Definition 4.4, a set X is countable
iff X = ∅ or there is a surjective f : Z+ → X . It is also possible to
define “countable set” precisely by: a set is countable iff there
is an injective function g : X → Z+ . Show that the definitions are
equivalent, i.e., show that there is an injective function g : X →
Z+ iff either X = ∅ or there is a surjective f : Z+ → X .

Problem 4.2. Define an enumeration of the positive squares 4,


9, 16, . . .
Problem 4.3. Show that if X and Y are countable, so is X ∪ Y .

Problem 4.4. Show by induction on n that if X 1 , X 2 , . . . , Xn are


all countable, so is X 1 ∪ · · · ∪ Xn .

Problem 4.5. Give an enumeration of the set of all positive ra-


tional numbers. (A positive rational number is one that can be
written as a fraction n/m with n, m ∈ Z+ ).

Problem 4.6. Show that Q is countable. (A rational number is


one that can be written as a fraction z /m with z ∈ Z, m ∈ Z+ ).

Problem 4.7. Define an enumeration of B∗ .

Problem 4.8. Recall from your introductory logic course that


each possible truth table expresses a truth function. In other
words, the truth functions are all functions from Bk to B for
some k . Prove that the set of all truth functions is countable.

Problem 4.9. Show that the set of all finite subsets of an arbitrary
infinite countable set is countable.

Problem 4.10. A set of positive integers is said to be cofinite iff it


is the complement of a finite set of positive integers. Let I be the
set that contains all the finite and cofinite sets of positive integers.
Show that I is countable.

Problem 4.11. Show that the countable union of countable sets
is countable. That is, whenever X 1 , X 2 , . . . are sets, and each
X i is countable, then the union ⋃∞i=1 X i of all of them is also
countable.

Problem 4.12. Show that ℘(N) is uncountable by a diagonal ar-


gument.

Problem 4.13. Show that the set of functions f : Z+ → Z+ is


uncountable by an explicit diagonal argument. That is, show
that if f 1 , f2 , . . . , is a list of functions and each fi : Z+ → Z+ , then
there is some f : Z+ → Z+ not on this list.
Problem 4.14. Show that if there is an injective function g : Y →


X , and Y is uncountable, then so is X . Do this by showing how
you can use g to turn an enumeration of X into one of Y .

Problem 4.15. Show that the set of all sets of pairs of positive
integers is uncountable by a reduction argument.

Problem 4.16. Show that Nω , the set of infinite sequences of


natural numbers, is uncountable by a reduction argument.

Problem 4.17. Let P be the set of functions from the set of posi-
tive integers to the set {0}, and let Q be the set of partial functions
from the set of positive integers to the set {0}. Show that P is
countable and Q is not. (Hint: reduce the problem of enumerat-
ing Bω to enumerating Q ).

Problem 4.18. Let S be the set of all surjective functions from


the set of positive integers to the set {0,1}, i.e., S consists of all
surjective f : Z+ → B. Show that S is uncountable.

Problem 4.19. Show that the set R of all real numbers is un-
countable.

Problem 4.20. Show that if X is equinumerous with U and


Y is equinumerous with V , and the intersections X ∩Y and U ∩V
are empty, then the unions X ∪ Y and U ∪ V are equinumerous.

Problem 4.21. Show that if X is infinite and countable, then it


is equinumerous with the positive integers Z+ .

Problem 4.22. Show that there cannot be an injective function


g : ℘(X ) → X , for any set X . Hint: Suppose g : ℘(X ) → X is
injective. Then for each x ∈ X there is at most one Y ⊆ X such
that g (Y ) = x. Define a set Y such that for every x ∈ X , g (Y ) ≠ x.
PART II

First-order Logic

CHAPTER 5

Syntax and Semantics
5.1 Introduction
In order to develop the theory and metatheory of first-order logic,
we must first define the syntax and semantics of its expressions.
The expressions of first-order logic are terms and formulas. Terms
are formed from variables, constant symbols, and function sym-
bols. Formulas, in turn, are formed from predicate symbols to-
gether with terms (these form the smallest, “atomic” formulas),
and then from atomic formulas we can form more complex ones
using logical connectives and quantifiers. There are many dif-
ferent ways to set down the formation rules; we give just one
possible one. Other systems will choose different symbols, will se-
lect different sets of connectives as primitive, will use parentheses
differently (or even not at all, as in the case of so-called Polish
notation). What all approaches have in common, though, is that
the formation rules define the set of terms and formulas induc-
tively. If done properly, every expression can be formed in essentially
only one way according to the formation rules. The induc-
tive definition resulting in expressions that are uniquely readable
means we can give meanings to these expressions using the same

method—inductive definition.

Giving the meaning of expressions is the domain of seman-


tics. The central concept in semantics is that of satisfaction in
a structure. A structure gives meaning to the building blocks of
the language: a domain is a non-empty set of objects. The quan-
tifiers are interpreted as ranging over this domain, constant sym-
bols are assigned elements in the domain, function symbols are
assigned functions from the domain to itself, and predicate sym-
bols are assigned relations on the domain. The domain together
with assignments to the basic vocabulary constitutes a structure.
Variables may appear in formulas, and in order to give a seman-
tics, we also have to assign elements of the domain to them—this
is a variable assignment. The satisfaction relation, finally, brings
these together. A formula may be satisfied in a structure M rela-
tive to a variable assignment s , written as M, s |= A. This relation
is also defined by induction on the structure of A, using the truth
tables for the logical connectives to define, say, satisfaction of
A ∧ B in terms of satisfaction (or not) of A and B. It then turns
out that the variable assignment is irrelevant if the formula A
is a sentence, i.e., has no free variables, and so we can talk of
sentences being simply satisfied (or not) in structures.

On the basis of the satisfaction relation M |= A for sentences


we can then define the basic semantic notions of validity, entail-
ment, and satisfiability. A sentence is valid, |= A, if every struc-
ture satisfies it. It is entailed by a set of sentences, Γ |= A, if every
structure that satisfies all the sentences in Γ also satisfies A. And
a set of sentences is satisfiable if some structure satisfies all sen-
tences in it at the same time. Because formulas are inductively
defined, and satisfaction is in turn defined by induction on the
structure of formulas, we can use induction to prove properties
of our semantics and to relate the semantic notions defined.
5.2 First-Order Languages


Expressions of first-order logic are built up from a basic vocab-
ulary containing variables, constant symbols, predicate symbols and
sometimes function symbols. From them, together with logical con-
nectives, quantifiers, and punctuation symbols such as parenthe-
ses and commas, terms and formulas are formed.
Informally, predicate symbols are names for properties and
relations, constant symbols are names for individual objects, and
function symbols are names for mappings. These, except for
the identity predicate =, are the non-logical symbols and together
make up a language. Any first-order language L is determined
by its non-logical symbols. In the most general case, L contains
infinitely many symbols of each kind.
In the general case, we make use of the following symbols in
first-order logic:

1. Logical symbols

a) Logical connectives: ¬ (negation), ∧ (conjunction),


∨ (disjunction), → (conditional), ∀ (universal quanti-
fier), ∃ (existential quantifier).
b) The propositional constant for falsity ⊥.
c) The two-place identity predicate =.
d) A countably infinite set of variables: v0 , v1 , v2 , . . .

2. Non-logical symbols, making up the standard language of


first-order logic

a) A countably infinite set of n-place predicate symbols


for each n > 0: An0 , An1 , An2 , . . .
b) A countably infinite set of constant symbols: c0 , c1 ,
c2 , . . . .
c) A countably infinite set of n-place function symbols
for each n > 0: f0n , f1n , f2n , . . .
3. Punctuation marks: (, ), and the comma.

Most of our definitions and results will be formulated for the


full standard language of first-order logic. However, depending
on the application, we may also restrict the language to only a
few predicate symbols, constant symbols, and function symbols.

Example 5.1. The language LA of arithmetic contains a single


two-place predicate symbol <, a single constant symbol 0, one
one-place function symbol ′, and two two-place function sym-
bols + and ×.

Example 5.2. The language of set theory LZ contains only the


single two-place predicate symbol ∈.

Example 5.3. The language of orders L≤ contains only the two-


place predicate symbol ≤.

Again, these are conventions: officially, these are just aliases,


e.g., <, ∈, and ≤ are aliases for A20 , 0 for c0 , ′ for f01 , + for f02 , ×
for f12 .
In addition to the primitive connectives and quantifiers in-
troduced above, we also use the following defined symbols: ↔
(biconditional) and ⊤ (truth).
A defined symbol is not officially part of the language, but
is introduced as an informal abbreviation: it allows us to abbre-
viate formulas which would, if we only used primitive symbols,
get quite long. This is obviously an advantage. The bigger ad-
vantage, however, is that proofs become shorter. If a symbol is
primitive, it has to be treated separately in proofs. The more
primitive symbols, therefore, the longer our proofs.
You may be familiar with different terminology and symbols
than the ones we use above. Logic texts (and teachers) commonly
use ∼, ¬, or ! for “negation”, and ∧, ·, or & for “conjunction”.
Commonly used symbols for the “conditional” or “implication”
are →, ⇒, and ⊃. Symbols for “biconditional,” “bi-implication,”
or “(material) equivalence” are ↔, ⇔, and ≡. The ⊥ symbol
is variously called “falsity,” “falsum,” “absurdity,” or “bottom.”


The ⊤ symbol is variously called “truth,” “verum,” or “top.”
It is conventional to use lower case letters (e.g., a, b, c ) from
the beginning of the Latin alphabet for constant symbols (some-
times called names), and lower case letters from the end (e.g., x,
y, z ) for variables. Quantifiers combine with variables, e.g., ∀x;
notational variations include ∀x, (∀x), (x), Πx, ⋀x for the universal
quantifier and ∃x, (∃x), (Ex), Σx, ⋁x for the existential
quantifier.
We might treat all the propositional operators and both quan-
tifiers as primitive symbols of the language. We might instead
choose a smaller stock of primitive symbols and treat the other
logical operators as defined. “Truth functionally complete” sets
of Boolean operators include {¬, ∨}, {¬, ∧}, and {¬, →}—these
can be combined with either quantifier for an expressively com-
plete first-order language.
You may be familiar with two other logical operators: the
Sheffer stroke | (named after Henry Sheffer), and Peirce’s ar-
row ↓, also known as Quine’s dagger. When given their usual
readings of “nand” and “nor” (respectively), these operators are
truth functionally complete by themselves.

5.3 Terms and Formulas


Once a first-order language L is given, we can define expressions
built up from the basic vocabulary of L. These include in partic-
ular terms and formulas.

Definition 5.4 (Terms). The set of terms Trm(L) of L is defined


inductively by:

1. Every variable is a term.

2. Every constant symbol of L is a term.


3. If f is an n-place function symbol and t1 , . . . , tn are terms,


then f (t1, . . . , tn ) is a term.

4. Nothing else is a term.

A term containing no variables is a closed term.

The constant symbols appear in our specification of the lan-


guage and the terms as a separate category of symbols, but they
could instead have been included as zero-place function symbols.
We could then do without the second clause in the definition of
terms. We just have to understand f (t1, . . . , tn ) as just f by itself
if n = 0.

Definition 5.5 (Formula). The set of formulas Frm(L) of the


language L is defined inductively as follows:

1. ⊥ is an atomic formula.

2. If R is an n-place predicate symbol of L and t1 , . . . , tn are


terms of L, then R(t1, . . . , tn ) is an atomic formula.

3. If t1 and t2 are terms of L, then =(t1, t2 ) is an atomic for-


mula.

4. If A is a formula, then ¬A is a formula.

5. If A and B are formulas, then (A ∧ B) is a formula.

6. If A and B are formulas, then (A ∨ B) is a formula.

7. If A and B are formulas, then (A → B) is a formula.

8. If A is a formula and x is a variable, then ∀x A is a formula.

9. If A is a formula and x is a variable, then ∃x A is a formula.

10. Nothing else is a formula.

The definitions of the set of terms and that of formulas are


inductive definitions. Essentially, we construct the set of formu-


las in infinitely many stages. In the initial stage, we pronounce
all atomic formulas to be formulas; this corresponds to the first
few cases of the definition, i.e., the cases for ⊥, R(t1, . . . , tn ) and
=(t1, t2 ). “Atomic formula” thus means any formula of this form.
The other cases of the definition give rules for constructing
new formulas out of formulas already constructed. At the second
stage, we can use them to construct formulas out of atomic for-
mulas. At the third stage, we construct new formulas from the
atomic formulas and those obtained in the second stage, and so
on. A formula is anything that is eventually constructed at such
a stage, and nothing else.
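The inductive character of these definitions is easy to mirror in a programming language: each formation rule becomes a constructor of a recursive datatype, and every value of the type is built in exactly one way (which is what unique readability, discussed below, secures for strings of symbols). A minimal Python sketch (ours; the class names are illustrative and the fragment is simplified):

    from dataclasses import dataclass

    # Terms (cf. Definition 5.4): variables, constants, applications.
    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Const:
        name: str

    @dataclass(frozen=True)
    class FuncApp:
        func: str
        args: tuple   # tuple of terms

    # Formulas (cf. Definition 5.5): one constructor per formation rule.
    @dataclass(frozen=True)
    class Falsum:
        pass

    @dataclass(frozen=True)
    class Atom:
        pred: str
        args: tuple   # tuple of terms

    @dataclass(frozen=True)
    class Not:
        sub: object

    @dataclass(frozen=True)
    class Binary:
        op: str       # "and", "or", or "->"
        left: object
        right: object

    @dataclass(frozen=True)
    class Quantified:
        quant: str    # "forall" or "exists"
        var: str
        sub: object

    # The formula  forall v0 (A(v0) -> A(v0)):
    example = Quantified("forall", "v0",
                         Binary("->", Atom("A", (Var("v0"),)),
                                      Atom("A", (Var("v0"),))))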
By convention, we write = between its arguments and leave
out the parentheses: t1 = t2 is an abbreviation for =(t1, t2 ). More-
over, ¬=(t1, t2 ) is abbreviated as t1 ≠ t2 . When writing a formula
(B ∗C ) constructed from B, C using a two-place connective ∗, we
will often leave out the outermost pair of parentheses and write
simply B ∗ C .
Some logic texts require that the variable x must occur in A
in order for ∃x A and ∀x A to count as formulas. Nothing bad
happens if you don’t require this, and it makes things easier.

Definition 5.6. Formulas constructed using the defined opera-


tors are to be understood as follows:

1. ⊤ abbreviates ¬⊥.

2. A ↔ B abbreviates (A → B) ∧ (B → A).

If we work in a language for a specific application, we will


often write two-place predicate symbols and function symbols
between the respective terms, e.g., t1 < t2 and (t1 + t2 ) in the
language of arithmetic and t1 ∈ t2 in the language of set the-
ory. The successor function in the language of arithmetic is even
written conventionally after its argument: t ′. Officially, however,
these are just conventional abbreviations for A20 (t1, t2 ), f02 (t1, t2 ),
A20 (t1, t2 ) and f01 (t ), respectively.

Definition 5.7 (Syntactic identity). The symbol ≡ expresses syn-


tactic identity between strings of symbols, i.e., A ≡ B iff A and B
are strings of symbols of the same length and which contain the
same symbol in each place.

The ≡ symbol may be flanked by strings obtained by con-


catenation, e.g., A ≡ (B ∨ C ) means: the string of symbols A is
the same string as the one obtained by concatenating an opening
parenthesis, the string B, the ∨ symbol, the string C , and a clos-
ing parenthesis, in this order. If this is the case, then we know
that the first symbol of A is an opening parenthesis, A contains
B as a substring (starting at the second symbol), that substring
is followed by ∨, etc.

5.4 Unique Readability


The way we defined formulas guarantees that every formula has
a unique reading, i.e., there is essentially only one way of con-
structing it according to our formation rules for formulas and
only one way of “interpreting” it. If this were not so, we would
have ambiguous formulas, i.e., formulas that have more than one
reading or interpretation—and that is clearly something we want
to avoid. But more importantly, without this property, most of the
definitions and proofs we are going to give will not go through.
Perhaps the best way to make this clear is to see what would
happen if we had given bad rules for forming formulas that would
not guarantee unique readability. For instance, we could have
forgotten the parentheses in the formation rules for connectives,
e.g., we might have allowed this:

If A and B are formulas, then so is A → B.


Starting from an atomic formula D, this would allow us to form


D → D. From this, together with D, we would get D → D → D.
But there are two ways to do this:

1. We take D to be A and D → D to be B.

2. We take A to be D → D and B is D.

Correspondingly, there are two ways to “read” the formula D →


D → D. It is of the form B → C where B is D and C is D → D,
but it is also of the form B → C with B being D → D and C
being D.
If this happens, our definitions will not always work. For in-
stance, when we define the main operator of a formula, we say: in
a formula of the form B → C , the main operator is the indicated
occurrence of →. But if we can match the formula D → D → D
with B → C in the two different ways mentioned above, then in
one case we get the first occurrence of → as the main operator,
and in the second case the second occurrence. But we intend the
main operator to be a function of the formula, i.e., every formula
must have exactly one main operator occurrence.

Lemma 5.8. The number of left parentheses in a formula A is equal
to the number of right parentheses.

Proof. We prove this by induction on the way A is constructed.


This requires two things: (a) We have to prove first that all atomic
formulas have the property in question (the induction basis). (b)
Then we have to prove that when we construct new formulas out
of given formulas, the new formulas have the property provided
the old ones do.
Let l (A) be the number of left parentheses, and r (A) the num-
ber of right parentheses in A, and l (t ) and r (t ) similarly the num-
ber of left and right parentheses in a term t . We leave the proof
that for any term t , l (t ) = r (t ) as an exercise.

1. A ≡ ⊥: A has 0 left and 0 right parentheses.



2. A ≡ R(t1, . . . , tn ): l (A) = 1 + l (t1 ) + · · · + l (tn ) = 1 + r (t1 ) +


· · · + r (tn ) = r (A). Here we make use of the fact, left as an
exercise, that l (t ) = r (t ) for any term t .
3. A ≡ t1 = t2 : l (A) = l (t1 ) + l (t2 ) = r (t1 ) + r (t2 ) = r (A).
4. A ≡ ¬B: By induction hypothesis, l (B) = r (B). Thus
l (A) = l (B) = r (B) = r (A).
5. A ≡ (B ∗ C ): By induction hypothesis, l (B) = r (B) and
l (C ) = r (C ). Thus l (A) = 1 + l (B) + l (C ) = 1 + r (B) + r (C ) =
r (A).
6. A ≡ ∀x B: By induction hypothesis, l (B) = r (B). Thus,
l (A) = l (B) = r (B) = r (A).
7. A ≡ ∃x B: Similarly.

Definition 5.9 (Proper prefix). A string of symbols B is a proper


prefix of a string of symbols A if concatenating B and a non-empty
string of symbols yields A.

Lemma 5.10. If A is a formula, and B is a proper prefix of A, then


B is not a formula.

Proof. Exercise. 

Proposition 5.11. If A is an atomic formula, then it satisfies one, and


only one of the following conditions.

1. A ≡ ⊥.

2. A ≡ R(t1, . . . , tn ) where R is an n-place predicate symbol, t1 , . . . ,


tn are terms, and each of R, t1 , . . . , tn is uniquely determined.

3. A ≡ t1 = t2 where t1 and t2 are uniquely determined terms.

Proof. Exercise. 

Proposition 5.12 (Unique Readability). Every formula satisfies


one, and only one of the following conditions.

1. A is atomic.

2. A is of the form ¬B.

3. A is of the form (B ∧ C ).

4. A is of the form (B ∨ C ).

5. A is of the form (B → C ).

6. A is of the form ∀x B.

7. A is of the form ∃x B.

Moreover, in each case B, or B and C , are uniquely determined. This


means that, e.g., there are no different pairs B, C and B′, C′ so that A
is both of the form (B → C ) and (B′ → C′).

Proof. The formation rules require that if a formula is not atomic,
it must start with an opening parenthesis (, with ¬, or with a
quantifier. On the other hand, every formula that starts with one
of the following symbols must be atomic: a predicate symbol, a
function symbol, a constant symbol, ⊥.
So we really only have to show that if A is of the form (B ∗ C )
and also of the form (B′ ∗′ C′), then B ≡ B′, C ≡ C′, and ∗ = ∗′.
So suppose both A ≡ (B ∗ C ) and A ≡ (B′ ∗′ C′). Then either
B ≡ B′ or not. If it is, clearly ∗ = ∗′ and C ≡ C′, since they then
are substrings of A that begin in the same place and are of the
same length. The other case is B 6≡ B′. Since B and B′ are both
substrings of A that begin at the same place, one must be a proper
prefix of the other. But this is impossible by Lemma 5.10. 

5.5 Main operator of a Formula


It is often useful to talk about the last operator used in construct-
ing a formula A. This operator is called the main operator of A.
Intuitively, it is the “outermost” operator of A. For example, the
main operator of ¬A is ¬, the main operator of (A ∨ B) is ∨, etc.

Definition 5.13 (Main operator). The main operator of a for-


mula A is defined as follows:

1. A is atomic: A has no main operator.

2. A ≡ ¬B: the main operator of A is ¬.

3. A ≡ (B ∧ C ): the main operator of A is ∧.

4. A ≡ (B ∨ C ): the main operator of A is ∨.

5. A ≡ (B → C ): the main operator of A is →.

6. A ≡ ∀x B: the main operator of A is ∀.

7. A ≡ ∃x B: the main operator of A is ∃.

In each case, we intend the specific indicated occurrence of the


main operator in the formula. For instance, since the formula
((D → E) → (E → D)) is of the form (B → C ) where B is
(D → E) and C is (E → D), the second occurrence of → is the
main operator.
This is a recursive definition of a function which maps all non-
atomic formulas to their main operator occurrence. Because of
the way formulas are defined inductively, every formula A satis-
fies one of the cases in Definition 5.13. This guarantees that for
each non-atomic formula A a main operator exists. Because each
formula satisfies only one of these conditions, and because the
smaller formulas from which A is constructed are uniquely deter-
mined in each case, the main operator occurrence of A is unique,
and so we have defined a function.

We call formulas by the following names depending on which


symbol their main operator is:
Main operator Type of formula Example
none atomic (formula) ⊥, R(t1, . . . , tn ), t1 = t2
¬ negation ¬A
∧ conjunction (A ∧ B)
∨ disjunction (A ∨ B)
→ conditional (A → B)
∀ universal (formula) ∀x A
∃ existential (formula) ∃x A
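
Because unique readability guarantees that the main operator is
a function of the formula, it can be computed by a simple recursive
procedure. Here is a minimal Python sketch (the tuple encoding and
tag names are our own, not part of the text):

    # A hedged sketch: formulas as tagged tuples, e.g. ('imp', B, C) for
    # (B -> C), ('all', 'x', B) for "forall x B", and ('pred', 'R', (t1, ...))
    # for R(t1, ..., tn).

    def main_operator(A):
        """Return the main operator of A, or None if A is atomic (Def. 5.13)."""
        tag = A[0]
        if tag in ('bot', 'pred', 'eq'):   # atomic formulas have no main operator
            return None
        return {'not': '¬', 'and': '∧', 'or': '∨',
                'imp': '→', 'all': '∀', 'ex': '∃'}[tag]

    # ((D -> E) -> (E -> D)) is of the form (B -> C): its main operator is ->.
    D, E = ('pred', 'D', ()), ('pred', 'E', ())
    print(main_operator(('imp', ('imp', D, E), ('imp', E, D))))   # prints →

Atomic formulas fall through to None, matching clause 1 of
Definition 5.13.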

5.6 Subformulas
It is often useful to talk about the formulas that “make up” a
given formula. We call these its subformulas. Any formula counts
as a subformula of itself; a subformula of A other than A itself is
a proper subformula.

Definition 5.14 (Immediate Subformula). If A is a formula, the


immediate subformulas of A are defined inductively as follows:

1. Atomic formulas have no immediate subformulas.

2. A ≡ ¬B: The only immediate subformula of A is B.

3. A ≡ (B ∗ C ): The immediate subformulas of A are B and


C (∗ is any one of the two-place connectives).

4. A ≡ ∀x B: The only immediate subformula of A is B.

5. A ≡ ∃x B: The only immediate subformula of A is B.



Definition 5.15 (Proper Subformula). If A is a formula, the


proper subformulas of A are defined recursively as follows:

1. Atomic formulas have no proper subformulas.

2. A ≡ ¬B: The proper subformulas of A are B together with


all proper subformulas of B.

3. A ≡ (B ∗ C ): The proper subformulas of A are B, C ,


together with all proper subformulas of B and those of C .

4. A ≡ ∀x B: The proper subformulas of A are B together


with all proper subformulas of B.

5. A ≡ ∃x B: The proper subformulas of A are B together


with all proper subformulas of B.

Definition 5.16 (Subformula). The subformulas of A are A itself


together with all its proper subformulas.

Note the subtle difference in how we have defined immediate


subformulas and proper subformulas. In the first case, we have
directly defined the immediate subformulas of a formula A for
each possible form of A. It is an explicit definition by cases, and
the cases mirror the inductive definition of the set of formulas.
In the second case, we have also mirrored the way the set of all
formulas is defined, but in each case we have also included the
proper subformulas of the smaller formulas B, C in addition to
these formulas themselves. This makes the definition recursive. In
general, a definition of a function on an inductively defined set
(in our case, formulas) is recursive if the cases in the definition of
the function make use of the function itself. To be well defined,
we must make sure, however, that we only ever use the values
of the function for arguments that come “before” the one we are
defining—in our case, when defining “proper subformula” for (B ∗

C ) we only use the proper subformulas of the “earlier” formulas


B and C .
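
The contrast between the two kinds of definition is easy to see
when they are written out as procedures. The following Python
sketch (using the same assumed tuple encoding as the earlier sketch)
gives immediate subformulas by explicit cases and proper
subformulas recursively; proper_subformulas only calls itself on the
"earlier" formulas, which is what makes the recursion well defined:

    def immediate_subformulas(A):
        """Definition 5.14: an explicit definition by cases, no recursion."""
        tag = A[0]
        if tag in ('bot', 'pred', 'eq'):
            return []
        if tag == 'not':
            return [A[1]]
        if tag in ('and', 'or', 'imp'):
            return [A[1], A[2]]
        return [A[2]]                     # ('all', x, B) or ('ex', x, B)

    def proper_subformulas(A):
        """Definition 5.15: recursive, since it uses itself on smaller formulas."""
        out = []
        for B in immediate_subformulas(A):
            out.append(B)
            out.extend(proper_subformulas(B))  # only applied to earlier formulas
        return out                             # (the definition yields a set;
                                               # this list may repeat entries)

    def subformulas(A):
        """Definition 5.16: A itself together with its proper subformulas."""
        return [A] + proper_subformulas(A)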

5.7 Free Variables and Sentences

Definition 5.17 (Free occurrences of a variable). The free oc-


currences of a variable in a formula are defined inductively as
follows:

1. A is atomic: all variable occurrences in A are free.

2. A ≡ ¬B: the free variable occurrences of A are exactly


those of B.

3. A ≡ (B ∗ C ): the free variable occurrences of A are those


in B together with those in C .

4. A ≡ ∀x B: the free variable occurrences in A are all of


those in B except for occurrences of x.

5. A ≡ ∃x B: the free variable occurrences in A are all of


those in B except for occurrences of x.

Definition 5.18 (Bound Variables). An occurrence of a variable


in a formula A is bound if it is not free.

Definition 5.19 (Scope). If ∀x B is an occurrence of a subfor-


mula in a formula A, then the corresponding occurrence of B
in A is called the scope of the corresponding occurrence of ∀x.
Similarly for ∃x.
If B is the scope of a quantifier occurrence ∀x or ∃x in A,
then all occurrences of x which are free in B are said to be bound
by the mentioned quantifier occurrence.

Example 5.20. Consider the following formula:

∃v0 A²₀(v0, v1 )

Let B be the immediate subformula A²₀(v0, v1 ); B represents the
scope of ∃v0 . The quantifier binds the occurrence of v0 in B, but
does not bind the occurrence of v1 . So v1 is a free variable in this
case.
We can now see how this might work in a more complicated
formula A:

∀v0 (A¹₀(v0 ) → A²₀(v0, v1 )) → ∃v1 (A²₁(v0, v1 ) ∨ ∀v0 ¬A¹₁(v0 ))

Here, B is the subformula (A¹₀(v0 ) → A²₀(v0, v1 )), C is the sub-
formula (A²₁(v0, v1 ) ∨ ∀v0 ¬A¹₁(v0 )), and D is ¬A¹₁(v0 ). B is the
scope of the first ∀v0 , C is the scope of ∃v1 , and D is the scope of
the second ∀v0 . The first ∀v0 binds the occurrences of v0
in B, ∃v1 the occurrence of v1 in C , and the second ∀v0 binds the
occurrence of v0 in D. The first occurrence of v1 and the fourth
occurrence of v0 are free in A. The last occurrence of v0 is free
in D, but bound in C and A.

Definition 5.21 (Sentence). A formula A is a sentence iff it con-


tains no free occurrences of variables.
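
These clauses translate directly into a recursive procedure. Here
is a hedged Python sketch (same assumed encoding as the earlier
sketches) which computes the set of variables that have free
occurrences in a formula; the quantifier clauses subtract the bound
variable, mirroring clauses 4 and 5 of Definition 5.17:

    # Terms: ('var', x), ('const', c), or ('func', f, (t1, ..., tn)).

    def term_vars(t):
        """Variables occurring in a term (in a term, every occurrence is free)."""
        if t[0] == 'var':
            return {t[1]}
        if t[0] == 'const':
            return set()
        return set().union(*(term_vars(u) for u in t[2]))

    def free_vars(A):
        """Variables with at least one free occurrence in A (Definition 5.17)."""
        tag = A[0]
        if tag == 'bot':
            return set()
        if tag == 'pred':
            return set().union(*(term_vars(u) for u in A[2]))
        if tag == 'eq':
            return term_vars(A[1]) | term_vars(A[2])
        if tag == 'not':
            return free_vars(A[1])
        if tag in ('and', 'or', 'imp'):
            return free_vars(A[1]) | free_vars(A[2])
        return free_vars(A[2]) - {A[1]}   # quantifiers bind their variable

    def is_sentence(A):
        """Definition 5.21: a sentence has no free variable occurrences."""
        return not free_vars(A)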

5.8 Substitution

Definition 5.22 (Substitution in a term). We define s [t/x], the


result of substituting t for every occurrence of x in s , recursively:

1. s ≡ c : s [t/x] is just s .

2. s ≡ y: s [t/x] is also just s , provided y is a variable other


than x.

3. s ≡ x: s [t/x] is t .

4. s ≡ f (t1, . . . , tn ): s [t/x] is f (t1 [t/x], . . . , tn [t/x]).

Definition 5.23. A term t is free for x in A if none of the free


occurrences of x in A occur in the scope of a quantifier that binds
a variable in t .

Definition 5.24 (Substitution in a formula). If A is a formula, x


is a variable, and t is a term free for x in A, then A[t/x] is the
result of substituting t for all free occurrences of x in A.

1. A ≡ P (t1, . . . , tn ): A[t/x] is P (t1 [t/x], . . . , tn [t/x]).

2. A ≡ t1 = t2 : A[t/x] is t1 [t/x] = t2 [t/x].

3. A ≡ ¬B: A[t/x] is ¬B[t/x].

4. A ≡ (B ∧ C ): A[t/x] is (B[t/x] ∧ C [t/x]).

5. A ≡ (B ∨ C ): A[t/x] is (B[t/x] ∨ C [t/x]).

6. A ≡ (B → C ): A[t/x] is (B[t/x] → C [t/x]).

7. A ≡ ∀y B: A[t/x] is ∀y B[t/x], provided y is a variable


other than x; otherwise A[t/x] is just A.

8. A ≡ ∃y B: A[t/x] is ∃y B[t/x], provided y is a variable


other than x; otherwise A[t/x] is just A.

Note that substitution may be vacuous: If x does not occur in


A at all, then A[t/x] is just A.
The restriction that t must be free for x in A is necessary
to exclude cases like the following. If A ≡ ∃y x < y and t ≡ y,
then A[t/x] would be ∃y y < y. In this case the free variable y
is “captured” by the quantifier ∃y upon substitution, and that is
undesirable. For instance, we would like it to be the case that
whenever ∀x B holds, so does B[t/x]. But consider ∀x ∃y x < y

(here B is ∃y x < y). It is a sentence that is true about, e.g., the


natural numbers: for every number x there is a number y greater
than it. If we allowed y as a possible substitution for x, we would
end up with B[y/x] ≡ ∃y y < y, which is false. We prevent this by
requiring that none of the free variables in t would end up being
bound by a quantifier in A.
We often use the following convention to avoid cumbersome
notation: If A is a formula with a free variable x, we write A(x)
to indicate this. When it is clear which A and x we have in mind,
and t is a term (assumed to be free for x in A(x)), then we write
A(t ) as short for A(x)[t/x].
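
Substitution, too, is a recursive definition on terms and formulas,
and can be sketched in code. The following Python sketch (our own
encoding again) assumes, but does not check, that t is free for x
in A; the quantifier clauses show where substitution becomes
vacuous:

    def subst_term(s, t, x):
        """s[t/x]: substitute term t for every occurrence of variable x in s."""
        if s[0] == 'var':
            return t if s[1] == x else s
        if s[0] == 'const':
            return s
        return ('func', s[1], tuple(subst_term(u, t, x) for u in s[2]))

    def subst(A, t, x):
        """A[t/x], assuming t is free for x in A (Definition 5.24; unchecked)."""
        tag = A[0]
        if tag == 'bot':
            return A
        if tag == 'pred':
            return ('pred', A[1], tuple(subst_term(u, t, x) for u in A[2]))
        if tag == 'eq':
            return ('eq', subst_term(A[1], t, x), subst_term(A[2], t, x))
        if tag == 'not':
            return ('not', subst(A[1], t, x))
        if tag in ('and', 'or', 'imp'):
            return (tag, subst(A[1], t, x), subst(A[2], t, x))
        y, B = A[1], A[2]                # ('all', y, B) or ('ex', y, B)
        if y == x:                       # x is bound here: substitution is vacuous
            return A
        return (tag, y, subst(B, t, x))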

5.9 Structures for First-order Languages


First-order languages are, by themselves, uninterpreted: the con-
stant symbols, function symbols, and predicate symbols have no
specific meaning attached to them. Meanings are given by spec-
ifying a structure. It specifies the domain, i.e., the objects which
the constant symbols pick out, the function symbols operate on,
and the quantifiers range over. In addition, it specifies which
constant symbols pick out which objects, how a function symbol
maps objects to objects, and which objects the predicate symbols
apply to. Structures are the basis for semantic notions in logic,
e.g., the notion of consequence, validity, satisfiability. They are
variously called “structures,” “interpretations,” or “models” in
the literature.

Definition 5.25 (Structures). A structure M, for a language L of


first-order logic consists of the following elements:

1. Domain: a non-empty set, |M|

2. Interpretation of constant symbols: for each constant symbol c


of L, an element c M ∈ |M|

3. Interpretation of predicate symbols: for each n-place predicate


symbol R of L (other than =), an n-place relation R M ⊆
|M|n

4. Interpretation of function symbols: for each n-place function


symbol f of L, an n-place function f M : |M|n → |M|

Example 5.26. A structure M for the language of arithmetic
consists of a set, an element 0M of |M| as interpretation of the
constant symbol 0, a one-place function ′M : |M| → |M| as inter-
pretation of the successor symbol ′, two two-place functions +M
and ×M , both |M|2 → |M|, and a two-place relation <M ⊆ |M|2 .
An obvious example of such a structure is the following:

1. |N| = N

2. 0N = 0

3. ′N (n) = n + 1 for all n ∈ N

4. +N (n, m) = n + m for all n, m ∈ N

5. ×N (n, m) = n · m for all n, m ∈ N

6. <N = {⟨n, m⟩ : n ∈ N, m ∈ N, n < m}

The structure N for LA so defined is called the standard model of


arithmetic, because it interprets the non-logical constants of LA
exactly how you would expect.
However, there are many other possible structures for LA . For
instance, we might take as the domain the set Z of integers instead
of N, and define the interpretations of 0, ′, +, ×, < accordingly.
But we can also define structures for LA which have nothing even
remotely to do with numbers.

Example 5.27. A structure M for the language LZ of set theory


requires just a set and a single two-place relation. So technically,
e.g., the set of people plus the relation “x is older than y” could

be used as a structure for LZ , as well as N together with n ≥ m


for n, m ∈ N.
A particularly interesting structure for LZ in which the ele-
ments of the domain are actually sets, and the interpretation of
∈ actually is the relation “x is an element of y” is the structure
HF of hereditarily finite sets:

1. |HF| = ∅ ∪ ℘(∅) ∪ ℘(℘(∅)) ∪ ℘(℘(℘(∅))) ∪ . . . ;

2. ∈HF = {⟨x, y⟩ : x, y ∈ |HF| , x ∈ y }.

The stipulations we make as to what counts as a structure


impact our logic. For example, the choice to prevent empty do-
mains ensures, given the usual account of satisfaction (or truth)
for quantified sentences, that ∃x (A(x)∨¬A(x)) is valid—that is, a
logical truth. And the stipulation that all constant symbols must
refer to an object in the domain ensures that existential gener-
alization is a sound pattern of inference: A(a), therefore ∃x A(x).
If we allowed names to refer outside the domain, or to not refer,
then we would be on our way to a free logic, in which existential
generalization requires an additional premise: A(a) and ∃x x = a,
therefore ∃x A(x).

5.10 Covered Structures for First-order


Languages
Recall that a term is closed if it contains no variables.

Definition 5.28 (Value of closed terms). If t is a closed term of


the language L and M is a structure for L, the value ValM (t ) is
defined as follows:

1. If t is just the constant symbol c , then ValM (c ) = c M .



2. If t is of the form f (t1, . . . , tn ), then
ValM (t ) = f M (ValM (t1 ), . . . , ValM (tn )).

Definition 5.29 (Covered structure). A structure is covered if ev-


ery element of the domain is the value of some closed term.

Example 5.30. Let L be the language with constant symbols
zero, one, two, . . . , the binary predicate symbol <, and the
binary function symbols + and ×. Then a structure M for L is
the one with domain |M| = {0, 1, 2, . . .} and assignments zeroM = 0,
oneM = 1, twoM = 2, and so forth. For the binary relation symbol
<, the set <M is the set of all pairs ⟨c 1, c 2 ⟩ ∈ |M|2 such
that c 1 is less than c 2 : for example, ⟨1, 3⟩ ∈ <M but ⟨2, 2⟩ ∉ <M .
For the binary function symbol +, define +M in the usual way—for
example, +M (2, 3) maps to 5, and similarly for the binary function
symbol ×. Hence, the value of four is just 4, and the value of
×(two, +(three, zero)) (or in infix notation, (two × (three + zero)))
is

ValM (×(two, +(three, zero))) =
= ×M (ValM (two), ValM (+(three, zero)))
= ×M (ValM (two), +M (ValM (three), ValM (zero)))
= ×M (twoM , +M (threeM , zeroM ))
= ×M (2, +M (3, 0))
= ×M (2, 3)
= 6
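
A computation like the one above can be carried out mechanically.
Here is a minimal Python sketch (the dictionary layout and names
are ours, not the text's); val follows Definition 5.28 exactly:

    # Closed terms: ('const', name) or ('func', name, (t1, ..., tn)).

    M = {
        'const': {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4},
        'func':  {'+': lambda a, b: a + b, '×': lambda a, b: a * b},
    }

    def val(M, t):
        """Val^M(t) for a closed term t (Definition 5.28)."""
        if t[0] == 'const':
            return M['const'][t[1]]
        f = M['func'][t[1]]
        return f(*(val(M, u) for u in t[2]))

    t = ('func', '×', (('const', 'two'),
                       ('func', '+', (('const', 'three'),
                                      ('const', 'zero')))))
    print(val(M, t))   # 6, as computed above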

5.11 Satisfaction of a Formula in a


Structure
The basic notions that relate expressions such as terms and for-
mulas, on the one hand, and structures on the other, are those

of value of a term and satisfaction of a formula. Informally, the


value of a term is an element of a structure—if the term is just a
constant, its value is the object assigned to the constant by the
structure, and if it is built up using function symbols, the value is
computed from the values of constants and the functions assigned
to the functions in the term. A formula is satisfied in a structure
if the interpretation given to the predicates makes the formula
true in the domain of the structure. This notion of satisfaction
is specified inductively: the specification of the structure directly
states when atomic formulas are satisfied, and we define when a
complex formula is satisfied depending on the main connective
or quantifier and whether or not the immediate subformulas are
satisfied. The case of the quantifiers here is a bit tricky, as the
immediate subformula of a quantified formula has a free variable,
and structures don’t specify the values of variables. In order to
deal with this difficulty, we also introduce variable assignments and
define satisfaction not with respect to a structure alone, but with
respect to a structure plus a variable assignment.

Definition 5.31 (Variable Assignment). A variable assignment s


for a structure M is a function which maps each variable to an
element of |M|, i.e., s : Var → |M|.

A structure assigns a value to each constant symbol, and a


variable assignment to each variable. But we want to use terms
built up from them to also name elements of the domain. For
this we define the value of terms inductively. For constant sym-
bols and variables the value is just as the structure or the variable
assignment specifies it; for more complex terms it is computed re-
cursively using the functions the structure assigns to the function
symbols.

Definition 5.32 (Value of Terms). If t is a term of the lan-


guage L, M is a structure for L, and s is a variable assignment
for M, the value ValsM (t ) is defined as follows:

1. t ≡ c : ValsM (t ) = c M .

2. t ≡ x: ValsM (t ) = s (x).

3. t ≡ f (t1, . . . , tn ):
ValsM (t ) = f M (ValsM (t1 ), . . . , ValsM (tn )).

Definition 5.33 (x-Variant). If s is a variable assignment for a
structure M, then any variable assignment s′ for M which differs
from s at most in what it assigns to x is called an x-variant of s .
If s′ is an x-variant of s we write s ∼x s′.

Note that an x-variant of an assignment s does not have to


assign something different to x. In fact, every assignment counts
as an x-variant of itself.

Definition 5.34 (Satisfaction). Satisfaction of a formula A in


a structure M relative to a variable assignment s , in symbols:
M, s |= A, is defined recursively as follows. (We write M, s 6|= A to
mean “not M, s |= A.”)

1. A ≡ ⊥: not M, s |= A.

2. A ≡ R(t1, . . . , tn ): M, s |= A iff ⟨ValsM (t1 ), . . . , ValsM (tn )⟩ ∈
R M.

3. A ≡ t1 = t2 : M, s |= A iff ValsM (t1 ) = ValsM (t2 ).

4. A ≡ ¬B: M, s |= A iff M, s 6|= B.

5. A ≡ (B ∧ C ): M, s |= A iff M, s |= B and M, s |= C .

6. A ≡ (B ∨ C ): M, s |= A iff M, s |= B or M, s |= C (or both).

7. A ≡ (B → C ): M, s |= A iff M, s 6|= B or M, s |= C (or both).

8. A ≡ ∀x B: M, s |= A iff for every x-variant s′ of s , M, s′ |= B.

9. A ≡ ∃x B: M, s |= A iff there is an x-variant s′ of s so that
M, s′ |= B.

The variable assignments are important in the last two clauses.


We cannot define satisfaction of ∀x B(x) by “for all a ∈ |M|,
M |= B(a).” We cannot define satisfaction of ∃x B(x) by “for
at least one a ∈ |M|, M |= B(a).” The reason is that a is not a
symbol of the language, and so B(a) is not a formula (that is,
B[a/x] is undefined). We also cannot assume that we have con-
stant symbols or terms available that name every element of M,
since there is nothing in the definition of structures that requires
it. Even in the standard language the set of constant symbols
is countably infinite, so if |M| is not countable there aren’t even
enough constant symbols to name every object.
A variable assignment s provides a value for every variable
in the language. This is of course not necessary: whether or
not a formula A is satisfied in a structure with respect to s only
depends on the assignments s makes to the free variables that
actually occur in A. This is the content of the next theorem.
We require variable assignments to assign values to all variables
simply because it makes things a lot easier.
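
When the domain of M is finite, the satisfaction clauses can be
evaluated directly: the quantifier clauses loop over all relevant
x-variants of s by reassigning x. The following hedged Python
sketch (encodings as in the earlier sketches) is meant only to make
the recursion visible, not to be an efficient model checker:

    def val(M, s, t):
        """Val^M_s(t): value of term t in M under assignment s (Def. 5.32)."""
        if t[0] == 'var':
            return s[t[1]]
        if t[0] == 'const':
            return M['const'][t[1]]
        return M['func'][t[1]](*(val(M, s, u) for u in t[2]))

    def sat(M, s, A):
        """M, s |= A, by recursion on the structure of A (Definition 5.34)."""
        tag = A[0]
        if tag == 'bot':
            return False
        if tag == 'pred':
            return tuple(val(M, s, u) for u in A[2]) in M['pred'][A[1]]
        if tag == 'eq':
            return val(M, s, A[1]) == val(M, s, A[2])
        if tag == 'not':
            return not sat(M, s, A[1])
        if tag == 'and':
            return sat(M, s, A[1]) and sat(M, s, A[2])
        if tag == 'or':
            return sat(M, s, A[1]) or sat(M, s, A[2])
        if tag == 'imp':
            return (not sat(M, s, A[1])) or sat(M, s, A[2])
        x, B = A[1], A[2]
        if tag == 'all':                  # every x-variant of s must satisfy B
            return all(sat(M, {**s, x: a}, B) for a in M['domain'])
        return any(sat(M, {**s, x: a}, B) for a in M['domain'])

    # Example: in this 3-element order, some element has nothing below it.
    M = {'domain': {1, 2, 3}, 'const': {}, 'func': {},
         'pred': {'<': {(1, 2), (1, 3), (2, 3)}}}
    A = ('ex', 'x', ('all', 'y',
         ('not', ('pred', '<', (('var', 'y'), ('var', 'x'))))))
    print(sat(M, {}, A))   # True

Passing the empty assignment in the example is a shortcut that
works because A is a sentence; officially an assignment gives a
value to every variable.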

Proposition 5.35. If the variables in a term t are among x 1 , . . . , x n ,


and s 1 (x i ) = s 2 (x i ) for i = 1, . . . , n, then ValsM1 (t ) = ValsM2 (t ).

Proof. By induction on the complexity of t . For the base case, t
can be a constant symbol or one of the variables x 1 , . . . , x n .
If t = c , then ValsM1 (t ) = c M = ValsM2 (t ). If t = x i , then ValsM1 (t ) =
s 1 (x i ) = s 2 (x i ) (by the hypothesis of the proposition) = ValsM2 (t ).
For the inductive step, assume that t = f (t1, . . . , tk ) for some
terms t1 , . . . , tk , and that the claim holds for t1 , . . . , tk . Then

ValsM1 (t ) = ValsM1 (f (t1, . . . , tk )) = f M (ValsM1 (t1 ), . . . , ValsM1 (tk )).

For i = 1, . . . , k , the variables of ti are among x 1 , . . . , x n . So by
induction hypothesis, ValsM1 (ti ) = ValsM2 (ti ). So

f M (ValsM1 (t1 ), . . . , ValsM1 (tk )) = f M (ValsM2 (t1 ), . . . , ValsM2 (tk ))
= ValsM2 (f (t1, . . . , tk )) = ValsM2 (t ). 

Proposition 5.36. If the free variables in A are among x 1 , . . . , x n ,


and s 1 (x i ) = s 2 (x i ) for i = 1, . . . , n, then M, s 1 |= A iff M, s 2 |= A.

Proof. We use induction on the complexity of A. For the base


case, where A is atomic, A can be: ⊥, R(t1, . . . , tk ) for a k -place
predicate R and terms t1 , . . . , tk , or t1 = t2 for terms t1 and t2 .

1. A ≡ ⊥: both M, s 1 6|= A and M, s 2 6|= A.

2. A ≡ R(t1, . . . , tk ): let M, s 1 |= A. Then

⟨ValsM1 (t1 ), . . . , ValsM1 (tk )⟩ ∈ R M .

For i = 1, . . . , k , ValsM1 (ti ) = ValsM2 (ti ) by Proposition 5.35.
So we also have ⟨ValsM2 (t1 ), . . . , ValsM2 (tk )⟩ ∈ R M , i.e.,
M, s 2 |= A.

3. A ≡ t1 = t2 : if M, s 1 |= A, ValsM2 (t1 ) = ValsM1 (t1 ) (by Proposi-


tion 5.35) = ValsM1 (t2 ) (since M, s1 |= t1 = t2 ) = ValsM2 (t2 ) (by
Proposition 5.35), so M, s 2 |= t1 = t2 .

Now assume M, s 1 |= B iff M, s 2 |= B for all formulas B less


complex than A. The induction step proceeds by cases deter-
mined by the main operator of A. In each case, we only demon-
strate the forward direction of the biconditional; the proof of the
reverse direction is symmetrical.

1. A ≡ ¬B: if M, s1 |= A, then M, s 1 6|= B, so by the induction


hypothesis, M, s 2 6|= B, hence M, s 2 |= A.

2. A ≡ B ∧ C : exercise.

3. A ≡ B ∨ C : if M, s1 |= A, then M, s 1 |= B or M, s 1 |= C . By
induction hypothesis, M, s2 |= B or M, s 2 |= C , so M, s2 |= A.

4. A ≡ B → C : exercise.

5. A ≡ ∃x B: if M, s1 |= A, there is an x-variant s¯1 of s1 so that


M, s¯1 |= B. Let s¯2 denote the x-variant of s2 that assigns
the same thing to x as does s¯1 . The free variables of B are
among x 1 , . . . , x n , and x. s¯1 (x i ) = s¯2 (x i ), since s¯1 and s¯2
are x-variants of s1 and s2 , respectively, and by hypothesis
s 1 (x i ) = s 2 (x i ). s¯1 (x) = s¯2 (x) by the way we have defined s¯2 .
Then the induction hypothesis applies to B and s¯1 , s¯2 , so
M, s¯2 |= B. Hence, there is an x-variant of s2 that satisfies
B, so M, s 2 |= A.

6. A ≡ ∀x B: exercise.

By induction, we get that M, s 1 |= A iff M, s 2 |= A whenever the
free variables in A are among x 1 , . . . , x n and s 1 (x i ) = s 2 (x i ) for
i = 1, . . . , n. 

Definition 5.37. If A is a sentence, we say that a structure M


satisfies A, M |= A, iff M, s |= A for all variable assignments s .
If M |= A, we also say that A is true in M.

Proposition 5.38. Suppose A(x) only contains x free, and M is a


structure. Then:

1. M |= ∃x A(x) iff M, s |= A(x) for at least one variable assign-


ment s .

2. M |= ∀x A(x) iff M, s |= A(x) for all variable assignments s .

Proof. Exercise. 

5.12 Extensionality
Extensionality, sometimes called relevance, can be expressed in-
formally as follows: the only thing that bears upon the satisfaction
of formula A in a structure M relative to a variable assignment s ,
are the assignments made by M and s to the elements of the
language that actually appear in A.
One immediate consequence of extensionality is that where
two structures M and M′ agree on all the elements of the lan-
guage appearing in a sentence A and have the same domain, M
and M′ must also agree on A itself.

Proposition 5.39 (Extensionality). Let A be a sentence, and M and
M′ be structures. If c M = c M′ , R M = R M′ , and f M = f M′ for every
constant symbol c , relation symbol R, and function symbol f occurring
in A, then M |= A iff M′ |= A.

Moreover, the value of a term, and whether or not a structure


satisfies a formula, only depends on the values of its subterms.

Proposition 5.40. Let M be a structure, t and t′ terms, and s a
variable assignment. Let s′ ∼x s be the x-variant of s given by s′(x) =
ValsM (t′). Then ValsM (t [t′/x]) = Vals′M (t ).

Proof. By induction on t .

1. If t is a constant, say, t ≡ c , then t [t′/x] = c , and ValsM (c ) =
c M = Vals′M (c ).

2. If t is a variable other than x, say, t ≡ y, then t [t′/x] = y,
and ValsM (y) = Vals′M (y) since s′ ∼x s .

3. If t ≡ x, then t [t′/x] = t′. But Vals′M (x) = ValsM (t′) by
definition of s′.

4. If t ≡ f (t1, . . . , tn ) then we have:

ValsM (t [t′/x])
= ValsM (f (t1 [t′/x], . . . , tn [t′/x])) by definition of t [t′/x]
= f M (ValsM (t1 [t′/x]), . . . , ValsM (tn [t′/x])) by definition of ValsM (f (. . . ))
= f M (Vals′M (t1 ), . . . , Vals′M (tn )) by induction hypothesis
= Vals′M (t ) by definition of Vals′M (f (. . . )) 

Proposition 5.41. Let M be a structure, A a formula, t a term,
and s a variable assignment. Let s′ ∼x s be the x-variant of s given
by s′(x) = ValsM (t ). Then M, s |= A[t/x] iff M, s′ |= A.

Proof. Exercise. 

5.13 Semantic Notions


Given the definition of structures for first-order languages, we can
define some basic semantic properties of and relationships be-
tween sentences. The simplest of these is the notion of validity

of a sentence. A sentence is valid if it is satisfied in every struc-


ture. Valid sentences are those that are satisfied regardless of how
the non-logical symbols in them are interpreted. Valid sentences are
therefore also called logical truths—they are true, i.e., satisfied, in
any structure and hence their truth depends only on the logical
symbols occurring in them and their syntactic structure, but not
on the non-logical symbols or their interpretation.

Definition 5.42 (Validity). A sentence A is valid, |= A, iff M |= A
for every structure M.

Definition 5.43 (Entailment). A set of sentences Γ entails a sen-
tence A, Γ |= A, iff for every structure M with M |= Γ, M |= A.

Definition 5.44 (Satisfiability). A set of sentences Γ is satisfiable


if M |= Γ for some structure M. If Γ is not satisfiable it is called
unsatisfiable.

Proposition 5.45. A sentence A is valid iff Γ |= A for every set of
sentences Γ.

Proof. For the forward direction, let A be valid, and let Γ be a
set of sentences. Let M be a structure so that M |= Γ. Since A is
valid, M |= A, hence Γ |= A.
For the contrapositive of the reverse direction, let A be in-
valid, so there is a structure M with M 6|= A. When Γ = {⊤},
since ⊤ is valid, M |= Γ. Hence, there is a structure M so that
M |= Γ but M 6|= A, hence Γ does not entail A. 

Proposition 5.46. Γ |= A iff Γ ∪ {¬A} is unsatisfiable.

Proof. For the forward direction, suppose Γ |= A and suppose to
the contrary that there is a structure M so that M |= Γ ∪ {¬A}.
Since M |= Γ and Γ |= A, M |= A. Also, since M |= Γ ∪ {¬A}, M |=
¬A, so we have both M |= A and M 6|= A, a contradiction. Hence,
there can be no such structure M, so Γ ∪ {¬A} is unsatisfiable.
For the reverse direction, suppose Γ ∪ {¬A} is unsatisfiable.
So for every structure M, either M 6|= Γ or M |= A. Hence, for
every structure M with M |= Γ, M |= A, so Γ |= A. 

Proposition 5.47. If Γ ⊆ Γ′ and Γ |= A, then Γ′ |= A.

Proof. Suppose that Γ ⊆ Γ′ and Γ |= A. Let M be such that
M |= Γ′; then M |= Γ, and since Γ |= A, we get that M |= A.
Hence, whenever M |= Γ′, M |= A, so Γ′ |= A. 

Theorem 5.48 (Semantic Deduction Theorem). Γ ∪ {A} |= B iff
Γ |= A → B.

Proof. For the forward direction, let Γ ∪ {A} |= B and let M be a
structure so that M |= Γ. If M |= A, then M |= Γ ∪ {A}, so since
Γ ∪ {A} entails B, we get M |= B. Therefore, M |= A → B, so
Γ |= A → B.
For the reverse direction, let Γ |= A → B and M be a structure
so that M |= Γ ∪ {A}. Then M |= Γ, so M |= A → B, and since
M |= A, M |= B. Hence, whenever M |= Γ ∪ {A}, M |= B, so
Γ ∪ {A} |= B. 

Summary
A first-order language consists of constant, function, and pred-
icate symbols. Function and constant symbols take a specified
number of arguments. In the language of arithmetic, e.g., we

have a single constant symbol 0, one 1-place function symbol ′,


two 2-place function symbols + and ×, and one 2-place predicate
symbol <. From variables and constant and function symbols
we form the terms of a language. From the terms of a language
together with its predicate symbols, as well as the identity sym-
bol =, we form the atomic formulas. And in turn from them,
using the logical connectives ¬, ∨, ∧, →, ↔ and the quantifiers ∀
and ∃ we form its formulas. Since we are careful to always include
necessary parentheses in the process of forming terms and formu-
las, there is always exactly one way of reading a formula. This
makes it possible to define things by induction on the structure
of formulas.
Occurrences of variables in formulas are sometimes governed
by a corresponding quantifier: if a variable occurs in the scope
of a quantifier it is considered bound, otherwise free. These
concepts all have inductive definitions, and we also inductively
define the operation of substitution of a term for a variable in
a formula. Formulas without free variable occurrences are called
sentences.
The semantics for a first-order language is given by a struc-
ture for that language. It consists of a domain and elements
of that domain are assigned to each constant symbol. Function
symbols are interpreted by functions and relation symbols by re-
lations on the domain. A function from the set of variables to the
domain is a variable assignment. The relation of satisfaction
relates structures, variable assignments, and formulas; M, s |= A
is defined by induction on the structure of A. M, s |= A only
depends on the interpretation of the symbols actually occurring
in A, and in particular does not depend on s if A contains no free
variables. So if A is a sentence, M |= A iff M, s |= A for any (or
all) s .
The satisfaction relation is the basis for all semantic notions.
A sentence is valid, |= A, if it is satisfied in every structure. A
sentence A is entailed by a set of sentences Γ, Γ |= A, iff M |= A
for all M which satisfy every sentence in Γ. A set Γ is satisfiable
iff there is some structure that satisfies every sentence in Γ, oth-

erwise unsatisfiable. These notions are interrelated, e.g., Γ |= A


iff Γ ∪ {¬A} is unsatisfiable.

Problems
Problem 5.1. Prove Lemma 5.10.

Problem 5.2. Prove Proposition 5.11 (Hint: Formulate and prove


a version of Lemma 5.10 for terms.)

Problem 5.3. Give an inductive definition of the bound variable


occurrences along the lines of Definition 5.17.

Problem 5.4. Is N, the standard model of arithmetic, covered?


Explain.

Problem 5.5. Let L = {c, f , A} with one constant symbol, one


one-place function symbol and one two-place predicate symbol,
and let the structure M be given by

1. |M| = {1, 2, 3}

2. c M = 3

3. f M (1) = 2, f M (2) = 3, f M (3) = 2

4. AM = {⟨1, 2⟩, ⟨2, 3⟩, ⟨3, 3⟩}

(a) Let s (v ) = 1 for all variables v . Find out whether

M, s |= ∃x (A(f (z ), c ) → ∀y (A(y, x) ∨ A(f (y), x)))

Explain why or why not.


(b) Give a different structure and variable assignment in which
the formula is not satisfied.

Problem 5.6. Complete the proof of Proposition 5.36.

Problem 5.7. Show that if A is a sentence, M |= A iff there is a


variable assignment s so that M, s |= A.

Problem 5.8. Prove Proposition 5.38.


Problem 5.9. Suppose L is a language without function sym-
bols. Given a structure M and a ∈ |M|, define M[a/c ] to be
the structure that is just like M, except that c M[a/c ] = a. Define
M ||= A for sentences A by:
1. A ≡ ⊥: not M ||= A.
2. A ≡ R(d1, . . . , dn ): M ||= A iff ⟨d1M, . . . , dnM ⟩ ∈ R M .
3. A ≡ d1 = d2 : M ||= A iff d1M = d2M .
4. A ≡ ¬B: M ||= A iff not M ||= B.
5. A ≡ (B ∧ C ): M ||= A iff M ||= B and M ||= C .
6. A ≡ (B ∨ C ): M ||= A iff M ||= B or M ||= C (or both).
7. A ≡ (B → C ): M ||= A iff not M ||= B or M ||= C (or both).
8. A ≡ ∀x B: M ||= A iff for all a ∈ |M|, M[a/c ] ||= B[c /x], if
c does not occur in B.
9. A ≡ ∃x B: M ||= A iff there is an a ∈ |M| such that
M[a/c ] ||= B[c /x], if c does not occur in B.
Let x 1 , . . . , x n be all free variables in A, c 1 , . . . , c n constant sym-
bols not in A, a1 , . . . , an ∈ |M|, and s (x i ) = ai .
Show that M, s |= A iff M[a 1 /c 1, . . . , an /c n ] ||= A[c 1 /x 1 ] . . . [c n /x n ].
Problem 5.10. Suppose that f is a function symbol not in A(x, y).
Show that there is an M such that M |= ∀x ∃y A(x, y) iff there is an M′
such that M′ |= ∀x A(x, f (x)).
Problem 5.11. Prove Proposition 5.41.

Problem 5.12. 1. Show that Γ |= ⊥ iff Γ is unsatisfiable.

2. Show that Γ ∪ {A} |= ⊥ iff Γ |= ¬A.

3. Suppose c does not occur in A or Γ. Show that Γ |= ∀x A
iff Γ |= A[c /x].
CHAPTER 6

Theories and Their Models
6.1 Introduction
The development of the axiomatic method is a significant achieve-
ment in the history of science, and is of special importance in the
history of mathematics. An axiomatic development of a field in-
volves the clarification of many questions: What is the field about?
What are the most fundamental concepts? How are they related?
Can all the concepts of the field be defined in terms of these
fundamental concepts? What laws do, and must, these concepts
obey?
The axiomatic method and logic were made for each other.
Formal logic provides the tools for formulating axiomatic theo-
ries, for proving theorems from the axioms of the theory in a
precisely specified way, for studying the properties of all systems
satisfying the axioms in a systematic way.


Definition 6.1. A set of sentences Γ is closed iff, whenever Γ |= A,
then A ∈ Γ. The closure of a set of sentences Γ is {A : Γ |= A}.
We say that Γ is axiomatized by a set of sentences ∆ if Γ is the
closure of ∆.

We can think of an axiomatic theory as the set of sentences


that is axiomatized by its set of axioms ∆. In other words, when
we have a first-order language which contains non-logical sym-
bols for the primitives of the axiomatically developed science we
wish to study, together with a set of sentences that express the
fundamental laws of the science, we can think of the theory as
represented by all the sentences in this language that are entailed
by the axioms. This ranges from simple examples with only a
single primitive and simple axioms, such as the theory of partial
orders, to complex theories such as Newtonian mechanics.
The important logical facts that make this formal approach
to the axiomatic method so important are the following. Suppose
Γ is an axiom system for a theory, i.e., a set of sentences.

1. We can state precisely when an axiom system captures an


intended class of structures. That is, if we are interested
in a certain class of structures, we will successfully capture
that class by an axiom system Γ iff the structures are exactly
those M such that M |= Γ.

2. We may fail in this respect because there are M such that


M |= Γ, but M is not one of the structures we intend. This
may lead us to add axioms which are not true in M.

3. If we are successful at least in the respect that Γ is true


in all the intended structures, then a sentence A is true in
all intended structures whenever Γ |= A. Thus we can use
logical tools (such as proof methods) to show that sentences
are true in all intended structures simply by showing that
they are entailed by the axioms.

4. Sometimes we don’t have intended structures in mind, but


instead start from the axioms themselves: we begin with
some primitives that we want to satisfy certain laws which
we codify in an axiom system. One thing that we would
like to verify right away is that the axioms do not contradict
each other: if they do, there can be no concepts that obey
these laws, and we have tried to set up an incoherent theory.
We can verify that this doesn’t happen by finding a model
of Γ. And if there are models of our theory, we can use
logical methods to investigate them, and we can also use
logical methods to construct models.

5. The independence of the axioms is likewise an important


question. It may happen that one of the axioms is actu-
ally a consequence of the others, and so is redundant. We
can prove that an axiom A in Γ is redundant by proving
Γ \ {A} |= A. We can also prove that an axiom is not redun-
dant by showing that (Γ \ {A}) ∪ {¬A} is satisfiable. For
instance, this is how it was shown that the parallel postulate
is independent of the other axioms of geometry.

6. Another important question is that of definability of con-


cepts in a theory: The choice of the language determines
what the models of a theory consist of. But not every
aspect of a theory must be represented separately in its
models. For instance, every ordering ≤ determines a corre-
sponding strict ordering <—given one, we can define the
other. So it is not necessary that a model of a theory in-
volving such an order must also contain the corresponding
strict ordering. When is it the case, in general, that one
relation can be defined in terms of others? When is it im-
possible to define a relation in terms of other (and hence
must add it to the primitives of the language)?

6.2 Expressing Properties of Structures


It is often useful and important to express conditions on func-
tions and relations, or more generally, that the functions and re-
lations in a structure satisfy these conditions. For instance, we
would like to have ways of distinguishing those structures for a
language which “capture” what we want the predicate symbols
to “mean” from those that do not. Of course we’re completely
free to specify which structures we “intend,” e.g., we can specify
that the interpretation of the predicate symbol ≤ must be an or-
dering, or that we are only interested in interpretations of L in
which the domain consists of sets and ∈ is interpreted by the “is
an element of” relation. But can we do this with sentences of the
language? In other words, which conditions on a structure M can
we express by a sentence (or perhaps a set of sentences) in the
language of M? There are some conditions that we will not be
able to express. For instance, there is no sentence of LA which is
only true in a structure M if |M| = N. We cannot express “the do-
main contains only natural numbers.” But there are “structural
properties” of structures that we perhaps can express. Which
properties of structures can we express by sentences? Or, to put
it another way, which collections of structures can we describe as
those making a sentence (or set of sentences) true?

Definition 6.2 (Model of a set). Let Γ be a set of sentences in a


language L. We say that a structure M is a model of Γ if M |= A
for all A ∈ Γ.

Example 6.3. The sentence ∀x x ≤ x is true in M iff ≤M is a


reflexive relation. The sentence ∀x ∀y ((x ≤ y ∧ y ≤ x) → x = y) is
true in M iff ≤M is anti-symmetric. The sentence ∀x ∀y ∀z ((x ≤
y ∧ y ≤ z ) → x ≤ z ) is true in M iff ≤M is transitive. Thus, the
models of

{ ∀x x ≤ x,
∀x ∀y ((x ≤ y ∧ y ≤ x) → x = y),
∀x ∀y ∀z ((x ≤ y ∧ y ≤ z ) → x ≤ z ) }

are exactly those structures in which ≤ M is reflexive, anti-symmetric,


and transitive, i.e., a partial order. Hence, we can take them as
axioms for the first-order theory of partial orders.
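
Since each of the three axioms simply asserts reflexivity, anti-
symmetry, or transitivity of the interpretation of ≤, whether a
finite structure is a model of them can be checked by direct
computation. A small illustrative Python sketch (the function and
names are ours, not the text's):

    def is_partial_order(domain, R):
        """Does the relation R (a set of pairs) model the three axioms above?"""
        reflexive = all((a, a) in R for a in domain)
        antisymmetric = all(a == b for (a, b) in R if (b, a) in R)
        transitive = all((a, c) in R
                         for (a, b) in R for (b2, c) in R if b == b2)
        return reflexive and antisymmetric and transitive

    D = {1, 2, 3, 6}
    divides = {(a, b) for a in D for b in D if b % a == 0}
    print(is_partial_order(D, divides))   # True: divisibility on D is a
                                          # partial order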

6.3 Examples of First-Order Theories


Example 6.4. The theory of strict linear orders in the language L<
is axiomatized by the set

∀x ¬x < x,
∀x ∀y ((x < y ∨ y < x) ∨ x = y),
∀x ∀y ∀z ((x < y ∧ y < z ) → x < z )

It completely captures the intended structures: every strict linear


order is a model of this axiom system, and vice versa, if R is a strict
linear order on a set X , then the structure M with |M| = X and
<M = R is a model of this theory.

Example 6.5. The theory of groups in the language 1 (constant
symbol), · (two-place function symbol) is axiomatized by

∀x (x · 1) = x
∀x ∀y ∀z (x · (y · z )) = ((x · y) · z )
∀x ∃y (x · y) = 1

Example 6.6. The theory of Peano arithmetic is axiomatized by


the following sentences in the language of arithmetic LA .

¬∃x x′ = 0
∀x ∀y (x′ = y′ → x = y)
∀x ∀y (x < y ↔ ∃z (x + z′ = y))
∀x (x + 0) = x
∀x ∀y (x + y′) = (x + y)′
∀x (x × 0) = 0
∀x ∀y (x × y′) = ((x × y) + x)

plus all sentences of the form

(A(0) ∧ ∀x (A(x) → A(x′))) → ∀x A(x)

Since there are infinitely many sentences of the latter form, this
axiom system is infinite. The latter form is called the induction
schema. (Actually, the induction schema is a bit more complicated
than we let on here.)
The third axiom is an explicit definition of <.

Example 6.7. The theory of pure sets plays an important role in


the foundations (and in the philosophy) of mathematics. A set is
pure if all its elements are also pure sets. The empty set counts
therefore as pure, but a set that has something as an element that
is not a set would not be pure. So the pure sets are those that are
formed just from the empty set and no “urelements,” i.e., objects
that are not themselves sets.

The following might be considered as an axiom system for a


theory of pure sets:

∃x ¬∃y y ∈ x
∀x ∀y (∀z (z ∈ x ↔ z ∈ y) → x = y)
∀x ∀y ∃z ∀u (u ∈ z ↔ (u = x ∨ u = y))
∀x ∃y ∀z (z ∈ y ↔ ∃u (z ∈ u ∧ u ∈ x))

plus all sentences of the form

∃x ∀y (y ∈ x ↔ A(y))

The first axiom says that there is a set with no elements (i.e., ∅
exists); the second says that sets are extensional; the third that
for any sets X and Y , the set {X,Y } exists; the fourth that for
any sets X and Y , the set X ∪ Y exists.
The sentences mentioned last are collectively called the naive
comprehension scheme. It essentially says that for every A(x), the set
{x : A(x)} exists—so at first glance a true, useful, and perhaps
even necessary axiom. It is called “naive” because, as it turns out,
it makes this theory unsatisfiable: if you take A(y) to be ¬y ∈ y,
you get the sentence

∃x ∀y (y ∈ x ↔ ¬y ∈ y)

and this sentence is not satisfied in any structure.

Example 6.8. In the area of mereology, the relation of parthood is


a fundamental relation. Just like theories of sets, there are theo-
ries of parthood that axiomatize various conceptions (sometimes
conflicting) of this relation.
The language of mereology contains a single two-place pred-
icate symbol P , and P (x, y) “means” that x is a part of y. When
we have this interpretation in mind, a structure for this language
is called a parthood structure. Of course, not every structure for a
single two-place predicate will really deserve this name. To have

a chance of capturing “parthood,” P M must satisfy some condi-


tions, which we can lay down as axioms for a theory of parthood.
For instance, parthood is a partial order on objects: every object
is a part (albeit an improper part) of itself; no two different objects
can be parts of each other; a part of a part of an object is itself
part of that object. Note that in this sense “is a part of” resembles
“is a subset of,” but does not resemble “is an element of” which
is neither reflexive nor transitive.

∀x P (x, x),
∀x ∀y ((P (x, y) ∧ P (y, x)) → x = y),
∀x ∀y ∀z ((P (x, y) ∧ P (y, z )) → P (x, z )),

Moreover, any two objects have a mereological sum (an object


that has these two objects as parts, and is minimal in this respect).

∀x ∀y ∃z ∀u (P (z, u) ↔ (P (x, u) ∧ P (y, u)))

These are only some of the basic principles of parthood consid-


ered by metaphysicians. Further principles, however, quickly be-
come hard to formulate or write down without first introducing
some defined relations. For instance, most metaphysicians inter-
ested in mereology also view the following as a valid principle:
whenever an object x has a proper part y, it also has a part z that
has no parts in common with y, and so that the fusion of y and
z is x.

6.4 Expressing Relations in a Structure


One main use formulas can be put to is to express properties and
relations in a structure M in terms of the primitives of the lan-
guage L of M. By this we mean the following: the domain of M
is a set of objects. The constant symbols, function symbols, and
predicate symbols are interpreted in M by some objects in |M|,
functions on |M|, and relations on |M|. For instance, if A²₀ is in
L, then M assigns to it a relation R = (A²₀)M . Then the formula

A²₀(v1, v2 ) expresses that very relation, in the following sense: if a


variable assignment s maps v1 to a ∈ |M| and v2 to b ∈ |M|, then

Rab iff M, s |= A²₀(v1, v2 ).

Note that we have to involve variable assignments here: we can’t


just say “Rab iff M |= A²₀(a, b)” because a and b are not symbols
of our language: they are elements of |M|.
Since we don’t just have atomic formulas, but can combine
them using the logical connectives and the quantifiers, more com-
plex formulas can define other relations which aren’t directly built
into M. We’re interested in how to do that, and specifically, which
relations we can define in a structure.

Definition 6.9. Let A(v1, . . . , vn ) be a formula of L in which only


v1 ,. . . , vn occur free, and let M be a structure for L. A(v1, . . . , vn )
expresses the relation R ⊆ |M|n iff

Ra1 . . . an iff M, s |= A(v1, . . . , vn )

for any variable assignment s with s (vi ) = ai (i = 1, . . . , n).

Example 6.10. In the standard model of arithmetic N, the for-


mula v1 < v2 ∨ v1 = v2 expresses the ≤ relation on N. The
formula v2 = v10 expresses the successor relation, i.e., the relation
R ⊆ N2 where Rnm holds if m is the successor of n. The for-
mula v1 = v20 expresses the predecessor relation. The formulas
∃v3 (v3 , ∧v2 = (v1 +v3 )) and ∃v3 (v1 +v3 0) = v 2 both express the
< relation. This means that the predicate symbol < is actually
superfluous in the language of arithmetic; it can be defined.

This idea is not just interesting in specific structures, but gen-


erally whenever we use a language to describe an intended model
or models, i.e., when we consider theories. These theories often
only contain a few predicate symbols as basic symbols, but in the
domain they are used to describe often many other relations play
an important role. If these other relations can be systematically

expressed by the relations that interpret the basic predicate sym-


bols of the language, we say we can define them in the language.

6.5 The Theory of Sets


Almost all of mathematics can be developed in the theory of
sets. Developing mathematics in this theory involves a number
of things. First, it requires a set of axioms for the relation ∈. A
number of different axiom systems have been developed, some-
times with conflicting properties of ∈. The axiom system known
as ZFC, Zermelo-Fraenkel set theory with the axiom of choice,
stands out: it is by far the most widely used and studied, because
it turns out that its axioms suffice to prove almost all the things
mathematicians expect to be able to prove. But before that can
be established, it first is necessary to make clear how we can even
express all the things mathematicians would like to express. For
starters, the language contains no constant symbols or function
symbols, so it seems at first glance unclear that we can talk about
particular sets (such as ∅ or N), can talk about operations on sets
(such as X ∪ Y and ℘(X )), let alone other constructions which
involve things other than sets, such as relations and functions.
To begin with, “is an element of” is not the only relation we
are interested in: “is a subset of” seems almost as important. But
we can define “is a subset of” in terms of “is an element of.” To
do this, we have to find a formula A(x, y) in the language of set
theory which is satisfied by a pair of sets ⟨X,Y ⟩ iff X ⊆ Y . But X
is a subset of Y just in case all elements of X are also elements
of Y . So we can define ⊆ by the formula

∀z (z ∈ x → z ∈ y)

Now, whenever we want to use the relation ⊆ in a formula, we


could instead use that formula (with x and y suitably replaced,
and the bound variable z renamed if necessary). For instance,
extensionality of sets means that if any sets x and y are contained
in each other, then x and y must be the same set. This can be

expressed by ∀x ∀y ((x ⊆ y ∧ y ⊆ x) → x = y), or, if we replace ⊆


by the above definition, by

∀x ∀y ((∀z (z ∈ x → z ∈ y) ∧ ∀z (z ∈ y → z ∈ x)) → x = y).

This is in fact one of the axioms of ZFC, the “axiom of exten-


sionality.”
There is no constant symbol for ∅, but we can express “x is
empty” by ¬∃y y ∈ x. Then “∅ exists” becomes the sentence ∃x ¬∃y y ∈
x. This is another axiom of ZFC. (Note that the axiom of ex-
tensionality implies that there is only one empty set.) Whenever
we want to talk about ∅ in the language of set theory, we would
write this as “there is a set that’s empty and . . . ” As an example,
to express the fact that ∅ is a subset of every set, we could write

∃x (¬∃y y ∈ x ∧ ∀z x ⊆ z )

where, of course, x ⊆ z would in turn have to be replaced by its


definition.
To talk about operations on sets, such as X ∪ Y and ℘(X ),
we have to use a similar trick. There are no function symbols
in the language of set theory, but we can express the functional
relations X ∪ Y = Z and ℘(X ) = Y by

∀u ((u ∈ x ∨ u ∈ y) ↔ u ∈ z )
∀u (u ⊆ x ↔ u ∈ y)

since the elements of X ∪ Y are exactly the sets that are either
elements of X or elements of Y , and the elements of ℘(X ) are
exactly the subsets of X . However, this doesn’t allow us to use
x ∪ y or ℘(x) as if they were terms: we can only use the entire
formulas that define the relations X ∪ Y = Z and ℘(X ) = Y . In
fact, we do not know that these relations are ever satisfied, i.e.,
we do not know that unions and power sets always exist. For
instance, the sentence ∀x ∃y ℘(x) = y is another axiom of ZFC
(the power set axiom).
Now what about talk of ordered pairs or functions? Here we
have to explain how we can think of ordered pairs and functions

as special kinds of sets. One way to define the ordered pair ⟨x, y⟩
is as the set {{x }, {x, y }}. But like before, we cannot introduce
a function symbol that names this set; we can only define the
relation ⟨x, y⟩ = z , i.e., {{x }, {x, y }} = z :

∀u (u ∈ z ↔ (∀v (v ∈ u ↔ v = x) ∨ ∀v (v ∈ u ↔ (v = x ∨ v = y))))

This says that the elements u of z are exactly those sets which
either have x as its only element or have x and y as its only
elements (in other words, those sets that are either identical to
{x } or identical to {x, y }). Once we have this, we can say further
things, e.g., that X × Y = Z :

∀z (z ∈ Z ↔ ∃x ∃y (x ∈ X ∧ y ∈ Y ∧ ⟨x, y⟩ = z ))

A function f : X → Y can be thought of as the relation f (x) =
y, i.e., as the set of pairs {⟨x, y⟩ : f (x) = y }. We can then say that
a set f is a function from X to Y if (a) it is a relation ⊆ X × Y ,
(b) it is total, i.e., for all x ∈ X there is some y ∈ Y such that
⟨x, y⟩ ∈ f and (c) it is functional, i.e., whenever ⟨x, y⟩, ⟨x, y′⟩ ∈ f ,
y = y′ (because values of functions must be unique). So “f is a
function from X to Y ” can be written as:

∀u (u ∈ f → ∃x ∃y (x ∈ X ∧ y ∈ Y ∧ ⟨x, y⟩ = u)) ∧
∀x (x ∈ X → (∃y (y ∈ Y ∧ maps(f , x, y)) ∧
∀y ∀y′ ((maps(f , x, y) ∧ maps(f , x, y′)) → y = y′)))

where maps(f , x, y) abbreviates ∃v (v ∈ f ∧ ⟨x, y⟩ = v ) (this for-


mula expresses “f (x) = y”).
It is now also not hard to express that f : X → Y is injective,
for instance:

f : X → Y ∧ ∀x ∀x′ ((x ∈ X ∧ x′ ∈ X ∧
∃y (maps(f , x, y) ∧ maps(f , x′, y))) → x = x′)

A function f : X → Y is injective iff, whenever f maps x, x′ ∈ X
to a single y, x = x′. If we abbreviate this formula as inj(f , X,Y ),

we’re already in a position to state in the language of set theory


something as non-trivial as Cantor’s theorem: there is no injective
function from ℘(X ) to X :

∀X ∀Y (℘(X ) = Y → ¬∃f inj(f ,Y, X ))

One might think that set theory requires another axiom that
guarantees the existence of a set for every defining property. If
A(x) is a formula of set theory with the variable x free, we can
consider the sentence

∃y ∀x (x ∈ y ↔ A(x)).

This sentence states that there is a set y whose elements are all
and only those x that satisfy A(x). This schema is called the
“comprehension principle.” It looks very useful; unfortunately
it is inconsistent. Take A(x) ≡ ¬x ∈ x, then the comprehension
principle states
∃y ∀x (x ∈ y ↔ x ∉ x),
i.e., it states the existence of a set of all sets that are not elements
of themselves. No such set can exist—this is Russell’s Paradox.
ZFC, in fact, contains a restricted—and consistent—version of
this principle, the separation principle:

∀z ∃y ∀x (x ∈ y ↔ (x ∈ z ∧ A(x))).

6.6 Expressing the Size of Structures


There are some properties of structures we can express even with-
out using the non-logical symbols of a language. For instance,
there are sentences which are true in a structure iff the domain of
the structure has at least, at most, or exactly a certain number n
of elements.

Proposition 6.11. The sentence

A≥n ≡ ∃x 1 ∃x 2 . . . ∃x n (x 1 ≠ x 2 ∧ x 1 ≠ x 3 ∧ x 1 ≠ x 4 ∧ · · · ∧ x 1 ≠ x n ∧
x 2 ≠ x 3 ∧ x 2 ≠ x 4 ∧ · · · ∧ x 2 ≠ x n ∧
...
x n−1 ≠ x n )

is true in a structure M iff |M| contains at least n elements. Conse-
quently, M |= ¬A≥n+1 iff |M| contains at most n elements.

Proposition 6.12. The sentence

A=n ≡ ∃x 1 ∃x 2 . . . ∃x n (x 1 ≠ x 2 ∧ x 1 ≠ x 3 ∧ x 1 ≠ x 4 ∧ · · · ∧ x 1 ≠ x n ∧
x 2 ≠ x 3 ∧ x 2 ≠ x 4 ∧ · · · ∧ x 2 ≠ x n ∧
...
x n−1 ≠ x n ∧
∀y (y = x 1 ∨ · · · ∨ y = x n ))

is true in a structure M iff |M| contains exactly n elements.

Proposition 6.13. A structure is infinite iff it is a model of

{A≥1, A≥2, A≥3, . . . }.

There is no single purely logical sentence which is true in M iff


|M| is infinite. However, one can give sentences with non-logical
predicate symbols which only have infinite models (although not
every infinite structure is a model of them). The property of being
a finite structure, and the property of being an uncountable struc-
ture cannot even be expressed with an infinite set of sentences.
These facts follow from the compactness and Löwenheim-Skolem
theorems.
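Although no sentence captures infinity, checking A≥n in a given
finite structure is entirely mechanical. The following Python sketch
(an illustration, not part of the text) does so by brute force:
permutations enumerates exactly the injective n-tuples drawn from
the domain, so the sentence holds iff at least one such tuple exists.

    from itertools import permutations

    def satisfies_geq(domain, n):
        # M satisfies A_{>=n} iff there are pairwise distinct elements
        # x_1, ..., x_n in the domain; permutations(domain, n) yields
        # exactly the injective n-tuples from it.
        return any(True for _ in permutations(domain, n))

    # satisfies_geq({1, 2, 3}, 2) == True; satisfies_geq({1}, 2) == False

The check presupposes a finite domain, in line with Proposition 6.13:
there is no corresponding test expressed by a single sentence for
“the domain is infinite.”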

Summary
Sets of sentences in a sense describe the structures in which they
are jointly true; these structures are their models. Conversely, if
we start with a structure or set of structures, we might be inter-
ested in the set of sentences they are models of, this is the theory
of the structure or set of structures. Any such set of sentences has
the property that every sentence entailed by them is already in
the set; they are closed. More generally, we call a set Γ a theory
if it is closed under entailment, and say Γ is axiomatized by ∆
if Γ consists of all sentences entailed by ∆.
Mathematics yields many examples of theories, e.g., the the-
ories of linear orders, of groups, or theories of arithmetic, e.g.,
the theory axiomatized by Peano’s axioms. But there are many
examples of important theories in other disciplines as well, e.g.,
relational databases may be thought of as theories, and meta-
physics concerns itself with theories of parthood which can be
axiomatized.
One significant question when setting up a theory for study is
whether its language is expressive enough to allow us to formu-
late everything we want the theory to talk about, and another is
whether it is strong enough to prove what we want it to prove. To
express a relation we need a formula with the requisite number
of free variables. In set theory, we only have ∈ as a relation sym-
bol, but it allows us to express x ⊆ y using ∀u (u ∈ x → u ∈ y).
Zermelo-Fraenkel set theory ZFC, in fact, is strong enough to
both express (almost) every mathematical claim and to (almost)
prove every mathematical theorem using a handful of axioms and
a chain of increasingly complicated definitions such as that of ⊆.

Problems
Problem 6.1. Find formulas in L_A which define the following
relations:

1. n is between i and j ;

2. n evenly divides m (i.e., m is a multiple of n);

3. n is a prime number (i.e., no number other than 1 and n


evenly divides n).

Problem 6.2. Suppose the formula A(v₁, v₂) expresses the rela-
tion R ⊆ |M|² in a structure M. Find formulas that express the
following relations:

1. the inverse R⁻¹ of R;

2. the relative product R | R;

Can you find a way to express R⁺, the transitive closure of R?

Problem 6.3. Let L be the language containing a 2-place predi-
cate symbol < only (no other constant symbols, function symbols
or predicate symbols—except of course =). Let N be the struc-
ture such that |N| = ℕ, and <^N = {⟨n, m⟩ : n < m}. Prove the
following:

1. {0} is definable in N;

2. {1} is definable in N;

3. {2} is definable in N;

4. for each n ∈ N, the set {n} is definable in N;

5. every finite subset of |N| is definable in N;

6. every co-finite subset of |N| is definable in N (where X ⊆ ℕ
is co-finite iff ℕ \ X is finite).

Problem 6.4. Show that the comprehension principle is incon-
sistent by giving a derivation that shows

∃y ∀x (x ∈ y ↔ x ∉ x) ⊢ ⊥.

It may help to first show (A → ¬A) ∧ (¬A → A) ⊢ ⊥.


CHAPTER 7

Natural
Deduction
7.1 Introduction
Logical systems commonly have not just a semantics, but also
proof systems. The purpose of proof systems is to provide a
purely syntactic method of establishing entailment and validity.
They are purely syntactic in the sense that a derivation in such
a system is a finite syntactic object, usually a sequence (or other
finite arrangement) of formulas. Moreover, good proof systems
have the property that any given sequence or arrangement of for-
mulas can be verified mechanically to be a “correct” proof. The
simplest (and historically first) proof systems for first-order logic
were axiomatic. A sequence of formulas counts as a derivation
in such a system if each individual formula in it is either among
a fixed set of “axioms” or follows from formulas coming before it
in the sequence by one of a fixed number of “inference rules”—
and it can be mechanically verified if a formula is an axiom and
whether it follows correctly from other formulas by one of the in-
ference rules. Axiomatic proof systems are easy to describe—and
also easy to handle meta-theoretically—but derivations in them
are hard to read and understand, and are also hard to produce.
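As a minimal sketch of what such mechanical verification looks
like, here is a Python fragment (not from the text) for a toy
Hilbert-style system. It assumes formulas are encoded as nested
tuples, with ("->", A, B) representing a conditional, and that
modus ponens is the only inference rule; both are illustrative
assumptions.

    def is_derivation(seq, is_axiom):
        # Each formula in the sequence must be an axiom or follow from
        # two earlier formulas A and A -> C by modus ponens.
        for i, c in enumerate(seq):
            mp = any(seq[k] == ("->", seq[j], c)
                     for j in range(i) for k in range(i))
            if not (is_axiom(c) or mp):
                return False
        return True

The point is that the check inspects only the finite syntactic
object itself; no semantic notions are involved.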


Other proof systems have been developed with the aim of


making it easier to construct derivations or easier to understand
derivations once they are complete. Examples are truth trees,
also known as tableaux proofs, and the sequent calculus. Some
proof systems are designed especially with mechanization in mind,
e.g., the resolution method is easy to implement in software (but
its derivations are essentially impossible to understand). Most of
these other proof systems represent derivations as trees of formu-
las rather than sequences. This makes it easier to see which parts
of a derivation depend on which other parts.
The proof system we will study is Gentzen’s natural deduc-
tion. Natural deduction is intended to mirror actual reasoning
(especially the kind of regimented reasoning employed by math-
ematicians). Actual reasoning proceeds by a number of “natural”
patterns. For instance, proof by cases allows us to establish a con-
clusion on the basis of a disjunctive premise, by establishing that
the conclusion follows from either of the disjuncts. Indirect proof
allows us to establish a conclusion by showing that its negation
leads to a contradiction. Conditional proof establishes a con-
ditional claim “if . . . then . . . ” by showing that the consequent
follows from the antecedent. Natural deduction is a formaliza-
tion of some of these natural inferences. Each of the logical con-
nectives and quantifiers comes with two rules, an introduction
and an elimination rule, and they each correspond to one such
natural inference pattern. For instance, → Intro corresponds to
conditional proof, and ∨Elim to proof by cases.
One feature that distinguishes natural deduction from other
proof systems is its use of assumptions. In almost every proof sys-
tem a single formula is at the root of the tree of formulas—usually
the conclusion—and the “leaves” of the tree are formulas from
which the conclusion is derived. In natural deduction, some leaf
formulas play a role inside the derivation but are “used up” by the
time the derivation reaches the conclusion. This corresponds to
the practice, in actual reasoning, of introducing hypotheses which
only remain in effect for a short while. For instance, in a proof by
cases, we assume the truth of each of the disjuncts; in conditional

proof, we assume the truth of the antecedent; in indirect proof,


we assume the truth of the negation of the conclusion. This way
of introducing hypotheticals and then doing away with them in
the service of establishing an intermediate step is a hallmark of
natural deduction. The formulas at the leaves of a natural de-
duction derivation are called assumptions, and some of the rules
of inference may “discharge” them. An assumption that remains
undischarged at the end of the derivation is (usually) essential to
the truth of the conclusion, and so a derivation establishes that
its undischarged assumptions entail its conclusion.
For any proof system it is crucial to verify that it in fact does
what it’s supposed to: provide a way to verify that a sentence is
entailed by some others. This is called soundness; and we will
prove it for the natural deduction system we use. It is also crucial
to verify the converse: that the proof system is strong enough to
verify that Γ ⊢ A whenever Γ ⊨ A, i.e., that there is a derivation
of A from Γ whenever Γ ⊨ A. This is called completeness—but
it is much harder to prove.

7.2 Rules and Derivations


Let L be a first-order language with the usual constant symbols,
variables, logical symbols, and auxiliary symbols (parentheses
and the comma).

Definition 7.1 (Inference). An inference is an expression of the
form

      A                A   B
     ―――      or      ―――――――
      C                  C

where A, B, and C are formulas. A and B are called the upper
formulas or premises and C the lower formula or conclusion of the
inference.

The rules for natural deduction are divided into two main
types: propositional rules (quantifier-free) and quantifier rules. The
rules come in pairs, an introduction and an elimination rule for

each logical operator. They introduce a logical operator in the
conclusion or remove a logical operator from a premise of the
rule. Some of the rules allow an assumption of a certain type
to be discharged. To indicate which assumption is discharged by
which inference, we also assign labels to both the assumption
and the inference. This is indicated by writing the assumption
formula as “[A]ⁿ”.
It is customary to consider rules for all logical operators, even
for those (if any) that we consider as defined.

Propositional Rules
Rules for ⊥

   A   ¬A                      ⊥
  ――――――――― ⊥Intro           ――――― ⊥Elim
      ⊥                        A

Rules for ∧

   A   B                A ∧ B               A ∧ B
  ――――――――― ∧Intro     ――――――― ∧Elim       ――――――― ∧Elim
   A ∧ B                  A                   B

Rules for ∨

     A                    B
  ――――――― ∨Intro       ――――――― ∨Intro
   A ∨ B                A ∨ B

             [A]ⁿ      [B]ⁿ

   A ∨ B      C         C
  ――――――――――――――――――――――――― n ∨Elim
              C

Rules for ¬
   [A]ⁿ

    ⊥                       ¬¬A
  ――――― n ¬Intro           ――――― ¬Elim
   ¬A                        A

Rules for →
   [A]ⁿ
                           A   A → B
    B                     ――――――――――― → Elim
  ――――――― n → Intro            B
   A → B

Quantifier Rules
Rules for ∀
    A(a)                    ∀x A(x)
  ――――――――― ∀Intro         ――――――――― ∀Elim
   ∀x A(x)                   A(t )
where t is a ground term (a term that does not contain any vari-
ables), and a is a constant symbol which does not occur in A, or
in any assumption which is undischarged in the derivation end-
ing with the premise A(a). We call a the eigenvariable of the ∀Intro
inference.

Rules for ∃
                           [A(a)]ⁿ
    A(t )
  ――――――――― ∃Intro      ∃x A(x)    C
   ∃x A(x)             ―――――――――――――― n ∃Elim
                              C
where t is a ground term, and a is a constant which does not
occur in the premise ∃x A(x), in C , or any assumption which is
undischarged in the derivation ending with the second premise C
(other than the assumptions A(a)). We call a the eigenvariable of
the ∃Elim inference.
This condition on the constant symbol a in the ∀Intro and
∃Elim rules is called the eigenvariable condition.
We use the term “eigenvariable” even though a in the above
rules is a constant. This has historical reasons.

In ∃Intro and ∀Elim there are no restrictions, and the term t
can be anything, so we do not have to worry about any conditions.
However, because the t may appear elsewhere in the derivation,
the values of t for which the formula is satisfied are constrained.
On the other hand, in the ∃Elim and ∀Intro rules, the eigenvari-
able condition requires that a does not occur anywhere else in
the formula. Thus, if the upper formula is valid, the truth values
of the formulas other than A(a) are independent of a.
Natural deduction systems are meant to closely parallel the
informal reasoning used in mathematical proof (hence it is some-
what “natural”). Natural deduction proofs begin with assump-
tions. Inference rules are then applied. Assumptions are “dis-
charged” by the ¬Intro, → Intro, ∨Elim and ∃Elim inference
rules, and the label of the discharged assumption is placed be-
side the inference for clarity.

Definition 7.2 (Initial Formula). An initial formula or assumption


is any formula in the topmost position of any branch.

Definition 7.3 (Derivation). A derivation of a formula A from


assumptions Γ is a tree of formulas satisfying the following con-
ditions:

1. The topmost formulas of the tree are either in Γ or are


discharged by an inference in the tree.

2. Every formula in the tree is an upper formula of an infer-


ence whose lower formula stands directly below that for-
mula in the tree.

We then say that A is the end-formula of the derivation and that


A is derivable from Γ.

7.3 Examples of Derivations


Example 7.4. Let’s give a derivation of the formula (A ∧B) → A.
We begin by writing the desired end-formula at the bottom of
the derivation.
(A ∧ B) → A
Next, we need to figure out what kind of inference could result
in a formula of this form. The main operator of the end-formula
is →, so we’ll try to arrive at the end-formula using the → Intro
rule. It is best to write down the assumptions involved and label
the inference rules as you progress, so it is easy to see whether
all assumptions have been discharged at the end of the proof.

[A ∧ B]1

A
1 → Intro
(A ∧ B) → A

We now need to fill in the steps from the assumption A ∧ B


to A. Since we only have one connective to deal with, ∧, we must

use the ∧Elim rule. This gives us the following proof:

[A ∧ B]1
∧Elim
A
1 → Intro
(A ∧ B) → A

We now have a correct derivation of the formula (A ∧B) → A.

Example 7.5. Now let’s give a derivation of the formula (¬A ∨


B) → (A → B).
We begin by writing the desired end-formula at the bottom of
the derivation.
(¬A ∨ B) → (A → B)
To find a logical rule that could give us this end-formula, we look
at the logical connectives in the end-formula: ¬, ∨, and →. We
only care at the moment about the first occurrence of → because it
is the main operator of the end-formula, while ¬,
∨ and the second occurrence of → are inside the scope of another
connective, so we will take care of those later. We therefore start
with the → Intro rule. A correct application must look as follows:

[¬A ∨ B]1

A→B
1 → Intro
(¬A ∨ B) → (A → B)

This leaves us with two possibilities to continue. Either we


can keep working from the bottom up and look for another ap-
plication of the → Intro rule, or we can work from the top down
and apply a ∨Elim rule. Let us apply the latter. We will use the
assumption ¬A ∨ B as the leftmost premise of ∨Elim. For a valid
application of ∨Elim, the other two premises must be identical
to the conclusion A → B, but each may be derived in turn from
another assumption, namely the two disjuncts of ¬A ∨ B. So our

derivation will look like this:


[¬A]2 [B]2

[¬A ∨ B]1 A→B A→B


2 ∨Elim
A→B
1 → Intro
(¬A ∨ B) → (A → B)
In each of the two branches on the right, we want to derive
A → B, which is best done using → Intro.

[¬A]2, [A]3 [B]2, [A]4

B B
3 → Intro 4 → Intro
[¬A ∨ B]1 A→B A→B
2 ∨Elim
A→B
1 → Intro
(¬A ∨ B) → (A → B)
For the two missing parts of the derivation, we need deriva-
tions of B from ¬A and A in the middle, and from A and B on the
right. Let’s take the former first. ¬A and A are the two premises of
⊥Intro:
[¬A]2 [A]3
⊥ ⊥Intro

B
By using ⊥Elim, we can obtain B as a conclusion and complete
the branch.
[B]2, [A]4
[¬A]2 [A]3
⊥ ⊥Intro
⊥Elim
B B
3 → Intro 4 → Intro
[¬A ∨ B]1 A→B A→B
2 ∨Elim
A→B
1 → Intro
(¬A ∨ B) → (A → B)

Let’s now look at the rightmost branch. Here it’s important


to realize that the definition of derivation allows assumptions to be
discharged but does not require them to be. In other words, if we
can derive B from one of the assumptions A and B without using
the other, that’s ok. And to derive B from B is trivial: B by itself
is such a derivation, and no inferences are needed. So we can
simply delete the assumption A.

[¬A]2 [A]3
⊥ ⊥Intro
B
⊥Elim [B]2
3 → Intro → Intro
[¬A ∨ B]1 A→B A→B
2 ∨Elim
A→B
1 → Intro
(¬A ∨ B) → (A → B)

Note that in the finished derivation, the rightmost → Intro infer-


ence does not actually discharge any assumptions.

Example 7.6. When dealing with quantifiers, we have to make


sure not to violate the eigenvariable condition, and sometimes
this requires us to play around with the order of carrying out
certain inferences. In general, it helps to try and take care of rules
subject to the eigenvariable condition first (they will be lower
down in the finished proof).
Let’s see how we’d give a derivation of the formula ∃x ¬A(x) →
¬∀x A(x). Starting as usual, we write

∃x ¬A(x) → ¬∀x A(x)

We start by writing down what it would take to justify that last


step using the → Intro rule.

[∃x ¬A(x)]1

¬∀x A(x)
→ Intro
∃x ¬A(x) → ¬∀x A(x)

Since there is no obvious rule to apply to ¬∀x A(x), we will pro-
ceed by setting up the derivation so we can use the ∃Elim rule.
Here we must pay attention to the eigenvariable condition, and
choose a constant that does not appear in ∃x ¬A(x) or any assump-
tions that it depends on. (Since no constant symbols appear,
however, any choice will do fine.)

[¬A(a)]2

[∃x ¬A(x)]1 ¬∀x A(x)


2 ∃Elim
¬∀x A(x)
→ Intro
∃x ¬A(x) → ¬∀x A(x)

In order to derive ¬∀x A(x), we will attempt to use the ¬Intro
rule: this requires that we derive a contradiction, possibly using
∀x A(x) as an additional assumption. Of course, this contradic-
tion may involve the assumption ¬A(a) which will be discharged
by the ∃Elim inference. We can set it up as follows:

[¬A(a)]2, [∀x A(x)]3

3

¬Intro
[∃x ¬A(x)]1 ¬∀x A(x)
2 ∃Elim
¬∀x A(x)
→ Intro
∃x ¬A(x) → ¬∀x A(x)

It looks like we are close to getting a contradiction. The easiest


rule to apply is the ∀Elim, which has no eigenvariable conditions.
Since we can use any term we want to replace the universally
quantified x, it makes the most sense to continue using a so we

can reach a contradiction.

[∀x A(x)]3
∀Elim
[¬A(a)]2 A(a)
⊥ ⊥Intro
1 3 ¬Intro
[∃x ¬A(x)] ¬∀x A(x)
2 ∃Elim
¬∀x A(x)
→ Intro
∃x ¬A(x) → ¬∀x A(x)

It is important, especially when dealing with quantifiers, to


double check at this point that the eigenvariable condition has
not been violated. Since the only rule we applied that is subject
to the eigenvariable condition was ∃Elim, and the eigenvariable a
does not occur in any assumptions it depends on, this is a correct
derivation.

Example 7.7. Sometimes we may derive a formula from other


formulas. In these cases, we may have undischarged assumptions.
It is important to keep track of our assumptions as well as the end
goal.
Let’s see how we’d give a derivation of the formula ∃x C (x, b)
from the assumptions ∃x (A(x) ∧ B(x)) and ∀x (B(x) → C (x, b)).
Starting as usual, we write the end-formula at the bottom.

∃x C (x, b)

We have two premises to work with. To use the first, i.e., try
to find a derivation of ∃x C (x, b) from ∃x (A(x) ∧ B(x)) we would
use the ∃Elim rule. Since it has an eigenvariable condition, we
will apply that rule first. We get the following:

[A(a) ∧ B(a)]1

∃x (A(x) ∧ B(x)) ∃x C (x, b)


1 ∃Elim
∃x C (x, b)

The two assumptions we are working with share B. It may be


useful at this point to apply ∧Elim to separate out B(a).

[A(a) ∧ B(a)]1
∧Elim
B(a)

∃x (A(x) ∧ B(x)) ∃x C (x, b)


1 ∃Elim
∃x C (x, b)

The second assumption we have to work with is ∀x (B(x) →
C (x, b)). Since there is no eigenvariable condition we can instan-
tiate x with the constant symbol a using ∀Elim to get B(a) →
C (a, b). We now have B(a) and B(a) → C (a, b). Our next move
should be a straightforward application of the → Elim rule.

[A(a) ∧ B(a)]1 ∀x (B(x) → C (x, b))


∧Elim ∀Elim
B(a) B(a) → C (a, b)
→ Elim
C (a, b)

∃x (A(x) ∧ B(x)) ∃x C (x, b)


1 ∃Elim
∃x C (x, b)

We are so close! One application of ∃Intro and we have reached


our goal.

[A(a) ∧ B(a)]1 ∀x (B(x) → C (x, b))


∧Elim ∀Elim
B(a) B(a) → C (a, b)
→ Elim
C (a, b)
∃Intro
∃x (A(x) ∧ B(x)) ∃x C (x, b)
1 ∃Elim
∃x C (x, b)

Since we ensured at each step that the eigenvariable conditions


were not violated, we can be confident that this is a correct deriva-
tion.

Example 7.8. Give a derivation of the formula ¬∀x A(x) from


the assumptions ∀x A(x) → ∃y B(y) and ¬∃y B(y). Starting as
usual, we write the target formula at the bottom.

¬∀x A(x)

The last line of the derivation is a negation, so let’s try using


¬Intro. This will require that we figure out how to derive a con-
tradiction.
[∀x A(x)]1

1

¬Intro
¬∀x A(x)
So far so good. We can use ∀Elim but it’s not obvious if that will
help us get to our goal. Instead, let’s use one of our assumptions.
∀x A(x) → ∃y B(y) together with ∀x A(x) will allow us to use the
→ Elim rule.

[∀x A(x)]1 ∀x A(x) → ∃y B(y)


→ Elim
∃y B(y)

1

¬Intro
¬∀x A(x)

We now have one final assumption to work with, and it looks like
this will help us reach a contradiction by using ⊥Intro.

[∀x A(x)]1 ∀x A(x) → ∃y B(y)


→ Elim
∃y B(y) ¬∃y B(y)
⊥ ⊥Intro
1 ¬Intro
¬∀x A(x)

Example 7.9. Give a derivation of the formula A(x) ∨ ¬A(x).

A(x) ∨ ¬A(x)

The main connective of the formula is a disjunction. Since we


have no assumptions to work from, we can’t use ∨Intro. Since we
don’t want any undischarged assumptions in our proof, our best
bet is to use ¬Intro with the assumption ¬(A(x) ∨ ¬A(x)). This
will allow us to discharge the assumption at the end.

[¬(A(x) ∨ ¬A(x))]1

1

¬Intro
¬¬(A(x) ∨ ¬A(x))
¬Elim
A(x) ∨ ¬A(x)

Note that a straightforward application of the ¬Intro rule leaves


us with two negations. We can remove them with the ¬Elim rule.
We appear to be stuck again, since the assumption we in-
troduced has a negation as the main operator. So let’s try to de-
rive another contradiction! Let’s assume A(x) for another ¬Intro.
From there we can derive A(x) ∨ ¬A(x) and get our first contra-
diction.

[A(x)]2
∨Intro
[¬(A(x) ∨ ¬A(x))]1 A(x) ∨ ¬A(x)
⊥ ⊥Intro
2 ¬Intro
¬A(x)

1

¬Intro
¬¬(A(x) ∨ ¬A(x))
¬Elim
A(x) ∨ ¬A(x)

Our second assumption is now discharged. We only need to de-


rive one more contradiction in order to discharge our first as-
sumption. Now we have something to work with—¬A(x). We can
use the same strategy as last time (∨Intro) to derive a contradic-

tion with our first assumption.

[A(x)]2
∨Intro
[¬(A(x) ∨ ¬A(x))]1 A(x) ∨ ¬A(x)
⊥ ⊥Intro
2 ¬Intro
¬A(x)
∨Intro
A(x) ∨ ¬A(x) [¬(A(x) ∨ ¬A(x))]1
1 ¬Intro
¬¬(A(x) ∨ ¬A(x))
¬Elim
A(x) ∨ ¬A(x)

And the proof is done!

7.4 Proof-Theoretic Notions


Just as we’ve defined a number of important semantic notions
(validity, entailment, satisfiability), we now define corresponding
proof-theoretic notions. These are not defined by appeal to satisfac-
tion of sentences in structures, but by appeal to the derivability
or non-derivability of certain formulas. It was an important dis-
covery, due to Gödel, that these notions coincide. That they do
is the content of the completeness theorem.

Definition 7.10 (Derivability). A formula A is derivable from a set
of formulas Γ, Γ ⊢ A, if there is a derivation with end-formula A
and in which every assumption is either discharged or is in Γ. If
A is not derivable from Γ we write Γ ⊬ A.

Definition 7.11 (Theorems). A formula A is a theorem if there
is a derivation of A from the empty set, i.e., a derivation with
end-formula A in which all assumptions are discharged. We write
⊢ A if A is a theorem and ⊬ A if it is not.

Definition 7.12 (Consistency). A set of sentences Γ is consistent
iff Γ ⊬ ⊥. If Γ is not consistent, i.e., if Γ ⊢ ⊥, we say it is
inconsistent.
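For instance, {A, ¬A} is inconsistent for any sentence A: the two
assumptions A and ¬A followed by a single ⊥Intro inference con-
stitute a derivation showing {A, ¬A} ⊢ ⊥.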

Proposition 7.13. Γ ⊢ A iff Γ ∪ {¬A} is inconsistent.

Proof. Exercise. □

Proposition 7.14. Γ is inconsistent iff Γ ⊢ A for every sentence A.

Proof. Exercise. □

Proposition 7.15. Γ ⊢ A iff for some finite Γ₀ ⊆ Γ, Γ₀ ⊢ A.

Proof. Any derivation of A from Γ can only contain finitely many
undischarged assumptions. If all these undischarged assumptions
are in Γ, then the set of them is a finite subset of Γ. The other
direction is trivial, since a derivation from a subset of Γ is also a
derivation from Γ. □

7.5 Properties of Derivability


We will now establish a number of properties of the derivability
relation. They are independently interesting, but each will play
a role in the proof of the completeness theorem.

Proposition 7.16 (Monotony). If Γ ⊆ ∆ and Γ ⊢ A, then ∆ ⊢ A.

Proof. Any derivation of A from Γ is also a derivation of A
from ∆. □

Proposition 7.17. If Γ ⊢ A and Γ ∪ {A} ⊢ ⊥, then Γ is inconsistent.

Proof. Let the derivation of A from Γ be δ₁ and the derivation of
⊥ from Γ ∪ {A} be δ₂. We can then derive:

              [A]¹
               δ₂
               ⊥
     δ₁      ―――― 1 ¬Intro
     A        ¬A
    ――――――――――――― ⊥Intro
         ⊥

In the new derivation, the assumption A is discharged, so it is
a derivation from Γ. □

Proposition 7.18. If Γ ∪ {A} ⊢ ⊥, then Γ ⊢ ¬A.

Proof. Suppose that Γ ∪ {A} ⊢ ⊥. Then there is a derivation of ⊥
from Γ ∪ {A}. Let δ be the derivation of ⊥, and consider

    [A]¹
     δ
     ⊥
    ―――― 1 ¬Intro
     ¬A                                                 □
Proposition 7.19. If Γ ∪ {A} ⊢ ⊥ and Γ ∪ {¬A} ⊢ ⊥, then Γ ⊢ ⊥.

Proof. There are derivations δ₁ and δ₂ of ⊥ from Γ ∪ {A} and of
⊥ from Γ ∪ {¬A}, respectively. We can then derive

    [A]¹             [¬A]²
     δ₁               δ₂
     ⊥                ⊥
    ―――― 1 ¬Intro    ―――― 2 ¬Intro
     ¬A              ¬¬A
    ―――――――――――――――――――――― ⊥Intro
             ⊥

Since the assumptions A and ¬A are discharged, this is a deriva-
tion from Γ alone. Hence Γ ⊢ ⊥. □

Proposition 7.20. If Γ ∪ {A} ⊢ ⊥ and Γ ∪ {B} ⊢ ⊥, then
Γ ∪ {A ∨ B} ⊢ ⊥.

Proof. Exercise. □

Proposition 7.21. If Γ ⊢ A or Γ ⊢ B, then Γ ⊢ A ∨ B.

Proof. Suppose Γ ⊢ A. There is a derivation δ of A from Γ. We
can derive

      δ
      A
    ――――――― ∨Intro
    A ∨ B

Therefore Γ ⊢ A ∨ B. The proof for when Γ ⊢ B is similar. □

Proposition 7.22. If Γ ⊢ A ∧ B then Γ ⊢ A and Γ ⊢ B.

Proof. Exercise. □

Proposition 7.23. If Γ ⊢ A and Γ ⊢ B, then Γ ⊢ A ∧ B.

Proof. Exercise. □

Proposition 7.24. If Γ ⊢ A and Γ ⊢ A → B, then Γ ⊢ B.

Proof. Exercise. □

Proposition 7.25. If Γ ⊢ ¬A or Γ ⊢ B, then Γ ⊢ A → B.

Proof. Exercise. □

Theorem 7.26. If c is a constant not occurring in Γ or A(x) and
Γ ⊢ A(c), then Γ ⊢ ∀x A(x).

Proof. Let δ be a derivation of A(c) from Γ. By adding a ∀Intro
inference, we obtain a derivation of ∀x A(x). Since c does not occur
in Γ or A(x), the eigenvariable condition is satisfied. □

Theorem 7.27. 1. If Γ ⊢ A(t) then Γ ⊢ ∃x A(x).

2. If Γ ⊢ ∀x A(x) then Γ ⊢ A(t).

Proof. 1. Suppose Γ ⊢ A(t). Then there is a derivation δ of
A(t) from Γ. The derivation

       δ
      A(t)
    ――――――――― ∃Intro
    ∃x A(x)

shows that Γ ⊢ ∃x A(x).

2. Suppose Γ ⊢ ∀x A(x). Then there is a derivation δ of
∀x A(x) from Γ. The derivation

       δ
    ∀x A(x)
    ――――――――― ∀Elim
      A(t)

shows that Γ ⊢ A(t). □

7.6 Soundness
A derivation system, such as natural deduction, is sound if it
cannot derive things that do not actually follow. Soundness is
thus a kind of guaranteed safety property for derivation systems.
Depending on which proof-theoretic property is in question, we
would like to know, for instance, that

1. every derivable sentence is valid;

2. if a sentence is derivable from some others, it is also a


consequence of them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of


them do not hold, the derivation system is deficient—it would
derive too much. Consequently, establishing the soundness of a
derivation system is of the utmost importance.

Theorem 7.28 (Soundness). If A is derivable from the undischarged
assumptions Γ, then Γ ⊨ A.

Proof. Inductive Hypothesis: The premises of an inference rule fol-


low from the undischarged assumptions of the subproofs ending
in those premises.
Inductive Step: Show that A follows from the undischarged
assumptions of the entire proof.
Let δ be a derivation of A. We proceed by induction on the
number of inferences in δ.
If the number of inferences is 0, then δ consists only of an
initial formula. Every initial formula A is an undischarged as-
sumption, and as such, any structure M that satisfies all of the
undischarged assumptions of the proof also satisfies A.
If the number of inferences is greater than 0, we distinguish
cases according to the type of the lowermost inference. By induc-
tion hypothesis, we can assume that the premises of that inference

follow from the undischarged assumptions of the sub-derivations


ending in those premises, respectively.
First, we consider the possible inferences with only one premise.

1. Suppose that the last inference is ¬Intro: By inductive hy-
pothesis, ⊥ follows from the undischarged assumptions Γ ∪
{A}. Consider a structure M. We need to show that, if M ⊨
Γ, then M ⊨ ¬A. Suppose for reductio that M ⊨ Γ, but
M ⊭ ¬A, i.e., M ⊨ A. Then M ⊨ Γ ∪ {A}, and by inductive
hypothesis M ⊨ ⊥, which is impossible. So, M ⊨ ¬A.
2. The last inference is ¬Elim: Exercise.
3. The last inference is ∧Elim: There are two variants: A or
B may be inferred from the premise A ∧ B. Consider the
first case. By inductive hypothesis, A ∧ B follows from the
undischarged assumptions Γ. Consider a structure M. We
need to show that, if M ⊨ Γ, then M ⊨ A. Suppose M ⊨ Γ.
Then M ⊨ A ∧ B by inductive hypothesis, and so M ⊨ A. The
case where B is inferred from A ∧ B is handled similarly.
4. The last inference is ∨Intro: There are two variants: A ∨ B
may be inferred from the premise A or the premise B. Con-
sider the first case. By inductive hypothesis, A follows from
the undischarged assumptions Γ. Consider a structure M.
We need to show that, if M ⊨ Γ, then M ⊨ A ∨ B. Since
M ⊨ Γ, it must be the case that M ⊨ A, by inductive hy-
pothesis. So it must also be the case that M ⊨ A ∨ B. The
case where A ∨ B is inferred from B is handled similarly.
5. The last inference is → Intro: A → B is inferred from a sub-
proof with assumption A and conclusion B. By inductive
hypothesis, B follows from the undischarged assumptions
Γ ∪ {A}. We need to show that Γ ⊨ A → B. For reductio,
suppose that for some structure M, M ⊨ Γ but M ⊭ A → B.
So, M ⊨ A and M ⊭ B. But then M ⊨ Γ ∪ {A}, and since by
hypothesis B is a consequence of Γ ∪ {A}, M ⊨ B, which is
a contradiction. So Γ ⊨ A → B.

6. The last inference is ∀Intro: The premise A(a) is a con-
sequence of the undischarged assumptions Γ by induction
hypothesis. Consider some structure, M, such that M ⊨ Γ.
We now show that M ⊨ ∀x A(x). Since ∀x A(x) is a sen-
tence, this means we have to show that for every variable
assignment s, M, s ⊨ A(x). Since Γ consists entirely of sen-
tences, M, s ⊨ B for all B ∈ Γ. Let M′ be like M except that
a^{M′} = s(x). Then M, s ⊨ A(x) iff M′ ⊨ A(a) (as A(x) does
not contain a). Since a also does not occur in Γ, M′ ⊨ Γ.
Since Γ ⊨ A(a), M′ ⊨ A(a). This means that M, s ⊨ A(x).
Since s is an arbitrary variable assignment, M ⊨ ∀x A(x).
7. The last inference is ∃Intro: Exercise.
8. The last inference is ∀Elim: Exercise.

Now let’s consider the possible inferences with several premises:


∨Elim, ∧Intro, → Elim, and ∃Elim.
1. The last inference is ∧Intro. A ∧ B is inferred from the
premises A and B. By induction hypothesis, A follows from
the undischarged assumptions Γ and B follows from the
undischarged assumptions ∆. We have to show that Γ ∪ ∆ ⊨
A ∧ B. Consider a structure M with M ⊨ Γ ∪ ∆. Since
M ⊨ Γ, it must be the case that M ⊨ A, and since M ⊨ ∆,
M ⊨ B, by inductive hypothesis. Together, M ⊨ A ∧ B.
2. The last inference is ∨Elim: Exercise.
3. The last inference is → Elim. B is inferred from the premises
A → B and A. By induction hypothesis, A → B follows
from the undischarged assumptions Γ and A follows from
the undischarged assumptions ∆. Consider a structure M.
We need to show that, if M ⊨ Γ ∪ ∆, then M ⊨ B. It must
be the case that M ⊨ A → B, and M ⊨ A, by inductive
hypothesis. Thus it must be the case that M ⊨ B.

4. The last inference is ∃Elim: Exercise.

Corollary 7.29. If ⊢ A, then A is valid.

Corollary 7.30. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not con-
sistent. Then Γ ⊢ ⊥, i.e., there is a derivation of ⊥ from undis-
charged assumptions in Γ. By Theorem 7.28, any structure M
that satisfies Γ must satisfy ⊥. Since M ⊭ ⊥ for every struc-
ture M, no M can satisfy Γ, i.e., Γ is not satisfiable. □

7.7 Derivations with Identity predicate


Derivations with the identity predicate require additional infer-
ence rules.

Rules for =:

    ――――― =Intro
    t = t

    t₁ = t₂   A(t₁)                  t₁ = t₂   A(t₂)
    ――――――――――――――― =Elim    and     ――――――――――――――― =Elim
         A(t₂)                            A(t₁)

where t₁ and t₂ are closed terms. The =Intro rule allows us to
derive any identity statement of the form t = t outright, from no
assumptions.

Example 7.31. If s and t are closed terms, then A(s), s = t ⊢ A(t):

    A(s)   s = t
    ――――――――――――― =Elim
        A(t)
This may be familiar as the “principle of substitutability of iden-
ticals,” or Leibniz’ Law.

Example 7.32. We derive the sentence

∀x ∀y ((A(x) ∧ A(y)) → x = y)

from the sentence

∃x ∀y (A(y) → y = x)

We develop the derivation backwards:

∃x ∀y (A(y) → y = x) [A(a) ∧ A(b)]1

a =b
1 → Intro
((A(a) ∧ A(b)) → a = b)
∀Intro
∀y ((A(a) ∧ A(y)) → a = y)
∀Intro
∀x ∀y ((A(x) ∧ A(y)) → x = y)

We’ll now have to use the main assumption: since it is an existen-


tial formula, we use ∃Elim to derive the intermediary conclusion
a = b.

[∀y (A(y) → y = c )]2


[A(a) ∧ A(b)]1

∃x ∀y (A(y) → y = x) a =b
2 ∃Elim
a =b
1 → Intro
((A(a) ∧ A(b)) → a = b)
∀Intro
∀y ((A(a) ∧ A(y)) → a = y)
∀Intro
∀x ∀y ((A(x) ∧ A(y)) → x = y)

The sub-derivation on the top right is completed by using its
assumptions to show that a = c and b = c. This requires two
separate derivations. The derivation for a = c is as follows:

    [∀y (A(y) → y = c)]²               [A(a) ∧ A(b)]¹
    ――――――――――――――――――― ∀Elim          ―――――――――――――― ∧Elim
       A(a) → a = c                        A(a)
    ――――――――――――――――――――――――――――――――――――――――――――――― → Elim
                        a = c
From a = c and b = c we derive a = b by = Elim.

7.8 Soundness of Identity predicate Rules

Proposition 7.33. Natural deduction with rules for identity is sound.

Proof. Any formula of the form t = t is valid, since for every
structure M, M ⊨ t = t. (Note that we assume the term t to be
ground, i.e., it contains no variables, so variable assignments are
irrelevant.)
Suppose the last inference in a derivation is =Elim. Then
the premises are t₁ = t₂ and A(t₁); they are derived from undis-
charged assumptions Γ and ∆, respectively. We want to show that
A(t₂) follows from Γ ∪ ∆. Consider a structure M with M ⊨ Γ ∪ ∆.
By induction hypothesis, M satisfies the two premises. So, M ⊨
t₁ = t₂. Therefore, Val^M(t₁) = Val^M(t₂). Let s be any variable
assignment, and s′ be the x-variant given by s′(x) = Val^M(t₁) =
Val^M(t₂). By Proposition 5.41, M, s ⊨ A(t₂) iff M, s′ ⊨ A(x) iff
M, s ⊨ A(t₁). Since M ⊨ A(t₁), therefore M ⊨ A(t₂). □

Summary
Proof systems provide purely syntactic methods for characteriz-
ing consequence and compatibility between sentences. Natural
deduction is one such proof system. A derivation in it con-
sists of a tree of formulas. The topmost formulas of a derivation are
assumptions. All other formulas, for the derivation to be cor-
rect, must be correctly justified by one of a number of inference
rules. These come in pairs; an introduction and an elimination
rule for each connective and quantifier. For instance, if a for-
mula A is justified by a → Elim rule, the preceding formulas (the

premises) must be B → A and B (for some B). Some inference
rules also allow assumptions to be discharged. For instance, if
A → B is inferred from B using → Intro, any occurrences of A as
assumptions in the derivation leading to the premise B may be
discharged, given a label that is also recorded at the inference.
If there is a derivation with end-formula A and all assump-
tions are discharged, we say A is a theorem and write ⊢ A. If all
undischarged assumptions are in some set Γ, we say A is deriv-
able from Γ and write Γ ⊢ A. If Γ ⊢ ⊥ we say Γ is inconsistent,
otherwise consistent. These notions are interrelated, e.g., Γ ⊢ A
iff Γ ∪ {¬A} ⊢ ⊥. They are also related to the corresponding
semantic notions, e.g., if Γ ⊢ A then Γ ⊨ A. This property of
natural deduction—what can be derived from Γ is guaranteed to
be entailed by Γ—is called soundness. The soundness theo-
rem is proved by induction on the length of derivations, showing
that each individual inference preserves entailment of its conclu-
sion from open assumptions provided its premises are entailed
by their open assumptions.

Problems
Problem 7.1. Give derivations of the following formulas:

1. ¬(A → B) → (A ∧ ¬B)

2. ∀x (A(x) → B) → (∃y A(y) → B)

Problem 7.2. Prove Proposition 7.13.

Problem 7.3. Prove Proposition 7.14.

Problem 7.4. Prove Proposition 7.20.

Problem 7.5. Prove Proposition 7.21.

Problem 7.6. Prove Proposition 7.22.

Problem 7.7. Prove Proposition 7.23.



Problem 7.8. Prove Proposition 7.24.

Problem 7.9. Prove Proposition 7.25.

Problem 7.10. Complete the proof of Theorem 7.28.

Problem 7.11. Prove that = is both symmetric and transitive,
i.e., give derivations of ∀x ∀y (x = y → y = x) and
∀x ∀y ∀z ((x = y ∧ y = z) → x = z).

Problem 7.12. Give derivations of the following formulas:

1. ∀x ∀y ((x = y ∧ A(x)) → A(y))

2. ∃x A(x) ∧ ∀y ∀z ((A(y) ∧ A(z)) → y = z) → ∃x (A(x) ∧
∀y (A(y) → y = x))
CHAPTER 8

The
Completeness
Theorem
8.1 Introduction
The completeness theorem is one of the most fundamental re-
sults about logic. It comes in two formulations, the equivalence
of which we’ll prove. In its first formulation it says something fun-
damental about the relationship between semantic consequence
and our proof system: if a sentence A follows from some sen-
tences Γ, then there is also a derivation that establishes Γ ⊢ A.
Thus, the proof system is as strong as it can possibly be without
proving things that don’t actually follow. In its second formula-
tion, it can be stated as a model existence result: every consistent
set of sentences is satisfiable.
These aren’t the only reasons the completeness theorem—or
rather, its proof—is important. It has a number of important con-
sequences, some of which we’ll discuss separately. For instance,
since any derivation that shows Γ ⊢ A is finite and so can only


use finitely many of the sentences in Γ, it follows by the com-


pleteness theorem that if A is a consequence of Γ, it is already
a consequence of a finite subset of Γ. This is called compactness.
Equivalently, if every finite subset of Γ is consistent, then Γ itself
must be consistent. It also follows from the proof of the complete-
ness theorem that any satisfiable set of sentences has a finite or
countably infinite model. This result is called the Löwenheim-
Skolem theorem.

8.2 Outline of the Proof


The proof of the completeness theorem is a bit complex, and
upon first reading it, it is easy to get lost. So let us outline the
proof. The first step is a shift of perspective, that allows us to see
a route to a proof. When completeness is thought of as “whenever
Γ ⊨ A then Γ ⊢ A,” it may be hard to even come up with an idea:
for to show that Γ ⊢ A we have to find a derivation, and it does
not look like the hypothesis that Γ ⊨ A helps us for this in any
way. For some proof systems it is possible to directly construct a
derivation, but we will take a slightly different tack. The shift in
perspective required is this: completeness can also be formulated
as: “if Γ is consistent, it has a model.” Perhaps we can use the
information in Γ together with the hypothesis that it is consistent
to construct a model. After all, we know what kind of model we
are looking for: one that is as Γ describes it!
If Γ contains only atomic sentences, it is easy to construct a
model for it: for atomic sentences are all of the form P(a₁, . . . , aₙ)
where the aᵢ are constant symbols. So all we have to do is come
up with a domain |M| and an interpretation for P so that M ⊨
P(a₁, . . . , aₙ). But nothing’s easier than that: put |M| = ℕ, cᵢ^M = i,
and for every P(a₁, . . . , aₙ) ∈ Γ, put the tuple ⟨k₁, . . . , kₙ⟩ into P^M,
where kᵢ is the index of the constant symbol aᵢ (i.e., aᵢ ≡ c_{kᵢ}).
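A minimal Python sketch of this step (illustrative only; the en-
coding of atomic sentences as predicate–argument pairs is an
assumption, not the text’s notation):

    def atomic_model(atoms):
        # `atoms` contains pairs like ("P", ("c1", "c3")) encoding P(c_1, c_3).
        # Interpret each constant c_i by the number i, and put the tuple of
        # indices into the extension of the predicate symbol, as above.
        interp = {}
        for pred, args in atoms:
            indices = tuple(int(name[1:]) for name in args)  # "c7" -> 7
            interp.setdefault(pred, set()).add(indices)
        return interp  # the domain is N; each c_i is interpreted as i

    # atomic_model({("P", ("c1", "c3"))}) == {"P": {(1, 3)}}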
Now suppose Γ contains some sentence ¬B, with B atomic.
We might worry that the construction of M interferes with the
possibility of making ¬B true. But here’s where the consistency

of Γ comes in: if ¬B ∈ Γ, then B ∉ Γ, or else Γ would be
inconsistent. And if B ∉ Γ, then according to our construction
of M, M ⊭ B, so M ⊨ ¬B. So far so good.
Now what if Γ contains complex, non-atomic formulas? Say,
it contains A ∧ B. Then we should proceed as if both A and B
were in Γ. And if A ∨ B ∈ Γ, then we will have to make at least
one of them true, i.e., proceed as if one of them was in Γ.
This suggests the following idea: we add additional sentences
to Γ so as to (a) keep the resulting set consistent and (b) make
sure that for every possible atomic sentence A, either A is in the
resulting set, or ¬A, and (c) such that, whenever A ∧ B is in the
set, so are both A and B, if A ∨ B is in the set, at least one of A or
B is also, etc. We keep doing this (potentially forever). Call the
set of all sentences so added Γ*. Then our construction above
would provide us with a structure for which we could prove, by
induction, that all sentences in Γ* are true in M, and hence also
all sentences in Γ since Γ ⊆ Γ*.
There is one wrinkle in this plan: if ∃x A(x) ∈ Γ we would
hope to be able to pick some constant symbol c and add A(c )
in this process. But how do we know we can always do that?
Perhaps we only have a few constant symbols in our language,
and for each one of them we have ¬B(c ) ∈ Γ. We can’t also add
B(c ), since this would make the set inconsistent, and we wouldn’t
know whether M has to make B(c ) or ¬B(c ) true. Moreover, it
might happen that Γ contains only sentences in a language that
has no constant symbols at all (e.g., the language of set theory).
The solution to this problem is to simply add infinitely many
constants at the beginning, plus sentences that connect them with
the quantifiers in the right way. (Of course, we have to verify that
this cannot introduce an inconsistency.)
Our original construction works well if we only have constant
symbols in the atomic sentences. But the language might also
contain function symbols. In that case, it might be tricky to find
the right functions on N to assign to these function symbols to
make everything work. So here’s another trick: instead of using
i to interpret cᵢ, just take the set of constant symbols itself as

the domain. Then M can assign every constant symbol to itself:
cᵢ^M = cᵢ. But why not go all the way: let |M| be all terms of
the language! If we do this, there is an obvious assignment of
functions (that take terms as arguments and have terms as values)
to function symbols: we assign to the function symbol fᵢⁿ the
function which, given n terms t₁, . . . , tₙ as input, produces the
term fᵢⁿ(t₁, . . . , tₙ) as value.
The last piece of the puzzle is what to do with =. The predi-
cate symbol = has a fixed interpretation: M ⊨ t = t′ iff Val^M(t) =
Val^M(t′). Now if we set things up so that the value of a term t is t
itself, then this structure will make no sentence of the form t = t′
true unless t and t′ are one and the same term. And of course
this is a problem, since basically every interesting theory in a
language with function symbols will have as theorems sentences
t = t′ where t and t′ are not the same term (e.g., in theories of
arithmetic: (0 + 0) = 0). To solve this problem, we change the
domain of M: instead of using terms as the objects in |M|, we use
sets of terms, and each set is so that it contains all those terms
which the sentences in Γ require to be equal. So, e.g., if Γ is a
theory of arithmetic, one of these sets will contain: 0, (0 + 0),
(0 × 0), etc. This will be the set we assign to 0, and it will turn
out that this set is also the value of all the terms in it, e.g., also
of (0 + 0). Therefore, the sentence (0 + 0) = 0 will be true in this
revised structure.

8.3 Maximally Consistent Sets of Sentences

Definition 8.1 (Maximally consistent set). A set Γ of sentences
is maximally consistent iff

1. Γ is consistent, and

2. if Γ ⊊ Γ′, then Γ′ is inconsistent.

An alternate definition equivalent to the above is: a set Γ of


sentences is maximally consistent iff

1. Γ is consistent, and

2. If Γ ∪ {A} is consistent, then A ∈ Γ.

In other words, one cannot add sentences not already in Γ to a


maximally consistent set Γ without making the resulting larger
set inconsistent.
Maximally consistent sets are important in the completeness
proof since we can guarantee that every consistent set of sen-
tences Γ is contained in a maximally consistent set Γ*, and a
maximally consistent set contains, for each sentence A, either A
or its negation ¬A. This is true in particular for atomic sentences,
so from a maximally consistent set in a language suitably ex-
panded by constant symbols, we can construct a structure where
the interpretation of predicate symbols is defined according to
which atomic sentences are in Γ*. This structure can then be
shown to make all sentences in Γ* (and hence also in Γ) true.
The proof of this latter fact requires that ¬A ∈ Γ* iff A ∉ Γ*,
(A ∨ B) ∈ Γ* iff A ∈ Γ* or B ∈ Γ*, etc.

Proposition 8.2. Suppose Γ is maximally consistent. Then:

1. If Γ ⊢ A, then A ∈ Γ.

2. For any A, either A ∈ Γ or ¬A ∈ Γ.

3. (A ∧ B) ∈ Γ iff both A ∈ Γ and B ∈ Γ.

4. (A ∨ B) ∈ Γ iff either A ∈ Γ or B ∈ Γ.

5. (A → B) ∈ Γ iff either A ∉ Γ or B ∈ Γ.

Proof. Let us suppose for all of the following that Γ is maximally


consistent.

1. If Γ ⊢ A, then A ∈ Γ.
Suppose that Γ ⊢ A. Suppose to the contrary that A ∉ Γ:
then since Γ is maximally consistent, Γ ∪ {A} is inconsis-
tent, hence Γ ∪ {A} ⊢ ⊥. By Proposition 7.17, Γ is inconsis-
tent. This contradicts the assumption that Γ is consistent.
Hence, it cannot be the case that A ∉ Γ, so A ∈ Γ.

2. For any A, either A ∈ Γ or ¬A ∈ Γ.
Suppose to the contrary that for some A both A ∉ Γ and
¬A ∉ Γ. Since Γ is maximally consistent, Γ ∪ {A} and Γ ∪
{¬A} are both inconsistent, so Γ ∪ {A} ⊢ ⊥ and Γ ∪ {¬A} ⊢
⊥. By Proposition 7.19, Γ is inconsistent, a contradiction.
Hence there cannot be such a sentence A and, for every A,
A ∈ Γ or ¬A ∈ Γ.

3. Exercise.

4. (A ∨ B) ∈ Γ iff either A ∈ Γ or B ∈ Γ.
For the contrapositive of the forward direction, suppose
that A ∉ Γ and B ∉ Γ. We want to show that (A ∨ B) ∉ Γ.
Since Γ is maximally consistent, Γ ∪ {A} ⊢ ⊥ and Γ ∪ {B} ⊢
⊥. By Proposition 7.20, Γ ∪ {(A ∨ B)} is inconsistent. Hence,
(A ∨ B) ∉ Γ, as required.
For the reverse direction, suppose that A ∈ Γ or B ∈ Γ.
Then Γ ⊢ A or Γ ⊢ B. By Proposition 7.21, Γ ⊢ A ∨ B. By
(1), (A ∨ B) ∈ Γ, as required.

5. Exercise.
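A stock example of a maximally consistent set is the theory of a
single structure: for any M, the set Th(M) = {A : M ⊨ A} is maxi-
mally consistent. It is satisfiable (M itself satisfies it), hence con-
sistent by Corollary 7.30, and since for every sentence A either
M ⊨ A or M ⊨ ¬A, no sentence can be added to it without produc-
ing an inconsistent set.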

8.4 Henkin Expansion


Part of the challenge in proving the completeness theorem is that
the model we construct from a maximally consistent set Γ must
make all the quantified formulas in Γ true. In order to guarantee
this, we use a trick due to Leon Henkin. In essence, the trick
consists in expanding the language by infinitely many constants

and adding, for each formula A(x) with one free variable, a for-
mula of the form ∃x A → A(c), where c is one of the new constant
symbols. When we construct the structure satisfying Γ, this will
guarantee that each true existential sentence has a witness among
the new constants.

Lemma 8.3. If Γ is consistent in L and L′ is obtained from L by
adding a countably infinite set of new constant symbols d₁, d₂, . . . , then
Γ is consistent in L′.

Definition 8.4 (Saturated set). A set Γ of formulas of a language
L is saturated if and only if for each formula A ∈ Frm(L) and
variable x there is a constant symbol c such that ∃x A → A(c) ∈ Γ.

The following definition will be used in the proof of the next


theorem.

Definition 8.5. Let L′ be as in Lemma 8.3. Fix an enumeration
⟨A₁, x₁⟩, ⟨A₂, x₂⟩, . . . of all formula-variable pairs of L′. We define
the sentences Dₙ by recursion on n. Assuming that D₁, . . . , Dₙ
have already been defined, let cₙ₊₁ be the first new constant sym-
bol among the dᵢ that does not occur in D₁, . . . , Dₙ, and let Dₙ₊₁
be the formula ∃xₙ₊₁ Aₙ₊₁(xₙ₊₁) → Aₙ₊₁(cₙ₊₁). This includes the
case where n = 0 and the list of previous Dᵢ’s is empty, i.e., D₁ is
∃x₁ A₁ → A₁(c₁).
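For instance, if the first formula-variable pair in the enumeration
is ⟨B(y), y⟩ and B contains none of the new constants, then D₁ is
∃y B(y) → B(d₁), and c₂ is then the first dᵢ not occurring in D₁
(here, d₂).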

Theorem 8.6. Every consistent set Γ can be extended to a saturated
consistent set Γ′.

Proof. Given a consistent set of sentences Γ in a language L,
expand the language by adding a countably infinite set of new
constant symbols to form L′. By the previous Lemma, Γ is still
consistent in the richer language. Further, let Dᵢ be as in the

previous definition: then Γ ∪ {D₁, D₂, . . .} is saturated by con-
struction. Let

Γ₀ = Γ
Γₙ₊₁ = Γₙ ∪ {Dₙ₊₁}

i.e., Γₙ = Γ ∪ {D₁, . . . , Dₙ}, and let Γ′ = ⋃ₙ Γₙ. To show that
Γ′ is consistent it suffices to show, by induction on n, that each
set Γₙ is consistent.
The induction basis is simply the claim that Γ₀ = Γ is consis-
tent, which is the hypothesis of the theorem. For the induction
step, suppose that Γₙ₋₁ is consistent but Γₙ = Γₙ₋₁ ∪ {Dₙ} is in-
consistent. Recall that Dₙ is ∃xₙ Aₙ(xₙ) → Aₙ(cₙ), where Aₙ(xₙ) is
a formula of L′ with only the variable xₙ free and not containing
any constant symbols cᵢ where i ≥ n.
If Γₙ₋₁ ∪ {Dₙ} is inconsistent, then Γₙ₋₁ ⊢ ¬Dₙ, and hence
both of the following hold:

Γₙ₋₁ ⊢ ∃xₙ Aₙ(xₙ)     Γₙ₋₁ ⊢ ¬Aₙ(cₙ)

Here cₙ does not occur in Γₙ₋₁ or Aₙ(xₙ) (remember, it was added
only with Dₙ). By Theorem 7.26, from Γₙ₋₁ ⊢ ¬Aₙ(cₙ), we obtain
Γₙ₋₁ ⊢ ∀xₙ ¬Aₙ(xₙ). Thus we have that both Γₙ₋₁ ⊢ ∃xₙ Aₙ(xₙ)
and Γₙ₋₁ ⊢ ∀xₙ ¬Aₙ(xₙ), so Γₙ₋₁ itself is inconsistent. (Note that
∀xₙ ¬Aₙ(xₙ) ⊢ ¬∃xₙ Aₙ(xₙ).) Contradiction: Γₙ₋₁ was supposed
to be consistent. Hence Γₙ = Γₙ₋₁ ∪ {Dₙ} is consistent. □

8.5 Lindenbaum’s Lemma


We now prove a lemma that shows that any consistent set of sen-
tences is contained in some set of sentences which is not just
consistent, but maximally so, and moreover, is saturated. The
proof works by first extending the set to a saturated set, and
then adding one sentence at a time, guaranteeing at each step
that the set remains consistent. The union of all stages in that
construction then contains, for each sentence A, either it or its
negation ¬A, is saturated, and is also consistent.

Lemma 8.7 (Lindenbaum’s Lemma). Every consistent set Γ can be
extended to a maximally consistent saturated set Γ*.

Proof. Let Γ be consistent, and let Γ′ be as in the proof of Theo-
rem 8.6: we proved there that Γ ∪ Γ′ is a consistent saturated set
in the richer language L′ (with the countably infinite set of new
constants). Let A₀, A₁, . . . be an enumeration of all the formulas
of L′. Define Γ₀ = Γ ∪ Γ′, and

Γₙ₊₁ = { Γₙ ∪ {Aₙ}     if Γₙ ∪ {Aₙ} is consistent;
       { Γₙ ∪ {¬Aₙ}    otherwise.

Let Γ* = ⋃_{n≥0} Γₙ. Since Γ′ ⊆ Γ*, for each formula A, Γ*
contains a formula of the form ∃x A → A(c) and thus is saturated.
Each Γₙ is consistent: Γ₀ is consistent by definition. If Γₙ₊₁ =
Γₙ ∪ {Aₙ}, this is because the latter is consistent. If it isn’t, Γₙ₊₁ =
Γₙ ∪ {¬Aₙ}, which must be consistent. If it weren’t, i.e., both
Γₙ ∪ {Aₙ} and Γₙ ∪ {¬Aₙ} are inconsistent, then Γₙ ⊢ ¬Aₙ and
Γₙ ⊢ Aₙ, so Γₙ would be inconsistent contrary to induction
hypothesis.
Every formula of Frm(L′) appears on the list used to de-
fine Γ*. If Aₙ ∉ Γ*, that is because Γₙ ∪ {Aₙ} was inconsistent,
and then the larger set Γ* ∪ {Aₙ} is inconsistent as well. But that
means that Γ* is maximally consistent. □

8.6 Construction of a Model


We will begin by showing how to construct a structure which
satisfies a maximally consistent, saturated set of sentences in a
language L without =.

Definition 8.8 (Term model). Let Γ* be a maximally consistent,
saturated set of sentences in a language L. The term model M(Γ*)
of Γ* is the structure defined as follows:

1. The domain |M(Γ*)| is the set of all closed terms of L.




2. The interpretation of a constant symbol c is c itself:
c^{M(Γ*)} = c.

3. The function symbol f is assigned the function which, given
as arguments the closed terms t₁, . . . , tₙ, has as value the
closed term f (t₁, . . . , tₙ):

f^{M(Γ*)}(t₁, . . . , tₙ) = f (t₁, . . . , tₙ)

4. If R is an n-place predicate symbol, then ⟨t₁, . . . , tₙ⟩ ∈ R^{M(Γ*)}
iff R(t₁, . . . , tₙ) ∈ Γ*.
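For example, if L contains a single constant symbol c and a single
one-place function symbol f, then |M(Γ*)| = {c, f(c), f(f(c)), . . .},
every closed term is interpreted as itself, and for a one-place
predicate symbol P we have f(c) ∈ P^{M(Γ*)} iff P(f(c)) ∈ Γ*.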

Lemma 8.9 (Truth Lemma). Suppose A does not contain =. Then
M(Γ*) ⊨ A iff A ∈ Γ*.

Proof. We prove both directions simultaneously, and by induction


on A.

1. A ≡ ⊥: M(Γ*) ⊭ ⊥ by definition of satisfaction. On the
other hand, ⊥ ∉ Γ* since Γ* is consistent.

2. A ≡ R(t₁, . . . , tₙ): M(Γ*) ⊨ R(t₁, . . . , tₙ) iff ⟨t₁, . . . , tₙ⟩ ∈
R^{M(Γ*)} (by the definition of satisfaction) iff R(t₁, . . . , tₙ) ∈
Γ* (by the construction of M(Γ*)).

3. A ≡ ¬B: M(Γ*) ⊨ A iff M(Γ*) ⊭ B (by definition of
satisfaction). By induction hypothesis, M(Γ*) ⊭ B iff B ∉
Γ*. By Proposition 8.2(2), ¬B ∈ Γ* if B ∉ Γ*; and ¬B ∉ Γ*
if B ∈ Γ* since Γ* is consistent.

4. A ≡ B ∧ C : exercise.

5. A ≡ B ∨ C : M(Γ*) ⊨ A iff M(Γ*) ⊨ B or M(Γ*) ⊨ C
(by definition of satisfaction) iff B ∈ Γ* or C ∈ Γ* (by
induction hypothesis). This is the case iff (B ∨ C ) ∈ Γ* (by
Proposition 8.2(4)).

6. A ≡ B → C : exercise.

7. A ≡ ∀x B(x): exercise.

8. A ≡ ∃x B(x): First suppose that M(Γ*) ⊨ A. By the
definition of satisfaction, for some variable assignment s,
M(Γ*), s ⊨ B(x). The value s(x) is some term t ∈ |M(Γ*)|.
Thus, M(Γ*) ⊨ B(t), and by our induction hypothesis,
B(t) ∈ Γ*. By Theorem 7.27 we have Γ* ⊢ ∃x B(x). Then,
by Proposition 8.2(1), we can conclude that A ∈ Γ*.
Conversely, suppose that ∃x B(x) ∈ Γ*. Because Γ* is satu-
rated, (∃x B(x) → B(c)) ∈ Γ* for some constant symbol c.
By Proposition 7.24 together with Proposition 8.2(1), B(c) ∈
Γ*. By inductive hypothesis, M(Γ*) ⊨ B(c). Now consider
the variable assignment with s(x) = c^{M(Γ*)}. Then
M(Γ*), s ⊨ B(x). By definition of satisfaction, M(Γ*) ⊨
∃x B(x). □

8.7 Identity
The construction of the term model given in the preceding sec-
tion is enough to establish completeness for first-order logic for
sets Γ that do not contain =. The term model satisfies every
A ∈ Γ* which does not contain = (and hence all A ∈ Γ). It does
not work, however, if = is present. The reason is that Γ* then
may contain a sentence t = t′, but in the term model the value of
any term is that term itself. Hence, if t and t′ are different terms,
their values in the term model—i.e., t and t′, respectively—are
different, and so t = t′ is false. We can fix this, however, using a
construction known as “factoring.”

Definition 8.10. Let Γ* be a maximally consistent set of sen-
tences in L. We define the relation ≈ on the set of closed terms
of L by

t ≈ t′ iff t = t′ ∈ Γ*

Proposition 8.11. The relation ≈ has the following properties:

1. ≈ is reflexive.

2. ≈ is symmetric.

3. ≈ is transitive.

4. If t ≈ t′, f is a function symbol, and t₁, . . . , tᵢ₋₁, tᵢ₊₁, . . . , tₙ
are terms, then

f (t₁, . . . , tᵢ₋₁, t, tᵢ₊₁, . . . , tₙ) ≈ f (t₁, . . . , tᵢ₋₁, t′, tᵢ₊₁, . . . , tₙ).

5. If t ≈ t′, R is a predicate symbol, and t₁, . . . , tᵢ₋₁, tᵢ₊₁, . . . , tₙ
are terms, then

R(t₁, . . . , tᵢ₋₁, t, tᵢ₊₁, . . . , tₙ) ∈ Γ* iff
R(t₁, . . . , tᵢ₋₁, t′, tᵢ₊₁, . . . , tₙ) ∈ Γ*.

Proof. Since Γ* is maximally consistent, t = t′ ∈ Γ* iff Γ* ⊢ t = t′.
Thus it is enough to show the following:

1. Γ* ⊢ t = t for all terms t.

2. If Γ* ⊢ t = t′ then Γ* ⊢ t′ = t.

3. If Γ* ⊢ t = t′ and Γ* ⊢ t′ = t″, then Γ* ⊢ t = t″.

4. If Γ* ⊢ t = t′, then
Γ* ⊢ f (t₁, . . . , tᵢ₋₁, t, tᵢ₊₁, . . . , tₙ) = f (t₁, . . . , tᵢ₋₁, t′, tᵢ₊₁, . . . , tₙ)
for every n-place function symbol f and terms t₁, . . . , tᵢ₋₁,
tᵢ₊₁, . . . , tₙ.

5. If Γ* ⊢ t = t′ and Γ* ⊢ R(t₁, . . . , tᵢ₋₁, t, tᵢ₊₁, . . . , tₙ), then
Γ* ⊢ R(t₁, . . . , tᵢ₋₁, t′, tᵢ₊₁, . . . , tₙ) for every n-place predicate
symbol R and terms t₁, . . . , tᵢ₋₁, tᵢ₊₁, . . . , tₙ.

Definition 8.12. Suppose Γ* is a maximally consistent set in a
language L, t is a term, and ≈ as in the previous definition. Then:

[t]≈ = {t′ : t′ ∈ Trm(L), t ≈ t′}

and Trm(L)/≈ = {[t]≈ : t ∈ Trm(L)}.

Definition 8.13. Let M = M(Γ*) be the term model for Γ*. Then
M/≈ is the following structure:

1. |M/≈| = Trm(L)/≈.

2. c^{M/≈} = [c]≈

3. f^{M/≈}([t₁]≈, . . . , [tₙ]≈) = [f (t₁, . . . , tₙ)]≈

4. ⟨[t₁]≈, . . . , [tₙ]≈⟩ ∈ R^{M/≈} iff M ⊨ R(t₁, . . . , tₙ).

Note that we have defined f^{M/≈} and R^{M/≈} for elements of
Trm(L)/≈ by referring to them as [t]≈, i.e., via representatives t ∈
[t]≈. We have to make sure that these definitions do not depend
on the choice of these representatives, i.e., that for some other
choices t′ which determine the same equivalence classes ([t]≈ =
[t′]≈), the definitions yield the same result. For instance, if R is a
one-place predicate symbol, the last clause of the definition says
that [t]≈ ∈ R^{M/≈} iff M |= R(t). If for some other term t′ with
t ≈ t′, M ⊭ R(t′), then the definition would require [t′]≈ ∉ R^{M/≈}.
If t ≈ t′, then [t]≈ = [t′]≈, but we can’t have both [t]≈ ∈ R^{M/≈}
and [t]≈ ∉ R^{M/≈}. However, Proposition 8.11 guarantees that this
cannot happen.

Proposition 8.14. M/≈ is well defined, i.e., if t1, . . . , tn, t′1, . . . , t′n
are terms, and ti ≈ t′i, then

1. [f (t1, . . . , tn )]≈ = [f (t′1, . . . , t′n )]≈, i.e.,

f (t1, . . . , tn ) ≈ f (t′1, . . . , t′n )

and

2. M |= R(t1, . . . , tn ) iff M |= R(t′1, . . . , t′n ), i.e.,

R(t1, . . . , tn ) ∈ Γ∗ iff R(t′1, . . . , t′n ) ∈ Γ∗.

Proof. Follows from Proposition 8.11 by induction on n. 

Lemma 8.15. M/≈ |= A iff A ∈ Γ ∗ for all sentences A.

Proof. By induction on A, just as in the proof of Lemma 8.9. The


only case that needs additional attention is when A ≡ t = t′.

M/≈ |= t = t′ iff [t]≈ = [t′]≈ (by definition of M/≈)
iff t ≈ t′ (by definition of [t]≈)
iff t = t′ ∈ Γ∗ (by definition of ≈).

Note that while M(Γ ∗ ) is always countable and infinite, M/≈


may be finite, since it may turn out that there are only finitely
many classes [t ]≈ . This is to be expected, since Γ may contain
sentences which require any structure in which they are true to
be finite. For instance, ∀x ∀y x = y is a consistent sentence, but
is satisfied only in structures with a domain that contains exactly
one element.

8.8 The Completeness Theorem


Let’s combine our results: we arrive at Gödel’s completeness
theorem.

Theorem 8.16 (Completeness Theorem). Let Γ be a set of sen-


tences. If Γ is consistent, it is satisfiable.

Proof. Suppose Γ is consistent. By Lemma 8.7, there is a Γ ∗ ⊇


Γ which is maximally consistent and saturated. If Γ does not
contain =, then by Lemma 8.9, M(Γ ∗ ) |= A iff A ∈ Γ ∗ . From this
it follows in particular that for all A ∈ Γ, M(Γ ∗ ) |= A, so Γ is
satisfiable. If Γ does contain =, then by Lemma 8.15, M/≈ |= A
iff A ∈ Γ ∗ for all sentences A. In particular, M/≈ |= A for all
A ∈ Γ, so Γ is satisfiable. 

Corollary 8.17 (Completeness Theorem, Second Version). For
any set of sentences Γ and any sentence A: if Γ ⊨ A then Γ ⊢ A.

Proof. Note that the Γ’s in Corollary 8.17 and Theorem 8.16 are
universally quantified. To make sure we do not confuse ourselves,
let us restate Theorem 8.16 using a different variable: for any set
of sentences ∆, if ∆ is consistent, it is satisfiable. By contraposi-
tion, if ∆ is not satisfiable, then ∆ is inconsistent. We will use this
to prove the corollary.
Suppose that Γ ⊨ A. Then Γ ∪ {¬A} is unsatisfiable by Propo-
sition 5.46. Taking Γ ∪ {¬A} as our ∆, the previous version of
Theorem 8.16 gives us that Γ ∪ {¬A} is inconsistent. By Propo-
sition 7.13, Γ ⊢ A.

8.9 The Compactness Theorem


One important consequence of the completeness theorem is the
compactness theorem. The compactness theorem states that if
each finite subset of a set of sentences is satisfiable, the entire

set is satisfiable—even if the set itself is infinite. This is far from


obvious. There is nothing that seems to rule out, at first glance at
least, the possibility of there being infinite sets of sentences which
are contradictory, but the contradiction only arises, so to speak,
from the infinite number. The compactness theorem says that
such a scenario can be ruled out: there are no unsatisfiable infinite
sets of sentences each finite subset of which is satisfiable. Like the
completeness theorem, it has a version related to entailment:
if an infinite set of sentences entails something, already a finite
subset does.

Definition 8.18. A set Γ of formulas is finitely satisfiable if and


only if every finite Γ0 ⊆ Γ is satisfiable.

Theorem 8.19 (Compactness Theorem). The following hold for
any set of sentences Γ and any sentence A:

1. Γ ⊨ A iff there is a finite Γ0 ⊆ Γ such that Γ0 ⊨ A.

2. Γ is satisfiable if and only if it is finitely satisfiable.

Proof. We prove (2). If Γ is satisfiable, then there is a structure M


such that M |= A for all A ∈ Γ. Of course, this M also satisfies
every finite subset of Γ, so Γ is finitely satisfiable.
Now suppose that Γ is finitely satisfiable. Then every finite
subset Γ0 ⊆ Γ is satisfiable. By soundness, every finite subset is
consistent. Then Γ itself must be consistent. For assume it is not,
i.e., Γ ⊢ ⊥. But derivations are finite, and so already some finite
subset Γ0 ⊆ Γ must be inconsistent (cf. Proposition 7.15). But
we just showed they are all consistent, a contradiction. Now by
completeness, since Γ is consistent, it is satisfiable. 

Example 8.20. In every model M of a theory Γ, each term t of


course picks out an element of |M|. Can we guarantee that it is
also true that every element of |M| is picked out by some term or
other? In other words, are there theories Γ all models of which

are covered? The compactness theorem shows that this is not the
case if Γ has infinite models. Here’s how to see this: Let M be
an infinite model of Γ, and let c be a constant symbol not in the
language of Γ. Let ∆ be the set of all sentences c ≠ t for t a term
in the language L of Γ, i.e.,

∆ = {c ≠ t : t ∈ Trm(L)}.

A finite subset of Γ ∪ ∆ can be written as Γ′ ∪ ∆′, with Γ′ ⊆ Γ
and ∆′ ⊆ ∆. Since ∆′ is finite, it can contain only finitely many
terms. Let a ∈ |M| be an element of |M| not picked out by any
of them, and let M′ be the structure that is just like M, but also
c^{M′} = a. Since a ≠ Val^M(t) for all t occurring in ∆′, M′ |= ∆′.
Since M |= Γ, Γ′ ⊆ Γ, and c does not occur in Γ, also M′ |= Γ′.
Together, M′ |= Γ′ ∪ ∆′ for every finite subset Γ′ ∪ ∆′ of Γ ∪ ∆. So
every finite subset of Γ ∪ ∆ is satisfiable. By compactness, Γ ∪ ∆
itself is satisfiable. So there are models M |= Γ ∪ ∆. Every such
M is a model of Γ, but is not covered, since Val^M(c) ≠ Val^M(t)
for all terms t of L.

Example 8.21. Consider a language L containing the predicate


symbol <, constant symbols 0, 1, and function symbols +, ×, −,
÷. Let Γ be the set of all sentences in this language true in Q
with domain Q and the obvious interpretations. Γ is the set of
all sentences of L true about the rational numbers. Of course,
in Q (and even in R), there are no numbers which are greater
than 0 but less than 1/k for all k ∈ Z+. Such a number, if it
existed, would be an infinitesimal: non-zero, but infinitely small.
The compactness theorem shows that there are models of Γ in
which infinitesimals exist: Let ∆ be {0 < c} ∪ {c < (1 ÷ k) : k ∈ Z+}
(where k = (1 + (1 + · · · + (1 + 1) . . . )) with k 1’s). For any finite
subset ∆′ of ∆ there is a K such that all the sentences c < (1 ÷ k) in ∆′
have k < K. If we expand Q to Q′ with c^{Q′} = 1/K we have that
Q′ |= Γ ∪ ∆′, and so Γ ∪ ∆ is finitely satisfiable (Exercise: prove
this in detail). By compactness, Γ ∪ ∆ is satisfiable. Any model S
of Γ ∪ ∆ contains an infinitesimal, namely c^S.

Example 8.22. We know that first-order logic with the identity
predicate can express that the domain must have some minimal
size: the sentence A≥n (which says “there are at least
n distinct objects”) is true only in structures where |M| has at
least n objects. So if we take

∆ = {A≥n : n ≥ 1}

then any model of ∆ must be infinite. Thus, we can guarantee that


a theory only has infinite models by adding ∆ to it: the models
of Γ ∪ ∆ are all and only the infinite models of Γ.
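
Here, A≥n can be spelled out explicitly (one standard way of writing it):

A≥n ≡ ∃x1 . . . ∃xn (x1 ≠ x2 ∧ x1 ≠ x3 ∧ · · · ∧ xn−1 ≠ xn),

where the conjunction contains xi ≠ xj for every pair i < j (for n = 1, one can simply take ∃x1 x1 = x1).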
So first-order logic can express infinitude. The compactness
theorem shows that it cannot express finitude, however. For sup-
pose some set of sentences Λ were satisfied in all and only finite
structures. Then ∆ ∪ Λ is finitely satisfiable. Why? Suppose
∆′ ∪ Λ′ ⊆ ∆ ∪ Λ is finite with ∆′ ⊆ ∆ and Λ′ ⊆ Λ. Let n be the
largest number such that A≥n ∈ ∆′. Λ, being satisfied in all finite
structures, has a model M with finitely many but ≥ n elements.
But then M |= ∆′ ∪ Λ′. By compactness, ∆ ∪ Λ has an infinite
model, contradicting the assumption that Λ is satisfied only in
finite structures.

8.10 The Löwenheim-Skolem Theorem


The Löwenheim-Skolem Theorem says that if a theory has an in-
finite model, then it also has a model that is at most countably
infinite. An immediate consequence of this fact is that first-order
logic cannot express that the size of a structure is uncountable:
any sentence or set of sentences satisfied in all uncountable struc-
tures is also satisfied in some countably infinite structure.

Theorem 8.23. If Γ is consistent then it has a countable
model, i.e., it is satisfiable in a structure whose domain is either finite
or countably infinite.

Proof. If Γ is consistent, the structure M delivered by the proof


of the completeness theorem has a domain |M| whose cardinality
is bounded by that of the set of the terms of the language L. So
M is at most countably infinite. 

Theorem 8.24. If Γ is a consistent set of sentences in the language


of first-order logic without identity, then it has a countably infinite
model, i.e., it is satisfiable in a structure whose domain is infinite and
countable.

Proof. If Γ is consistent and contains no sentences in which iden-


tity appears, then the structure M delivered by the proof of the
completeness theorem has a domain |M| whose cardinality is iden-
tical to that of the set of the terms of the language L. So M is
denumerably infinite. 

Example 8.25 (Skolem’s Paradox). Zermelo-Fraenkel set the-


ory ZFC is a very powerful framework in which practically all
mathematical statements can be expressed, including facts about
the sizes of sets. So for instance, ZFC can prove that the set R
of real numbers is uncountable, it can prove Cantor’s Theorem
that the power set of any set is larger than the set itself, etc. If
ZFC is consistent, its models are all infinite, and moreover, they
all contain elements about which the theory says that they are
uncountable, such as the element that makes true the theorem
of ZFC that the power set of the natural numbers exists. By the
Löwenheim-Skolem Theorem, ZFC also has countable models—
models that contain “uncountable” sets but which themselves are
countable.

Summary

The completeness theorem is the converse of the soundness


theorem. In one form it states that if Γ ⊨ A then Γ ⊢ A, in an-
other that if Γ is consistent then it is satisfiable. We proved the
second form (and derived the first from the second). The proof is
involved and requires a number of steps. We start with a consis-
tent set Γ. First we add infinitely many new constant symbols c i
as well as formulas of the form ∃x A(x) → A(c ) where each for-
mula A(x) with a free variable in the expanded language is paired
with one of the new constants. This results in a saturated con-
sistent set of sentences containing Γ. It is still consistent. Now
we take that set and extend it to a maximally consistent set. A
maximally consistent set has the nice property that for any sen-
tence A, either A or ¬A is in the set. Since we started from a
saturated set, we now have a saturated and maximally consistent
set of sentences Γ ∗ that includes Γ. From this set it is now pos-
sible to define a structure M such that M(Γ ∗ ) |= A iff A ∈ Γ ∗ . In
particular, M(Γ ∗ ) |= Γ, i.e., Γ is satisfiable. If = is present, the
construction is slightly more complex.

Two important corollaries follow from the completeness the-


orem. The compactness theorem states that Γ ⊨ A iff Γ0 ⊨
A for some finite Γ0 ⊆ Γ. An equivalent formulation is that
Γ is satisfiable iff every finite Γ0 ⊆ Γ is satisfiable. The com-
pactness theorem is useful to prove the existence of structures
with certain properties. For instance, we can use it to show that
there are infinite models for every theory which has arbitrarily
large finite models. This means in particular that finitude can-
not be expressed in first-order logic. The second corollary, the
Löwenheim-Skolem Theorem, states that every satisfiable Γ
has a countable model. It in turn shows that uncountability can-
not be expressed in first-order logic.

Problems
Problem 8.1. Complete the proof of Proposition 8.2.

Problem 8.2. Complete the proof of Lemma 8.9.

Problem 8.3. Complete the proof of Proposition 8.11.

Problem 8.4. Use Corollary 8.17 to prove Theorem 8.16, thus


showing that the two formulations of the completeness theorem
are equivalent.

Problem 8.5. In order for a derivation system to be complete,


its rules must be strong enough to prove every unsatisfiable set
inconsistent. Which of the rules of LK were necessary to prove
completeness? Are any of these rules not used anywhere in the
proof? In order to answer these questions, make a list or diagram
that shows which of the rules of LK were used in which results
that lead up to the proof of Theorem 8.16. Be sure to note any
tacit uses of rules in these proofs.

Problem 8.6. Prove (1) of Theorem 8.19.

Problem 8.7. In the standard model of arithmetic N, there is no


element k ∈ |N| which satisfies every formula n < x (where n is
0′ . . . ′ with n ′’s). Use the compactness theorem to show that the
set of sentences in the language of arithmetic which are true in
the standard model of arithmetic N are also true in a structure N 0
that contains an element which does satisfy every formula n < x.
CHAPTER 9

Beyond First-order Logic
9.1 Overview
First-order logic is not the only system of logic of interest: there
are many extensions and variations of first-order logic. A logic
typically consists of the formal specification of a language, usu-
ally, but not always, a deductive system, and usually, but not
always, an intended semantics. But the technical use of the term
raises an obvious question: what do logics that are not first-order
logic have to do with the word “logic,” used in the intuitive or
philosophical sense? All of the systems described below are de-
signed to model reasoning of some form or another; can we say
what makes them logical?
No easy answers are forthcoming. The word “logic” is used
in different ways and in different contexts, and the notion, like
that of “truth,” has been analyzed from numerous philosophical
stances. For example, one might take the goal of logical reason-


ing to be the determination of which statements are necessarily


true, true a priori, true independent of the interpretation of the
nonlogical terms, true by virtue of their form, or true by linguistic
convention; and each of these conceptions requires a good deal
of clarification. Even if one restricts one’s attention to the kind of
logic used in mathematics, there is little agreement as to its scope.
For example, in the Principia Mathematica, Russell and Whitehead
tried to develop mathematics on the basis of logic, in the logicist
tradition begun by Frege. Their system of logic was a form of
higher-type logic similar to the one described below. In the end
they were forced to introduce axioms which, by most standards,
do not seem purely logical (notably, the axiom of infinity, and
the axiom of reducibility), but one might nonetheless hold that
some forms of higher-order reasoning should be accepted as logi-
cal. In contrast, Quine, whose ontology does not admit “proposi-
tions” as legitimate objects of discourse, argues that second-order
and higher-order logic are really manifestations of set theory in
sheep’s clothing; in other words, systems involving quantification
over predicates are not purely logical.
For now, it is best to leave such philosophical issues for a rainy
day, and simply think of the systems below as formal idealizations
of various kinds of reasoning, logical or otherwise.

9.2 Many-Sorted Logic


In first-order logic, variables and quantifiers range over a single
domain. But it is often useful to have multiple (disjoint) domains:
for example, you might want to have a domain of numbers, a do-
main of geometric objects, a domain of functions from numbers
to numbers, a domain of abelian groups, and so on.
Many-sorted logic provides this kind of framework. One starts
with a list of “sorts”—the “sort” of an object indicates the “do-
main” it is supposed to inhabit. One then has variables and quan-
tifiers for each sort, and (usually) an identity predicate for each
sort. Functions and relations are also “typed” by the sorts of ob-

jects they can take as arguments. Otherwise, one keeps the usual
rules of first-order logic, with versions of the quantifier-rules re-
peated for each sort.
For example, to study international relations we might choose
a language with two sorts of objects, French citizens and German
citizens. We might have a unary relation, “drinks wine,” for ob-
jects of the first sort; another unary relation, “eats wurst,” for
objects of the second sort; and a binary relation, “forms a multi-
national married couple,” which takes two arguments, where the
first argument is of the first sort and the second argument is of
the second sort. If we use variables a, b, c to range over French
citizens and x, y, z to range over German citizens, then
∀a ∀x (MarriedTo(a, x) → (DrinksWine(a) ∨ ¬EatsWurst(x)))
asserts that if any French person is married to a German, either
the French person drinks wine or the German doesn’t eat wurst.
Many-sorted logic can be embedded in first-order logic in a
natural way, by lumping all the objects of the many-sorted do-
mains together into one first-order domain, using unary predicate
symbols to keep track of the sorts, and relativizing quantifiers.
For example, the first-order language corresponding to the exam-
ple above would have unary predicate symbols “German” and
“French,” in addition to the other relations described, with the
sort requirements erased. A sorted quantifier ∀x A, where x is a
variable of the German sort, translates to

∀x (German(x) → A).

We need to add axioms that ensure that the sorts are separate—
e.g., ∀x ¬(German(x) ∧ French(x))—as well as axioms that guar-
antee that “drinks wine” only holds of objects satisfying the pred-
icate French(x), etc. With these conventions and axioms, it is
not difficult to show that many-sorted sentences translate to first-
order sentences, and many-sorted derivations translate to first-
order derivations. Also, many-sorted structures “translate” to cor-
responding first-order structures and vice-versa, so we also have
a completeness theorem for many-sorted logic.
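
As an illustration (not in the original example, but following the recipe just described), the sorted sentence about wine and wurst above translates to the first-order sentence

∀a ∀x ((French(a) ∧ German(x)) → (MarriedTo(a, x) → (DrinksWine(a) ∨ ¬EatsWurst(x)))),

in which both quantifiers now range over the single combined domain.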

9.3 Second-Order Logic


The language of second-order logic allows one to quantify not
just over a domain of individuals, but over relations on that do-
main as well. Given a first-order language L, for each k one adds
variables R which range over k -ary relations, and allows quantifi-
cation over those variables. If R is a variable for a k -ary rela-
tion, and t1 , . . . , tk are ordinary (first-order) terms, R(t1, . . . , tk )
is an atomic formula. Otherwise, the set of formulas is defined
just as in the case of first-order logic, with additional clauses for
second-order quantification. Note that we only have the identity
predicate for first-order terms: if R and S are relation variables
of the same arity k , we can define R = S to be an abbreviation
for
∀x 1 . . . ∀x k (R(x 1, . . . , x k ) ↔ S (x 1, . . . , x k )).
The rules for second-order logic simply extend the quanti-
fier rules to the new second order variables. Here, however, one
has to be a little bit careful to explain how these variables in-
teract with the predicate symbols of L, and with formulas of L
more generally. At the bare minimum, relation variables count
as terms, so one has inferences of the form

A(R) ⊢ ∃R A(R)

But if L is the language of arithmetic with a constant relation
symbol <, one would also expect the following inference to be
valid:

x < y ⊢ ∃R R(x, y)

or for a given formula A,

A(x1, . . . , xk ) ⊢ ∃R R(x1, . . . , xk )

More generally, we might want to allow inferences of the form

A[λx̄. B(x̄)/R] ⊢ ∃R A

where A[λx̄. B(x̄)/R] denotes the result of replacing every atomic
formula of the form R(t1, . . . , tk ) in A by B(t1, . . . , tk ). This last rule

is equivalent to having a comprehension schema, i.e., an axiom of


the form

∃R ∀x1 . . . ∀xk (A(x1, . . . , xk ) ↔ R(x1, . . . , xk )),

one for each formula A in the second-order language, in which


R is not a free variable. (Exercise: show that if R is allowed to
occur in A, this schema is inconsistent!)
When logicians refer to the “axioms of second-order logic”
they usually mean the minimal extension of first-order logic by
second-order quantifier rules together with the comprehension
schema. But it is often interesting to study weaker subsystems of
these axioms and rules. For example, note that in its full gen-
erality the axiom schema of comprehension is impredicative: it
allows one to assert the existence of a relation R(x 1, . . . , x k ) that
is “defined” by a formula with second-order quantifiers; and these
quantifiers range over the set of all such relations—a set which
includes R itself! Around the turn of the twentieth century, a com-
mon reaction to Russell’s paradox was to lay the blame on such
definitions, and to avoid them in developing the foundations of
mathematics. If one prohibits the use of second-order quantifiers
in the formula A, one has a predicative form of comprehension,
which is somewhat weaker.
From the semantic point of view, one can think of a second-
order structure as consisting of a first-order structure for the lan-
guage, coupled with a set of relations on the domain over which
the second-order quantifiers range (more precisely, for each k
there is a set of relations of arity k ). Of course, if comprehension
is included in the proof system, then we have the added require-
ment that there are enough relations in the “second-order part”
to satisfy the comprehension axioms—otherwise the proof sys-
tem is not sound! One easy way to insure that there are enough
relations around is to take the second-order part to consist of all
the relations on the first-order part. Such a structure is called
full, and, in a sense, is really the “intended structure” for the lan-
guage. If we restrict our attention to full structures we have what

is known as the full second-order semantics. In that case, specify-


ing a structure boils down to specifying the first-order part, since
the contents of the second-order part follow from that implicitly.
To summarize, there is some ambiguity when talking about
second-order logic. In terms of the proof system, one might have
in mind either

1. A “minimal” second-order proof system, together with some


comprehension axioms.

2. The “standard” second-order proof system, with full com-


prehension.

In terms of the semantics, one might be interested in either

1. The “weak” semantics, where a structure consists of a first-


order part, together with a second-order part big enough
to satisfy the comprehension axioms.

2. The “standard” second-order semantics, in which one con-


siders full structures only.

When logicians do not specify the proof system or the semantics


they have in mind, they are usually referring to the second item
on each list. The advantage to using this semantics is that, as
we will see, it gives us categorical descriptions of many natural
mathematical structures; at the same time, the proof system is
quite strong, and sound for this semantics. The drawback is that
the proof system is not complete for the semantics; in fact, no ef-
fectively given proof system is complete for the full second-order
semantics. On the other hand, we will see that the proof system
is complete for the weakened semantics; this implies that if a sen-
tence is not provable, then there is some structure, not necessarily
the full one, in which it is false.
The language of second-order logic is quite rich. One can
identify unary relations with subsets of the domain, and so in

particular you can quantify over these sets; for example, one can
express induction for the natural numbers with a single axiom

∀R ((R() ∧ ∀x (R(x) → R(x 0))) → ∀x R(x)).

If one takes the language of arithmetic to have symbols 0, ′, +, ×


and <, one can add the following axioms to describe their behav-
ior:

1. ∀x ¬x′ = 0

2. ∀x ∀y (x′ = y′ → x = y)

3. ∀x (x + 0) = x

4. ∀x ∀y (x + y′) = (x + y)′

5. ∀x (x × 0) = 0

6. ∀x ∀y (x × y′) = ((x × y) + x)

7. ∀x ∀y (x < y ↔ ∃z y = (x + z′))

It is not difficult to show that these axioms, together with the


axiom of induction above, provide a categorical description of
the structure N, the standard model of arithmetic, provided we
are using the full second-order semantics. Given any structure A
in which these axioms are true, define a function f from N to the
domain of A using ordinary recursion on N, so that f (0) = 0^A
and f (x + 1) = ′^A(f (x)). Using ordinary induction on N and the
fact that axioms (1) and (2) hold in A, we see that f is injective.
To see that f is surjective, let P be the set of elements of |A|
that are in the range of f. Since A is full, P is in the second-
order domain. By the construction of f, we know that 0^A is in P,
and that P is closed under ′^A. The fact that the induction axiom
holds in A (in particular, for P ) guarantees that P is equal to the
entire first-order domain of A. This shows that f is a bijection.
Showing that f is a homomorphism is no more difficult, using
ordinary induction on N repeatedly.

In set-theoretic terms, a function is just a special kind of re-


lation; for example, a unary function f can be identified with
a binary relation R satisfying ∀x ∃y R(x, y). As a result, one can
quantify over functions too. Using the full semantics, one can
then define the class of infinite structures to be the class of struc-
tures A for which there is an injective function from the domain
of A to a proper subset of itself:

∃f (∀x ∀y (f (x) = f (y) → x = y) ∧ ∃y ∀x f (x) ≠ y).

The negation of this sentence then defines the class of finite struc-
tures.
In addition, one can define the class of well-orderings, by
adding the following to the definition of a linear ordering:

∀P (∃x P (x) → ∃x (P (x) ∧ ∀y (y < x → ¬P (y)))).

This asserts that every non-empty set has a least element, modulo
the identification of “set” with “one-place relation”. For another
example, one can express the notion of connectedness for graphs,
by saying that there is no nontrivial separation of the vertices into
disconnected parts:

¬∃A (∃x A(x) ∧ ∃y ¬A(y) ∧ ∀w ∀z ((A(w) ∧ ¬A(z )) → ¬R(w, z ))).

For yet another example, you might try as an exercise to define


the class of finite structures whose domain has even size. More
strikingly, one can provide a categorical description of the real
numbers as a complete ordered field containing the rationals.
In short, second-order logic is much more expressive than
first-order logic. That’s the good news; now for the bad. We have
already mentioned that there is no effective proof system that
is complete for the full second-order semantics. For better or
for worse, many of the properties of first-order logic are absent,
including compactness and the Löwenheim-Skolem theorems.
On the other hand, if one is willing to give up the full second-
order semantics in terms of the weaker one, then the minimal

second-order proof system is complete for this semantics. In other


words, if we read ` as “proves in the minimal system” and  as
“logically implies in the weaker semantics”, we can show that
whenever Γ  A then Γ ` A. If one wants to include specific com-
prehension axioms in the proof system, one has to restrict the
semantics to second-order structures that satisfy these axioms:
for example, if ∆ consists of a set of comprehension axioms (pos-
sibly all of them), we have that if Γ ∪ ∆  A, then Γ ∪ ∆ ` A. In
particular, if A is not provable using the comprehension axioms
we are considering, then there is a model of ¬A in which these
comprehension axioms nonetheless hold.
The easiest way to see that the completeness theorem holds
for the weaker semantics is to think of second-order logic as a
many-sorted logic, as follows. One sort is interpreted as the ordi-
nary “first-order” domain, and then for each k we have a domain
of “relations of arity k .” We take the language to have built-in
relation symbols “truek (R, x1, . . . , xk )”, which are meant to assert
that R holds of x 1 , . . . , x k , where R is a variable of the sort “k -ary
relation” and x 1 , . . . , x k are objects of the first-order sort.
With this identification, the weak second-order semantics is
essentially the usual semantics for many-sorted logic; and we have
already observed that many-sorted logic can be embedded in first-
order logic. Modulo the translations back and forth, then, the
weaker conception of second-order logic is really a form of first-
order logic in disguise, where the domain contains both “objects”
and “relations” governed by the appropriate axioms.

9.4 Higher-Order Logic


Passing from first-order logic to second-order logic enabled us
to talk about sets of objects in the first-order domain, within the
formal language. Why stop there? For example, third-order logic
should enable us to deal with sets of sets of objects, or perhaps
even sets which contain both objects and sets of objects. And
fourth-order logic will let us talk about sets of objects of that kind.

As you may have guessed, one can iterate this idea arbitrarily.
In practice, higher-order logic is often formulated in terms
of functions instead of relations. (Modulo the natural identifica-
tions, this difference is inessential.) Given some basic “sorts” A,
B, C , . . . (which we will now call “types”), we can create new ones
by stipulating

If σ and τ are finite types then so is σ → τ.

Think of types as syntactic “labels,” which classify the objects


we want in our domain; σ → τ describes those objects that are
functions which take objects of type σ to objects of type τ. For
example, we might want to have a type Ω of truth values, “true”
and “false,” and a type N of natural numbers. In that case, you
can think of objects of type N → Ω as unary relations, or subsets
of N; objects of type N → N are functions from natural numbers to
natural numbers; and objects of type (N → N) → N are “function-
als,” that is, higher-type functions that take functions to numbers.
As in the case of second-order logic, one can think of higher-
order logic as a kind of many-sorted logic, where there is a sort for
each type of object we want to consider. But it is usually clearer
just to define the syntax of higher-type logic from the ground up.
For example, we can define a set of finite types inductively, as
follows:

1. N is a finite type.

2. If σ and τ are finite types, then so is σ → τ.

3. If σ and τ are finite types, so is σ × τ.

Intuitively, N denotes the type of the natural numbers, σ → τ


denotes the type of functions from σ to τ, and σ × τ denotes the
type of pairs of objects, one from σ and one from τ. We can then
define a set of terms inductively, as follows:

1. For each type σ, there is a stock of variables x, y, z , . . . of


type σ

2. 0 is a term of type N

3. S (successor) is a term of type N → N

4. If s is a term of type σ, and t is a term of type N → (σ → σ),


then Rs t is a term of type N → σ

5. If s is a term of type τ → σ and t is a term of type τ, then


s (t ) is a term of type σ

6. If s is a term of type σ and x is a variable of type τ, then


λx . s is a term of type τ → σ.

7. If s is a term of type σ and t is a term of type τ, then hs, t i


is a term of type σ × τ.

8. If s is a term of type σ × τ then p 1 (s ) is a term of type σ


and p 2 (s ) is a term of type τ.

Intuitively, Rs t denotes the function defined recursively by

Rst (0) = s
Rst (x + 1) = t (x, Rst (x)),

hs, t i denotes the pair whose first component is s and whose sec-
ond component is t , and p 1 (s ) and p 2 (s ) denote the first and
second elements (“projections”) of s . Finally, λx . s denotes the
function f defined by
f (x) = s
for any x of type σ; so item (6) gives us a form of comprehension,
enabling us to define functions using terms. Formulas are built
up from identity predicate statements s = t between terms of the
same type, the usual propositional connectives, and higher-type
quantification. One can then take the axioms of the system to be
the basic equations governing the terms defined above, together
with the usual rules of logic with quantifiers and identity predi-
cate.
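
As an illustration (a sketch using the term constructors above, not an official definition of the system): addition on N can be defined from the recursor by

Add ≡ λx. R x (λn. λr. S (r )),

a term of type N → (N → N). By the recursion equations, Add(x)(0) = x and Add(x)(y + 1) = S (Add(x)(y)), so Add(x)(y) computes x + y.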

If one augments the finite type system with a type Ω of truth


values, one has to include axioms which govern its use as well. In
fact, if one is clever, one can get rid of complex formulas entirely,
replacing them with terms of type Ω! The proof system can then
be modified accordingly. The result is essentially the simple theory
of types set forth by Alonzo Church in the 1930s.
As in the case of second-order logic, there are different ver-
sions of higher-type semantics that one might want to use. In the
full version, variables of type σ → τ range over the set of all
functions from the objects of type σ to objects of type τ. As you
might expect, this semantics is too strong to admit a complete,
effective proof system. But one can consider a weaker semantics,
in which a structure consists of sets of elements Tτ for each type
τ, together with appropriate operations for application, projec-
tion, etc. If the details are carried out correctly, one can obtain
completeness theorems for the kinds of proof systems described
above.
Higher-type logic is attractive because it provides a frame-
work in which we can embed a good deal of mathematics in a
natural way: starting with N, one can define real numbers, con-
tinuous functions, and so on. It is also particularly attractive in
the context of intuitionistic logic, since the types have clear “con-
structive” interpretations. In fact, one can develop constructive
versions of higher-type semantics (based on intuitionistic, rather
than classical logic) that clarify these constructive interpretations
quite nicely, and are, in many ways, more interesting than the
classical counterparts.

9.5 Intuitionistic Logic


In contrast to second-order and higher-order logic, intuitionistic
first-order logic represents a restriction of the classical version,
intended to model a more “constructive” kind of reasoning. The
following examples may serve to illustrate some of the underlying
motivations.

Suppose someone came up to you one day and announced


that they had determined a natural number x, with the property
that if x is prime, the Riemann hypothesis is true, and if x is com-
posite, the Riemann hypothesis is false. Great news! Whether the
Riemann hypothesis is true or not is one of the big open ques-
tions of mathematics, and here they seem to have reduced the
problem to one of calculation, that is, to the determination of
whether a specific number is prime or not.
What is the magic value of x? They describe it as follows: x is
the natural number that is equal to 7 if the Riemann hypothesis
is true, and 9 otherwise.
Angrily, you demand your money back. From a classical point
of view, the description above does in fact determine a unique
value of x; but what you really want is a value of x that is given
explicitly.
To take another, perhaps less contrived example, consider
the following question. We know that it is possible to raise an
irrational number to a rational power, and get a rational result.
For example, √2² = 2. What is less clear is whether or not it is
possible to raise an irrational number to an irrational power, and
get a rational result. The following theorem answers this in the
affirmative:

Theorem 9.1. There are irrational numbers a and b such that a^b is


rational.
Proof. Consider √2^√2. If this is rational, we are done: we can let
a = b = √2. Otherwise, it is irrational. Then we have

(√2^√2)^√2 = √2^(√2·√2) = √2² = 2,

which is certainly rational. So, in this case, let a be √2^√2, and let
b be √2.
Does this constitute a valid proof? Most mathematicians feel
that it does. But again, there is something a little bit unsatisfying

here: we have proved the existence of a pair of real numbers


with a certain property, without being able to say which pair of
numbers it is. It is possible to prove the same result, but in such
a way that the pair a, b is given in the proof: take a = √3 and
b = log₃ 4. Then

a^b = √3^(log₃ 4) = 3^((1/2)·log₃ 4) = (3^(log₃ 4))^(1/2) = 4^(1/2) = 2,

since 3^(log₃ x) = x.
Intuitionistic logic is designed to model a kind of reasoning
where moves like the one in the first proof are disallowed. Proving
the existence of an x satisfying A(x) means that you have to give a
specific x, and a proof that it satisfies A, like in the second proof.
Proving that A or B holds requires that you can prove one or the
other.
Formally speaking, intuitionistic first-order logic is what you
get if you restrict a proof system for first-order logic in a
certain way. Similarly, there are intuitionistic versions of second-
order or higher-order logic. From the mathematical point of view,
these are just formal deductive systems, but, as already noted,
they are intended to model a kind of mathematical reasoning.
One can take this to be the kind of reasoning that is justified on
a certain philosophical view of mathematics (such as Brouwer’s
intuitionism); one can take it to be a kind of mathematical rea-
soning which is more “concrete” and satisfying (along the lines
of Bishop’s constructivism); and one can argue about whether or
not the formal description captures the informal motivation. But
whatever philosophical positions we may hold, we can study in-
tuitionistic logic as a formally presented logic; and for whatever
reasons, many mathematical logicians find it interesting to do so.
There is an informal constructive interpretation of the intu-
itionist connectives, usually known as the Brouwer-Heyting-Kolmogorov
interpretation. It runs as follows: a proof of A ∧ B consists of a
proof of A paired with a proof of B; a proof of A ∨ B consists
of either a proof of A, or a proof of B, where we have explicit
information as to which is the case; a proof of A → B consists

of a procedure, which transforms a proof of A to a proof of B;


a proof of ∀x A(x) consists of a procedure which returns a proof
of A(x) for any value of x; and a proof of ∃x A(x) consists of a
value of x, together with a proof that this value satisfies A. One
can describe the interpretation in computational terms known
as the “Curry-Howard isomorphism” or the “formulas-as-types
paradigm”: think of a formula as specifying a certain kind of data
type, and proofs as computational objects of these data types that
enable us to see that the corresponding formula is true.
Intuitionistic logic is often thought of as being classical logic
“minus” the law of the excluded middle. This following theorem
makes this more precise.

Theorem 9.2. Intuitionistically, the following axiom schemata are


equivalent:

1. (¬A → ⊥) → A.

2. A ∨ ¬A

3. ¬¬A → A

Obtaining instances of one schema from either of the others is a


good exercise in intuitionistic logic.
The first deductive systems for intuitionistic propositional logic,
put forth as formalizations of Brouwer’s intuitionism, are due, in-
dependently, to Kolmogorov, Glivenko, and Heyting. The first
formalization of intuitionistic first-order logic (and parts of intu-
itionist mathematics) is due to Heyting. Though a number of
classically valid schemata are not intuitionistically valid, many
are.
The double-negation translation describes an important rela-
tionship between classical and intuitionist logic. It is defined in-
ductively as follows (think of A^N as the “intuitionist” translation of

the classical formula A):

A^N ≡ ¬¬A for atomic formulas A
(A ∧ B)^N ≡ (A^N ∧ B^N)
(A ∨ B)^N ≡ ¬¬(A^N ∨ B^N)
(A → B)^N ≡ (A^N → B^N)
(∀x A)^N ≡ ∀x A^N
(∃x A)^N ≡ ¬¬∃x A^N
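
For example, unwinding these clauses (a direct calculation, assuming P and Q are atomic): (∃x P(x))^N ≡ ¬¬∃x ¬¬P(x), and (P(x) ∨ Q(x))^N ≡ ¬¬(¬¬P(x) ∨ ¬¬Q(x)).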

Kolmogorov and Glivenko had versions of this translation for


propositional logic; for predicate logic, it is due to Gödel and
Gentzen, independently. We have

Theorem 9.3. 1. A ↔ A^N is provable classically.

2. If A is provable classically, then A^N is provable intuitionistically.

We can now envision the following dialogue. Classical math-


ematician: “I’ve proved A!” Intuitionist mathematician: “Your
proof isn’t valid. What you’ve really proved is A^N.” Classical
mathematician: “Fine by me!” As far as the classical mathemati-
cian is concerned, the intuitionist is just splitting hairs, since the
two are equivalent. But the intuitionist insists there is a differ-
ence.
Note that the above translation concerns pure logic only; it
does not address the question as to what the appropriate nonlog-
ical axioms are for classical and intuitionistic mathematics, or
what the relationship is between them. But the following slight
extension of the theorem above provides some useful informa-
tion:

Theorem 9.4. If Γ proves A classically, Γ^N proves A^N intuitionisti-


cally.

In other words, if A is provable from some hypotheses classi-


cally, then A^N is provable from their double-negation translations.
To show that a sentence or propositional formula is intuition-
istically valid, all you have to do is provide a proof. But how can
you show that it is not valid? For that purpose, we need a seman-
tics that is sound, and preferably complete. A semantics due to
Kripke nicely fits the bill.
We can play the same game we did for classical logic: de-
fine the semantics, and prove soundness and completeness. It
is worthwhile, however, to note the following distinction. In the
case of classical logic, the semantics was the “obvious” one, in
a sense implicit in the meaning of the connectives. Though one
can provide some intuitive motivation for Kripke semantics, the
latter does not offer the same feeling of inevitability. In addi-
tion, the notion of a classical structure is a natural mathematical
one, so we can either take the notion of a structure to be a tool
for studying classical first-order logic, or take classical first-order
logic to be a tool for studying mathematical structures. In con-
trast, Kripke structures can only be viewed as a logical construct;
they don’t seem to have independent mathematical interest.
A Kripke structure for a propositional language consists of a
partial order Mod(P) with a least element, and a “monotone”
assignment of propositional variables to the elements of Mod(P).
The intuition is that the elements of Mod(P ) represent “worlds,”
or “states of knowledge”; an element p ≥ q represents a “possible
future state” of q ; and the propositional variables assigned to p
are the propositions that are known to be true in state p. The
forcing relation P, p ⊩ A then extends this relationship to arbi-
trary formulas in the language; read P, p ⊩ A as “A is true in
state p.” The relationship is defined inductively, as follows:

1. P, p ⊩ pi iff pi is one of the propositional variables assigned
to p.

2. P, p ⊮ ⊥.

3. P, p ⊩ (A ∧ B) iff P, p ⊩ A and P, p ⊩ B.

4. P, p ⊩ (A ∨ B) iff P, p ⊩ A or P, p ⊩ B.

5. P, p ⊩ (A → B) iff, whenever q ≥ p and P, q ⊩ A, then
P, q ⊩ B.

It is a good exercise to try to show that ¬(p ∧ q ) → (¬p ∨ ¬q ) is


not intuitionistically valid, by cooking up a Kripke structure that
provides a counterexample.
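
To experiment with such structures, here is a minimal Python sketch of the forcing clauses for finite Kripke structures (our illustration, not part of the text; all names are ours). It shows, for a simple two-world structure, that p ∨ ¬p already fails at the root; the formula in the exercise can be tested the same way.

def up(worlds, order, p):
    """All worlds q with q >= p (order is a set of pairs (p, q))."""
    return [q for q in worlds if (p, q) in order]

def forces(worlds, order, val, p, A):
    """P, p forces A; formulas are nested tuples:
    ('var', 'p'), ('bot',), ('and', A, B), ('or', A, B), ('imp', A, B)."""
    op = A[0]
    if op == 'var':
        return A[1] in val[p]
    if op == 'bot':
        return False
    if op == 'and':
        return forces(worlds, order, val, p, A[1]) and \
               forces(worlds, order, val, p, A[2])
    if op == 'or':
        return forces(worlds, order, val, p, A[1]) or \
               forces(worlds, order, val, p, A[2])
    if op == 'imp':   # must hold at every later (>=) world
        return all(not forces(worlds, order, val, q, A[1])
                   or forces(worlds, order, val, q, A[2])
                   for q in up(worlds, order, p))
    raise ValueError(op)

def neg(A):           # intuitionistically, ¬A abbreviates A → ⊥
    return ('imp', A, ('bot',))

# Two worlds r <= w; p becomes known only at w (the assignment is monotone).
worlds = ['r', 'w']
order = {('r', 'r'), ('w', 'w'), ('r', 'w')}
val = {'r': set(), 'w': {'p'}}

p = ('var', 'p')
print(forces(worlds, order, val, 'r', ('or', p, neg(p))))   # False at r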

9.6 Modal Logics


Consider the following example of a conditional sentence:

If Jeremy is alone in that room, then he is drunk and


naked and dancing on the chairs.

This is an example of a conditional assertion that may be mate-


rially true but nonetheless misleading, since it seems to suggest
that there is a stronger link between the antecedent and conclu-
sion than simply that either the antecedent is false or the
consequent true. That is, the wording suggests that the claim is
not only true in this particular world (where it may be trivially
true, because Jeremy is not alone in the room), but that, more-
over, the conclusion would have been true had the antecedent
been true. In other words, one can take the assertion to mean
that the claim is true not just in this world, but in any “possible”
world; or that it is necessarily true, as opposed to just true in this
particular world.
Modal logic was designed to make sense of this kind of ne-
cessity. One obtains modal propositional logic from ordinary
propositional logic by adding a box operator; which is to say, if
A is a formula, so is □A. Intuitively, □A asserts that A is neces-
sarily true, or true in any possible world. ♦A is usually taken to

be an abbreviation for ¬□¬A, and can be read as asserting that


A is possibly true. Of course, modality can be added to predicate
logic as well.
Kripke structures can be used to provide a semantics for
modal logic; in fact, Kripke first designed this semantics with
modal logic in mind. Rather than restricting to partial orders,
more generally one has a set of “possible worlds,” P , and a bi-
nary “accessibility” relation R(x, y) between worlds. Intuitively,
R(p, q ) asserts that the world q is compatible with p; i.e., if we
are “in” world p, we have to entertain the possibility that the
world could have been like q .
Modal logic is sometimes called an “intensional” logic, as op-
posed to an “extensional” one. The intended semantics for an
extensional logic, like classical logic, will only refer to a single
world, the “actual” one; while the semantics for an “intensional”
logic relies on a more elaborate ontology. In addition to structur-
ing necessity, one can use modality to structure other linguistic
constructions, reinterpreting □ and ♦ according to the applica-
tion. For example:

1. In provability logic, □A is read “A is provable” and ♦A is


read “A is consistent.”

2. In epistemic logic, one might read □A as “I know A” or “I


believe A.”

3. In temporal logic, one can read □A as “A is always true”


and ♦A as “A is sometimes true.”

One would like to augment logic with rules and axioms deal-
ing with modality. For example, the system S4 consists of the
ordinary axioms and rules of propositional logic, together with
the following axioms:

□(A → B) → (□A → □B)
□A → A
□A → □□A

as well as a rule, “from A conclude □A.” S5 adds the following


axiom:

♦A → □♦A

Variations of these axioms may be suitable for different applica-


tions; for example, S5 is usually taken to characterize the notion
of logical necessity. And the nice thing is that one can usually
find a semantics for which the proof system is sound and complete
by restricting the accessibility relation in the Kripke structures in
natural ways. For example, S4 corresponds to the class of Kripke
structures in which the accessibility relation is reflexive and tran-
sitive. S5 corresponds to the class of Kripke structures in which
the accessibility relation is universal, which is to say that every
world is accessible from every other; so □A holds if and only if
A holds in every world.
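
As a small illustration (our own sketch, not from the text): the possible-worlds semantics just described can be evaluated directly for finite structures. The following Python sketch checks formulas at a world, given an accessibility relation R; on a universal relation it confirms, for one concrete valuation, that the S5 axiom ♦A → □♦A comes out true.

def holds(worlds, R, val, w, A):
    """Truth of A at world w; formulas are nested tuples:
    ('var', 'p'), ('not', A), ('imp', A, B), ('box', A), ('dia', A)."""
    op = A[0]
    if op == 'var':
        return A[1] in val[w]
    if op == 'not':
        return not holds(worlds, R, val, w, A[1])
    if op == 'imp':
        return (not holds(worlds, R, val, w, A[1])) or \
               holds(worlds, R, val, w, A[2])
    if op == 'box':   # □A: A holds at every world accessible from w
        return all(holds(worlds, R, val, v, A[1])
                   for v in worlds if (w, v) in R)
    if op == 'dia':   # ♦A, i.e., ¬□¬A: A holds at some accessible world
        return any(holds(worlds, R, val, v, A[1])
                   for v in worlds if (w, v) in R)
    raise ValueError(op)

# A universal accessibility relation on three worlds (the S5 setting).
worlds = [0, 1, 2]
R = {(w, v) for w in worlds for v in worlds}
val = {0: {'p'}, 1: {'p'}, 2: set()}

p = ('var', 'p')
print(holds(worlds, R, val, 0, ('box', p)))    # False: p fails at world 2
print(holds(worlds, R, val, 0,
            ('imp', ('dia', p), ('box', ('dia', p)))))   # True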

9.7 Other Logics


As you may have gathered by now, it is not hard to design a new
logic. You too can create your own syntax, make up a deductive
system, and fashion a semantics to go with it. You might have
to be a bit clever if you want the proof system to be complete
for the semantics, and it might take some effort to convince the
world at large that your logic is truly interesting. But, in return,
you can enjoy hours of good, clean fun, exploring your logic’s
mathematical and computational properties.
Recent decades have witnessed a veritable explosion of for-
mal logics. Fuzzy logic is designed to model reasoning about
vague properties. Probabilistic logic is designed to model reason-
ing about uncertainty. Default logics and nonmonotonic logics
are designed to model defeasible forms of reasoning, which is to
say, “reasonable” inferences that can later be overturned in the
face of new information. There are epistemic logics, designed
to model reasoning about knowledge; causal logics, designed to
model reasoning about causal relationships; and even “deontic”

logics, which are designed to model reasoning about moral and


ethical obligations. Depending on whether the primary motiva-
tion for introducing these systems is philosophical, mathematical,
or computational, you may find such creatures studied under the
rubric of mathematical logic, philosophical logic, artificial intel-
ligence, cognitive science, or elsewhere.
The list goes on and on, and the possibilities seem endless.
We may never attain Leibniz’ dream of reducing all of human
reason to calculation—but that can’t stop us from trying.
PART III

Turing Machines

CHAPTER 10

Turing Machine Computations
10.1 Introduction

What does it mean for a function, say, from N to N to be com-


putable? Among the first answers, and the best-known one,
is that a function is computable if it can be computed by a Tur-
ing machine. This notion was set out by Alan Turing in 1936.
Turing machines are an example of a model of computation—they
are a mathematically precise way of defining the idea of a “com-
putational procedure.” What exactly that means is debated, but
it is widely agreed that Turing machines are one way of speci-
fying computational procedures. Even though the term “Turing
machine” evokes the image of a physical machine with moving
parts, strictly speaking a Turing machine is a purely mathemat-
ical construct, and as such it idealizes the idea of a computa-
tional procedure. For instance, we place no restriction on either
the time or memory requirements of a Turing machine: Turing


machines can compute something even if the computation would


require more storage space or more steps than there are atoms in
the universe.
It is perhaps best to think of a Turing machine as a program
for a special kind of imaginary mechanism. This mechanism con-
sists of a tape and a read-write head. In our version of Turing ma-
chines, the tape is infinite in one direction (to the right), and it
is divided into squares, each of which may contain a symbol from
a finite alphabet. Such alphabets can contain any number of dif-
ferent symbols, but we will mainly make do with three: ⊳, ⊔, and
I. When the mechanism is started, the tape is empty (i.e., each
square contains the symbol ⊔) except for the leftmost square,
which contains ⊳, and a finite number of squares which contain
the input. At any time, the mechanism is in one of a finite number
of states. At the outset, the head scans the leftmost square and in
a specified initial state. At each step of the mechanism’s run, the
content of the square currently scanned together with the state
the mechanism is in and the Turing machine program determine
what happens next. The Turing machine program is given by a
partial function which takes as input a state q and a symbol σ
and outputs a triple ⟨q′, σ′, D⟩. Whenever the mechanism is in
state q and reads symbol σ, it replaces the symbol on the current
square with σ′, the head moves left, right, or stays put according
to whether D is L, R, or N, and the mechanism goes into state q′.
For instance, consider the situation below:

⊳ I I I ⊔ I I I I ⊔ ⊔ ⊔
    q1

The tape of the Turing machine contains the end-of-tape sym-
bol ⊳ on the leftmost square, followed by three I’s, a ⊔, four more
I’s, and the rest of the tape is filled with ⊔’s. The head is read-
ing the third square from the left, which contains a I, and is
in state q1—we say “the machine is reading a I in state q1.” If
the program of the Turing machine returns, for input ⟨q1, I⟩, the
triple ⟨q5, ⊔, R⟩, we would now replace the I on the third square

with a ⊔, move right to the fourth square, and change the state
of the machine to q 5 .
We say that the machine halts when it encounters some state,
qn, and symbol, σ, such that there is no instruction for ⟨qn, σ⟩,
i.e., the transition function for input ⟨qn, σ⟩ is undefined. In other
words, the machine has no instruction to carry out, and at that
point, it ceases operation. Halting is sometimes represented by
a specific halt state h. This will be demonstrated in more detail
later on.
The beauty of Turing’s paper, “On computable numbers,”
is that he presents not only a formal definition, but also an ar-
gument that the definition captures the intuitive notion of com-
putability. From the definition, it should be clear that any func-
tion computable by a Turing machine is computable in the intu-
itive sense. Turing offers three types of argument that the con-
verse is true, i.e., that any function that we would naturally regard
as computable is computable by such a machine. They are (in
Turing’s words):

1. A direct appeal to intuition.

2. A proof of the equivalence of two definitions (in case the


new definition has a greater intuitive appeal).

3. Giving examples of large classes of numbers which are com-


putable.

Our goal is to try to define the notion of computability “in prin-


ciple,” i.e., without taking into account practical limitations of
time and space. Of course, with the broadest definition of com-
putability in place, one can then go on to consider computation
with bounded resources; this forms the heart of the subject known
as “computational complexity.”

Historical Remarks Alan Turing invented Turing machines in


1936. While his interest at the time was the decidability of first-
order logic, the paper has been described as a definitive paper

on the foundations of computer design. In the paper, Turing


focuses on computable real numbers, i.e., real numbers whose
decimal expansions are computable; but he notes that it is not
hard to adapt his notions to computable functions on the nat-
ural numbers, and so on. Notice that this was a full five years
before the first working general purpose computer was built in
1941 (by the German Konrad Zuse in his parents’ living room),
seven years before Turing and his colleagues at Bletchley Park
built the code-breaking Colossus (1943), nine years before the
American ENIAC (1945), twelve years before the first British gen-
eral purpose computer—the Manchester Small-Scale Experimen-
tal Machine—was built in Manchester (1948), and thirteen years
before the Americans first tested the BINAC (1949). The Manch-
ester SSEM has the distinction of being the first stored-program
computer—previous machines had to be rewired by hand for each
new task.

10.2 Representing Turing Machines


Turing machines can be represented visually by state diagrams.
The diagrams are composed of state cells connected by arrows.
Unsurprisingly, each state cell represents a state of the machine.
Each arrow represents an instruction that can be carried out from
that state, with the specifics of the instruction written above or
below the appropriate arrow. Consider the following machine,
which has only two internal states, q 0 and q 1 , and one instruction:

start → q0 ——⊔, I, R——→ q1

Recall that the Turing machine has a read/write head and a tape
with the input written on it. The instruction can be read as: if
reading a blank in state q0, write a stroke, move right, and move to
state q1. This is equivalent to the transition function mapping
⟨q0, ⊔⟩ to ⟨q1, I, R⟩.

Example 10.1. Even Machine: The following Turing machine


halts if, and only if, there are an even number of strokes on the
tape.
start → q0 —I, I, R→ q1
        q1 —I, I, R→ q0
        q1 —⊔, ⊔, R→ q1

The state diagram corresponds to the following transition


function:
δ(q0, I) = ⟨q1, I, R⟩,
δ(q1, I) = ⟨q0, I, R⟩,
δ(q1, ⊔) = ⟨q1, ⊔, R⟩
The above machine halts only when the input is an even num-
ber of strokes. Otherwise, the machine (theoretically) continues
to operate indefinitely. For any machine and input, it is possi-
ble to trace through the configurations of the machine in order to
determine the output. We will give a formal definition of config-
urations later. For now, we can intuitively think of configurations
as a series of diagrams showing the state of the machine at any
point in time during operation. Configurations show the con-
tent of the tape, the state of the machine and the location of the
read/write head.
Let us trace through the configurations of the even machine
if it is started with an input of 4 I s. In this case, we expect that
the machine will halt. We will then run the machine on an input
of 3 I s, where the machine will run forever.
The machine starts in state q 0 , scanning the leftmost I . We
can represent the initial state of the machine as follows:
. I₀ I I I t . . .
The above configuration is straightforward. As can be seen, the machine starts in state q0, scanning the leftmost I. This is represented by a subscript of the state name on the first I. The applicable instruction at this point is δ(q0, I) = ⟨q1, I, R⟩, and so the machine moves right on the tape and changes to state q1.

. I I₁ I I t . . .

Since the machine is now in state q1 scanning a stroke, we have to "follow" the instruction δ(q1, I) = ⟨q0, I, R⟩. This results in the configuration

. I I I₀ I t . . .
As the machine continues, the rules are applied again in the same
order, resulting in the following two configurations:

. I I I I₁ t . . .

. I I I I t₀ . . .
The machine is now in state q 0 scanning a blank. Based on the
transition diagram, we can easily see that there is no instruction
to be carried out, and thus the machine has halted. This means
that the input has been accepted.
Suppose next we start the machine with an input of three
strokes. The first few configurations are similar, as the same in-
structions are carried out, with only a small difference of the tape
input:
. I₀ I I t . . .
. I I₁ I t . . .
. I I I₀ t . . .
. I I I t₁ . . .
The machine has now traversed past all the strokes, and is read-
ing a blank in state q 1 . As shown in the diagram, there is an
instruction of the form δ(q 1, t) = hq 1, t, Ri. Since the tape is in-
finitely blank to the right, the machine will continue to execute
this instruction forever, staying in state q 1 and moving ever further

to the right. The machine will never halt, and does not accept
the input.
It is important to note that not all machines will halt. If halt-
ing means that the machine runs out of instructions to execute,
then we can create a machine that never halts simply by ensuring
that there is an outgoing arrow for each symbol at each state.
The even machine can be modified to run infinitely by adding an
instruction for scanning a blank at q 0 .
Example 10.2. [State diagram: start → q0; arrows between q0 and q1 labelled "I, I, R" in both directions; and loops labelled "t, t, R" on both q0 and q1.]

Machine tables are another way of representing Turing ma-


chines. Machine tables have the tape alphabet displayed on the
x-axis, and the set of machine states across the y-axis. Inside the
table, at the intersection of each state and symbol, is written the
rest of the instruction—the new state, new symbol, and direc-
tion of movement. Machine tables make it easy to determine in
what state, and for what symbol, the machine halts. Wherever there is a gap in the table, there is a possible point for the machine to halt. Unlike state diagrams and instruction sets, where the points
at which the machine halts are not always immediately obvious,
any halting points are quickly identified by finding the gaps in
the machine table.
Example 10.3. The machine table for the even machine is:

        t           I
q0                  I, q1, R
q1      t, q1, R    I, q0, R

As we can see, the machine halts when scanning a blank in state q0.

So far we have only considered machines that read and accept


input. However, Turing machines have the capacity to both read
and write. An example of such a machine (although there are
many, many examples) is a doubler. A doubler, when started with
a block of n strokes on the tape, outputs a block of 2n strokes.

Example 10.4. Before building a doubler machine, it is important to come up with a strategy for solving the problem. Since the
machine (as we have formulated it) cannot remember how many
strokes it has read, we need to come up with a way to keep track
of all the strokes on the tape. One such way is to separate the
output from the input with a blank. The machine can then erase
the first stroke from the input, traverse over the rest of the input,
leave a blank, and write two new strokes. The machine will then
go back and find the second stroke in the input, and double that
one as well. For each one stroke of input, it will write two strokes
of output. By erasing the input as the machine goes, we can guar-
antee that no stroke is missed or doubled twice. When the entire
input is erased, there will be 2n strokes left on the tape.

[State diagram of the doubler machine. It corresponds to the following transition function:]

δ(q0, I) = ⟨q1, t, R⟩,
δ(q1, I) = ⟨q1, I, R⟩,   δ(q1, t) = ⟨q2, t, R⟩,
δ(q2, I) = ⟨q2, I, R⟩,   δ(q2, t) = ⟨q3, I, R⟩,
δ(q3, t) = ⟨q4, I, L⟩,
δ(q4, I) = ⟨q4, I, L⟩,   δ(q4, t) = ⟨q5, t, L⟩,
δ(q5, I) = ⟨q5, I, L⟩,   δ(q5, t) = ⟨q0, t, R⟩

10.3 Turing Machines


The formal definition of what constitutes a Turing machine looks
abstract, but is actually simple: it merely packs into one mathe-
matical structure all the information needed to specify the work-
ings of a Turing machine. This includes (1) which states the
machine can be in, (2) which symbols are allowed to be on the
tape, (3) which state the machine should start in, and (4) what
the instruction set of the machine is.

Definition 10.5 (Turing machine). A Turing machine T = ⟨Q, Σ, q0, δ⟩ consists of

1. a finite set of states Q,

2. a finite alphabet Σ which includes . and t,

3. an initial state q0 ∈ Q,

4. a finite instruction set δ : Q × Σ ⇸ Q × Σ × {L, R, N} (a partial function).

The partial function δ is also called the transition function of T.

We assume that the tape is infinite in one direction only. For this reason it is useful to designate a special symbol . as a marker
for the left end of the tape. This makes it easier for Turing ma-
chine programs to tell when they’re “in danger” of running off
the tape.

Example 10.6. Even Machine: The even machine is formally the quadruple ⟨Q, Σ, q0, δ⟩ where

Q = {q0, q1},
Σ = {., t, I},
δ(q0, I) = ⟨q1, I, R⟩,
δ(q1, I) = ⟨q0, I, R⟩,
δ(q1, t) = ⟨q1, t, R⟩.

10.4 Configurations and Computations


Recall tracing through the configurations of the even machine
earlier. The imaginary mechanism consisting of tape, read/write
head, and Turing machine program is really just an intuitive way
of visualizing what a Turing machine computation is. Formally,
we can define the computation of a Turing machine on a given
input as a sequence of configurations—and a configuration in turn
is a sequence of symbols (corresponding to the contents of the
tape at a given point in the computation), a number indicating
the position of the read/write head, and a state. Using these,
we can define what the Turing machine M computes on a given
input.

Definition 10.7 (Configuration). A configuration of Turing machine M = ⟨Q, Σ, q0, δ⟩ is a triple ⟨C, n, q⟩ where

1. C ∈ Σ* is a finite sequence of symbols from Σ,

2. n ∈ N is a number < len(C), and

3. q ∈ Q

Intuitively, the sequence C is the content of the tape (symbols of


all squares from the leftmost square to the last non-blank or previ-
ously visited square), n is the number of the square the read/write
head is scanning (beginning with 0 being the number of the left-
most square), and q is the current state of the machine.

The potential input for a Turing machine is a sequence of


symbols, usually a sequence that encodes a number in some form.
The initial configuration of the Turing machine is that configura-
tion in which we start the Turing machine to work on that input:
the tape contains the tape end marker immediately followed by
the input written on the squares to the right, the read/write head
is scanning the leftmost square of the input (i.e., the square to

the right of the left end marker), and the mechanism is in the
designated start state q 0 .

Definition 10.8 (Initial configuration). The initial configuration of M for input I ∈ Σ* is

⟨. _ I, 1, q0⟩

The _ symbol is for concatenation—we want to ensure that


there are no blanks between the left end marker and the begin-
ning of the input.

Definition 10.9. We say that a configuration ⟨C, n, q⟩ yields ⟨C′, n′, q′⟩ in one step (according to M), iff

1. the n-th symbol of C is σ,

2. the instruction set of M specifies δ(q, σ) = ⟨q′, σ′, D⟩,

3. the n-th symbol of C′ is σ′, and

4. a) D = L and n′ = n − 1, or
   b) D = R and n′ = n + 1, or
   c) D = N and n′ = n,

5. if n′ > len(C), then len(C′) = len(C) + 1 and the n′-th symbol of C′ is t,

6. for all i such that i < len(C′) and i ≠ n, C′(i) = C(i).

Definition 10.10. A run of M on input I is a sequence Ci of configurations of M, where C0 is the initial configuration of M for input I, and each Ci yields Ci+1 in one step.

We say that M halts on input I after k steps if Ck = ⟨C, n, q⟩, the n-th symbol of C is σ, and δ(q, σ) is undefined. In that case, the output of M for input I is O, where O is a string of symbols not beginning or ending in t such that C = . _ t^i _ O _ t^j for some i, j ∈ N.

According to this definition, the output O of M always begins and ends in a symbol other than t, or, if at time k the entire tape is filled with t (except for the leftmost .), O is the empty string.
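Definitions 10.7 to 10.10 translate almost directly into a program. The following is a minimal simulator sketch in Python, assuming the dictionary representation of δ used above; the function and variable names are ours, the step bound only serves to cut off non-halting runs, and non-empty input is assumed:

# A configuration is a triple (C, n, q): tape contents C (a list of
# symbols), head position n, and current state q (Definition 10.7).
BLANK, MARKER = "_", ">"   # stand for the blank t and the end-of-tape marker .

def step(delta, config):
    # Return the configuration yielded in one step (Definition 10.9),
    # or None if delta(q, sigma) is undefined, i.e., the machine halts.
    C, n, q = config
    sigma = C[n]
    if (q, sigma) not in delta:
        return None
    q2, sigma2, D = delta[(q, sigma)]
    C = C.copy()
    C[n] = sigma2                                # write the new symbol
    n2 = {"L": n - 1, "R": n + 1, "N": n}[D]     # move the head
    if n2 >= len(C):                             # extend the tape by a blank
        C.append(BLANK)
    return (C, n2, q2)

def run(delta, inp, max_steps=10000):
    # Run from the initial configuration ⟨. _ I, 1, q0⟩ (Definition 10.8)
    # and return the output string of Definition 10.10.
    config = ([MARKER] + list(inp), 1, "q0")
    for _ in range(max_steps):
        nxt = step(delta, config)
        if nxt is None:                          # halted: read off the output
            C, _, _ = config
            return "".join(C[1:]).strip(BLANK)
        config = nxt
    raise RuntimeError("did not halt within max_steps")

With delta_even from above, run(delta_even, "IIII") halts with output "IIII", while run(delta_even, "III") runs until the step bound is exceeded, matching the traces given earlier.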

10.5 Unary Representation of Numbers


Turing machines work on sequences of symbols written on their
tape. Depending on the alphabet a Turing machine uses, these
sequences of symbols can represent various inputs and outputs.
Of particular interest, of course, are Turing machines which com-
pute arithmetical functions, i.e., functions of natural numbers. A
simple way to represent positive integers is by coding them as
sequences of a single symbol I. If n ∈ N, let I^n be the empty sequence if n = 0, and otherwise the sequence consisting of exactly n I's.

Definition 10.11 (Computation). A Turing machine M computes the function f : N^n → N iff M halts on input

I^k1 t I^k2 t . . . t I^kn

with output I^f(k1,...,kn).
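On this convention, two small helper functions for moving between numbers and their unary representations suggest themselves. This is a sketch with our own names, meant to go with the simulator above:

def to_unary(*ks):
    # I^k1 t I^k2 t ... t I^kn, with "_" for the separating blank t
    return "_".join("I" * k for k in ks)

def from_unary(s):
    # the number n represented by an output I^n
    return s.count("I")

For instance, to_unary(3, 5) is "III_IIIII", and from_unary("IIIIIIII") is 8.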

Example 10.12. Addition: Build a machine that, when given an


input of two non-empty strings of I ’s of length n and m, computes
the function f (n, m) = n + m.
We want to come up with a machine that starts with two
blocks of strokes on the tape and halts with one block of strokes.
We first need a method to carry out. The input strokes are sepa-
rated by a blank, so one method would be to write a stroke on the
square containing the blank, and erase the first (or last) stroke.
This would result in a block of n + m I ’s. Alternatively, we could

proceed in a similar way to the doubler machine, by erasing a


stroke from the first block, and adding one to the second block
of strokes until the first block has been removed completely. We
will proceed with the former method.

[State diagram: start → q0; loops labelled "I, I, R" on q0 and q1; an arrow from q0 to q1 labelled "t, I, R"; an arrow from q1 to q2 labelled "t, t, L"; and a loop on q2 labelled "I, t, N".]
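Assuming the simulator and unary helpers sketched above, the addition machine can be tried out directly; delta_add is our own name for its transition function:

delta_add = {
    ("q0", "I"): ("q0", "I", "R"),   # move right across the first block
    ("q0", "_"): ("q1", "I", "R"),   # fill the separating blank with a stroke
    ("q1", "I"): ("q1", "I", "R"),   # move right across the second block
    ("q1", "_"): ("q2", "_", "L"),   # at the end, step back left
    ("q2", "I"): ("q2", "_", "N"),   # erase the last stroke; then halt
}

print(from_unary(run(delta_add, to_unary(3, 5))))   # prints 8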

10.6 Halting States


Although we have defined our machines to halt only when there
is no instruction to carry out, common representations of Turing
machines have a dedicated halting state, h, such that h ∈ Q .
The idea behind a halting state is simple: when the machine
has finished operation (it is ready to accept input, or has finished
writing the output), it goes into a state h where it halts. Some
machines have two halting states, one that accepts input and one
that rejects input.
Example 10.13. Halting States. To elucidate this concept, let us
begin with an alteration of the even machine. Instead of having
the machine halt in state q 0 if the input is even, we can add an
instruction to send the machine into a halt state.
[State diagram: as in the even machine, start → q0, arrows between q0 and q1 labelled "I, I, R" in both directions, and a loop on q1 labelled "t, t, R"; in addition, an arrow from q0 to the halting state h labelled "t, t, N".]

Let us further expand the example. When the machine determines that the input is odd, it never halts. We can alter the
machine to include a reject state by replacing the looping instruc-
tion with an instruction to go to a reject state r .

[State diagram: start → q0; arrows between q0 and q1 labelled "I, I, R" in both directions; an arrow from q0 to h labelled "t, t, N"; and an arrow from q1 to r labelled "t, t, N".]

Adding a dedicated halting state can be advantageous in cases


like this, where it makes explicit when the machine accepts/rejects
certain inputs. However, it is important to note that no comput-
ing power is gained by adding a dedicated halting state. Similarly,
a less formal notion of halting has its own advantages. The def-
inition of halting used so far in this chapter makes the proof of
the unsolvability of the Halting Problem intuitive and easy to demonstrate. For this
reason, we continue with our original definition.

10.7 Combining Turing Machines


The examples of Turing machines we have seen so far have been
fairly simple in nature. But in fact, any problem that can be solved
with any modern programming language can also be solved with
Turing machines. To build more complex Turing machines, it
is important to convince ourselves that we can combine them,
so we can build machines to solve more complex problems by
breaking the procedure into simpler parts. If we can find a natu-
ral way to break a complex problem down into constituent parts,
we can tackle the problem in several stages, creating several sim-
ple Turing machines and combining them into one machine that

can solve the problem. This point is especially important when


tackling the Halting Problem in the next section.
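In the dictionary representation used earlier, combining machines amounts to renaming the states of the second machine and adding bridging instructions by hand, as the example below illustrates. A sketch of the renaming step (rename and delta_doubler are our own names; delta_doubler is assumed to be the doubler's transition function in the same format):

def rename(delta, offset):
    # Shift every state "qi" in delta to "q(i + offset)".
    r = lambda q: "q" + str(int(q[1:]) + offset)
    return {(r(q), s): (r(q2), s2, D)
            for (q, s), (q2, s2, D) in delta.items()}

# e.g., combined = {**delta_add, **rename(delta_doubler, 4), ...}
# where the "..." are bridging instructions written by hand,
# like those into q3 and q4 in Example 10.14 below.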
Example 10.14. Combining Machines: Design a machine that
computes the function f (m, n) = 2(m + n).
In order to build this machine, we can combine two machines
we are already familiar with: the addition machine, and the dou-
bler. We begin by drawing a state diagram for the addition ma-
chine.
[State diagram of the addition machine, as above: loops labelled "I, I, R" on q0 and q1; an arrow from q0 to q1 labelled "t, I, R"; an arrow from q1 to q2 labelled "t, t, L"; and a loop on q2 labelled "I, t, N".]

Instead of halting at state q2, we want to continue operation in order to double the output. Recall that the doubler machine erases
the first stroke in the input and writes two strokes in a separate
output. Let’s add an instruction to make sure the tape head is
reading the first stroke of the output of the addition machine.
[State diagram: the addition machine's q0, q1, q2 as before, but now with an arrow from q2 to q3 labelled "I, t, L", a loop on q3 labelled "I, I, L", and an arrow from q3 to q4 labelled "., ., R".]

It is now easy to double the input—all we have to do is connect the doubler machine onto state q4. This requires renaming the states of the doubler machine so that they start at q4 instead of q0—this way we don't end up with two starting states. The final diagram should look like:

[Final state diagram: the addition machine (states q0–q2) followed by the doubler machine renamed to states q4–q9, connected via q3. The transitions are: loops "I, I, R" on q0 and q1; q0 → q1: "t, I, R"; q1 → q2: "t, t, L"; q2 → q3: "I, t, L"; loop "I, I, L" on q3; q3 → q4: "., ., R"; q4 → q5: "I, t, R"; loops "I, I, R" on q5 and q6; q5 → q6: "t, t, R"; q6 → q7: "t, I, R"; q7 → q8: "t, I, L"; loops "I, I, L" on q8 and q9; q8 → q9: "t, t, L"; q9 → q4: "t, t, R".]

10.8 Variants of Turing Machines


There are in fact many possible ways to define Turing machines,
of which ours is only one. We allow arbitrary finite alphabets; a more restricted definition might allow only two tape symbols, I and t. We allow the machine to write a symbol to the tape
and move at the same time; other definitions allow either writing or moving. We allow the possibility of writing without moving the tape head; other definitions leave out the N "instruction." Our definition assumes that the tape is infinite in one direction only; other definitions allow the tape to be infinite both to the left and the right. In fact, we might even allow any number of separate tapes, or even an infinite grid of squares. We represent the instruction set of the Turing machine by a transition function; other definitions use a transition relation.

This last relaxation of the definition is particularly interesting. In our definition, when the machine is in state q reading
symbol σ, δ(q, σ) determines what the new symbol, state, and
tape head position is. But if we allow the instruction set to be a
relation between current state-symbol pairs ⟨q, σ⟩ and new state-symbol-direction triples ⟨q′, σ′, D⟩, the action of the Turing machine may not be uniquely determined—the instruction relation may contain both ⟨q, σ, q′, σ′, D⟩ and ⟨q, σ, q″, σ″, D′⟩. In this
case we have a non-deterministic Turing machine. These play an
important role in computational complexity theory.

There are also different conventions for when a Turing machine halts: we say it halts when the transition function is undefined; other definitions require the machine to be in a special designated halting state. And there are different ways of representing numbers: we use unary representation, but you can also use binary representation (this requires two symbols in addition to t).

Now here is an interesting fact: none of these variations matters as to which functions are Turing computable. If a function
is Turing computable according to one definition, it is Turing
computable according to all of them.

10.9 The Church-Turing Thesis


Turing machines are supposed to be a precise replacement for
the concept of an effective procedure. Turing took it that anyone
who grasped the concept of an effective procedure and the con-
cept of a Turing machine would have the intuition that anything
that could be done via an effective procedure could be done by
Turing machine. This claim is given support by the fact that all
the other proposed precise replacements for the concept of an
effective procedure turn out to be extensionally equivalent to the
concept of a Turing machine—that is, they can compute exactly
the same set of functions. This claim is called the Church-Turing
thesis.

Definition 10.15 (Church-Turing thesis). The Church-Turing Thesis states that anything computable via an effective procedure is Turing computable.

The Church-Turing thesis is appealed to in two ways. The first kind of use of the Church-Turing thesis is an excuse for laziness. Suppose we have a description of an effective procedure to compute something, say, in "pseudo-code." Then we can invoke the Church-Turing thesis to justify the claim that the same function is computed by some Turing machine, even if we have not in fact constructed it.
The other use of the Church-Turing thesis is more philosophically interesting. It can be shown that there are functions which cannot be computed by a Turing machine. From this, using the Church-Turing thesis, one can conclude that such a function cannot be effectively computed, using any procedure whatsoever. For if there were such a procedure, by the Church-Turing thesis, it would follow that there would be a Turing machine that computes it. So if we can prove that there is no Turing machine that computes it, there also can't be an effective procedure. In particular, the Church-Turing thesis is invoked to claim that the so-called halting problem not only can-

not be solved by Turing machines, it cannot be effectively solved at all.

Summary
A Turing machine is a kind of idealized computation mecha-
nism. It consists of a one-way infinite tape, divided into squares,
each of which can contain a symbol from a pre-determined al-
phabet. The machine operates by moving a read-write head
along the tape. It may also be in one of a pre-determined num-
ber of states. The actions of the read-write head are determined
by a set of instructions; each instruction is conditional on the ma-
chine being in a certain state and reading a certain symbol, and
specifies which symbol the machine will write onto the current
square, whether it will move the read-write head one square left,
right, or stay put, and which state it will switch to. If the tape
contains a certain input, represented as a sequence of symbols
on the tape, and the machine is put into the designated start state
with the read-write head reading the leftmost square of the input,
the instruction set will step-wise determine a sequence of config-
urations of the machine: content of tape, position of read-write
head, and state of the machine. Should the machine encounter
a configuration in which the instruction set does not contain an
instruction for the current symbol read/state combination, the
machine halts, and the content of the tape is the output.
Numbers can very easily be represented as sequences of strokes on the tape of a Turing machine. We say a function f : N → N is Turing computable if there is a Turing machine which, whenever it is started on the unary representation of n as input, eventually halts with its tape containing the unary representation of f (n) as output. Many familiar arithmetical functions are easily (or not-
so-easily) shown to be Turing computable. Many other models
of computation other than Turing machines have been proposed;
and it has always turned out that the arithmetical functions com-
putable there are also Turing computable. This is seen as support
for the Church-Turing Thesis, that every arithmetical function that can effectively be computed is Turing computable.

Problems
Problem 10.1. Choose an arbitrary input and trace through the
configurations of the doubler machine in Example 10.4.

Problem 10.2. The doubler machine in Example 10.4 writes its


output to the right of the input. Come up with a new method
for solving the doubler problem which generates its output im-
mediately to the right of the end-of-tape marker. Build a machine
that executes your method. Check that your machine works by
tracing through the configurations.

Problem 10.3. Design a Turing-machine with alphabet {t, A, B }


that accepts any string of As and Bs where the number of As
is the same as the number of Bs and all the As precede all the
Bs, and rejects any string where the number of As is not equal
to the number of Bs or the As do not precede all the Bs. (E.g.,
the machine should accept AABB, and AAABBB, but reject both
AAB and AABBAABB.)

Problem 10.4. Design a Turing-machine with alphabet {t, A, B }


that takes as input any string α of As and Bs and duplicates them
to produce an output of the form αα. (E.g. input ABBA should
result in output ABBAABBA).

Problem 10.5. Alphabetical?: Design a Turing-machine with al-


phabet {t, A, B } that when given as input a finite sequence of As
and Bs checks to see if all the As appear left of all the Bs or not.
The machine should leave the input string on the tape, and out-
put either halt if the string is “alphabetical”, or loop forever if
the string is not.

Problem 10.6. Alphabetizer: Design a Turing-machine with al-


phabet {t, A, B } that takes as input a finite sequence of As and Bs

rearranges them so that all the As are to the left of all the Bs. (e.g.,
the sequence BABAA should become the sequence AAABB, and
the sequence ABBABB should become the sequence AABBBB).

Problem 10.7. Trace through the configurations of the addition machine of Example 10.12 for input ⟨3, 5⟩.

Problem 10.8. Subtraction: Design a Turing machine that when


given an input of two non-empty strings of strokes of length n
and m, where n > m, computes the function f (n, m) = n − m.

Problem 10.9. Equality: Design a Turing machine to compute the following function:

equality(x, y) = 1 if x = y, and
equality(x, y) = 0 if x ≠ y,

where x and y are integers greater than 0.

Problem 10.10. Design a Turing machine to compute the func-


tion min(x, y) where x and y are positive integers represented on
the tape by strings of I ’s separated by a t. You may use addi-
tional symbols in the alphabet of the machine.
The function min selects the smallest value from its argu-
ments, so min(3, 5) = 3, min(20, 16) = 16, and min(4, 4) = 4, and
so on.
CHAPTER 11

Undecidability
11.1 Introduction
It might seem obvious that not every function, even every arith-
metical function, can be computable. There are just too many,
whose behavior is too complicated. Functions defined from the
decay of radioactive particles, for instance, or other chaotic or
random behavior. Suppose we start counting 1-second intervals
from a given time, and define the function f (n) as the number
of particles in the universe that decay in the n-th 1-second inter-
val after that initial moment. This seems like a candidate for a
function we cannot ever hope to compute.
But it is one thing to not be able to imagine how one would
compute such functions, and quite another to actually prove that
they are uncomputable. In fact, even functions that seem hope-
lessly complicated may, in an abstract sense, be computable. For
instance, suppose the universe is finite in time—some day, in the
very distant future the universe will contract into a single point,
as some cosmological theories predict. Then there is only a fi-
nite (but incredibly large) number of seconds from that initial
moment for which f (n) is defined. And any function which is
defined for only finitely many inputs is computable: we could list
the outputs in one big table, or code it in one very big Turing
machine state transition diagram.


We are often interested in special cases of functions whose values give the answers to yes/no questions. For instance, the question "is n a prime number?" is associated with the function

isprime(n) = 1 if n is prime, and
isprime(n) = 0 otherwise.

We say that a yes/no question can be effectively decided, if the associated 1/0-valued function is effectively computable.
To prove mathematically that there are functions which can-
not be effectively computed, or problems that cannot be effectively
decided, it is essential to fix a specific model of computation,
and show about it that there are functions it cannot compute or
problems it cannot decide. We can show, for instance, that not
every function can be computed by Turing machines, and not
every problem can be decided by Turing machines. We can then
appeal to the Church-Turing thesis to conclude that not only are
Turing machines not powerful enough to compute every function,
but no effective procedure can.
The key to proving such negative results is the fact that we
can assign numbers to Turing machines themselves. The easiest
way to do this is to enumerate them, perhaps by fixing a specific
way to write down Turing machines and their programs, and then
listing them in a systematic fashion. Once we see that this can
be done, then the existence of Turing-uncomputable functions
follows by simple cardinality considerations: the set of functions
from N to N (in fact, even just from N to {0, 1}) is uncountable,
but since we can enumerate all the Turing machines, the set of
Turing-computable functions is only countably infinite.
We can also define specific functions and problems which we
can prove to be uncomputable and undecidable, respectively.
One such problem is the so-called Halting Problem. Turing ma-
chines can be finitely described by listing their instructions. Such
a description of a Turing machine, i.e., a Turing machine pro-
gram, can of course be used as input to another Turing machine.
So we can consider Turing machines that decide questions about

other Turing machines. One particularly interesting question is


this: “Does the given Turing machine eventually halt when started
on input n?” It would be nice if there were a Turing machine that
could decide this question: think of it as a quality-control Turing
machine which ensures that Turing machines don’t get caught
in infinite loops and such. The interesting fact, which Turing
proved, is that there cannot be such a Turing machine. There
cannot be a single Turing machine which, when started on in-
put consisting of a description of a Turing machine M and some
number n, will always halt with either output 1 or 0 according to
whether M would have halted when started on input n
or not.
Once we have examples of specific undecidable problems we
can use them to show that other problems are undecidable, too.
For instance, one celebrated undecidable problem is the question,
“Is the first-order formula A valid?”. There is no Turing machine
which, given as input a first-order formula A, is guaranteed to halt
with output 1 or 0 according to whether A is valid or not. His-
torically, the question of finding a procedure to effectively solve
this problem was called simply “the” decision problem; and so we
say that the decision problem is unsolvable. Turing and Church
proved this result independently at around the same time, so it
is also called the Church-Turing Theorem.

11.2 Enumerating Turing Machines


We can show that the set of all Turing-machines is countable. This
follows from the fact that each Turing machine can be finitely
described. The set of states and the tape vocabulary are finite
sets. The transition function is a partial function from Q × Σ
to Q × Σ × {L, R, N }, and so likewise can be specified by listing
its values for the finitely many argument pairs for which it is de-
fined. Of course, strictly speaking, the states and vocabulary can
be anything; but the behavior of the Turing machine is indepen-
dent of which objects serve as states and vocabulary. So we may

assume, for instance, that the states and vocabulary symbols are
natural numbers, or that the states and vocabulary are all strings
of letters and digits.
Suppose we fix a countably infinite vocabulary for specifying
Turing machines: σ0 = ., σ1 = t, σ2 = I , σ3 , . . . , R, L, N ,
q 0 , q 1 , . . . . Then any Turing machine can be specified by some
finite string of symbols from this alphabet (though not every fi-
nite string of symbols specifies a Turing machine). For instance,
suppose we have a Turing machine M = ⟨Q, Σ, q, δ⟩ where

Q = {q′0, . . . , q′n} ⊆ {q0, q1, . . . } and
Σ = {., σ′1, σ′2, . . . , σ′m} ⊆ {σ0, σ1, . . . }.

We could specify it by the string

q′0 q′1 . . . q′n . σ′1 . . . σ′m . q . S(σ′0, q′0) . . . . . S(σ′m, q′n)

where S(σ′i, q′j) is the string σ′i q′j δ(σ′i, q′j) if δ(σ′i, q′j) is defined, and σ′i q′j otherwise.
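A sketch of such an encoding in Python (the formatting choices are our own; any injective scheme would do equally well):

def describe(states, symbols, start, delta):
    # Serialize a machine as one string, roughly following the scheme
    # above: states, then symbols, then the start state, then one entry
    # per (symbol, state) pair, with the value of delta appended when defined.
    parts = [" ".join(states), " ".join(symbols), start]
    for s in symbols:
        for q in states:
            if (q, s) in delta:
                q2, s2, D = delta[(q, s)]
                parts.append(" ".join([s, q, q2, s2, D]))
            else:
                parts.append(" ".join([s, q]))
    return " . ".join(parts)

Since each machine yields a finite string over a fixed countable vocabulary, and distinct machines yield distinct strings, this again shows that the Turing machines can be enumerated.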

Theorem 11.1. There are functions from N to N which are not Turing
computable.

Proof. We know that the set of finite strings of symbols from


a countably infinite alphabet is countable. This gives us that the
set of descriptions of Turing machines, as a subset of the finite
strings from the countable vocabulary {q 0, q 1, . . . , ., σ1, σ2, . . . },
is itself enumerable. Since every Turing computable function is
computed by some (in fact, many) Turing machines, this means
that the set of all Turing computable functions from N to N is
also enumerable.
On the other hand, the set of all functions from N to N is not
countable. This follows immediately from the fact that not even
the set of all functions of one argument from N to the set {0, 1}
is countable. If all functions were computable by some Turing
machine we could enumerate the set of all functions. So there
are some functions that are not Turing-computable. □

11.3 The Halting Problem


Assume we have fixed some finite descriptions of Turing ma-
chines. Using these, we can enumerate Turing machines via their
descriptions, say, ordered by the lexicographic ordering. Each
Turing machine thus receives an index: its place in the enumera-
tion M1 , M 2 , M3 , . . . of Turing machine descriptions.
We know that there must be non-Turing-computable func-
tions: the set of Turing machine descriptions—and hence the set
of Turing machines—is enumerable, but the set of all functions
from N to N is not. But we can find specific examples of non-
computable function as well. One such function is the halting
function.

Definition 11.2 (Halting function). The halting function h is defined as

h(e, n) = 0 if machine Me does not halt for input n,
h(e, n) = 1 if machine Me halts for input n.

Definition 11.3 (Halting problem). The Halting Problem is the problem of determining (for any e, n) whether the Turing machine Me halts for an input of n strokes.

We show that h is not Turing-computable by showing that a


related function, s , is not Turing-computable. This proof relies on
the fact that anything that can be computed by a Turing machine
can be computed using just two symbols: t and I , and the fact
that two Turing machines can be hooked together to create a
single machine.

Definition 11.4. The function s is defined as

s(e) = 0 if machine Me does not halt for input e,
s(e) = 1 if machine Me halts for input e.

Lemma 11.5. The function s is not Turing computable.

Proof. We suppose, for contradiction, that the function s is Turing-


computable. Then there would be a Turing machine S that com-
putes s . We may assume, without loss of generality, that when
S halts, it does so while scanning the first square. This machine
can be “hooked up” to another machine J , which halts if it is
started on a blank tape (i.e., if it reads t in the initial state while
scanning the square to the right of the end-of-tape symbol), and
otherwise wanders off to the right, never halting. S _ J , the
machine created by hooking S to J , is a Turing machine, so it is
Me for some e (i.e., it appears somewhere in the enumeration).
Start Me on an input of e I s. There are two possibilities: either
Me halts or it does not halt.
1. Suppose Me halts for an input of e I s. Then s (e ) = 1. So
S , when started on e , halts with a single I as output on the
tape. Then J starts with an I on the tape. In that case J
does not halt. But Me is the machine S _ J , so it should
do exactly what S followed by J would do. So Me cannot
halt for an input of e I ’s.
2. Now suppose Me does not halt for an input of e I s. Then
s (e ) = 0, and S , when started on input e , halts with a blank
tape. J , when started on a blank tape, immediately halts.
Again, Me does what S followed by J would do, so Me must
halt for an input of e I ’s.
This shows there cannot be a Turing machine S : s is not Turing
computable. □

Theorem 11.6 (Unsolvability of the Halting Problem). The halt-


ing problem is unsolvable, i.e., the function h is not Turing computable.

Proof. Suppose h were Turing computable, say, by a Turing ma-


chine H . We could use H to build a Turing machine that com-
putes s : First, make a copy of the input (separated by a blank).
Then move back to the beginning, and run H . We can clearly
make a machine that does the former, and if H existed, we would
be able to “hook it up” to such a modified doubling machine to
get a new machine which would determine if Me halts on input e ,
i.e., computes s . But we’ve already shown that no such machine
can exist. Hence, h is also not Turing computable. □
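The shape of both arguments is easy to see in programming terms. Suppose, hypothetically, that a function halts(source, inp) decided the halting problem for programs; the following self-referential program then behaves like S _ J. This is only an illustration of the argument, not working code, since by Theorem 11.6 halts cannot actually be implemented:

def halts(source, inp):
    # hypothetical decider: returns True iff the program given by
    # source halts on input inp; by Theorem 11.6 it cannot exist
    raise NotImplementedError

def diagonal(source):
    if halts(source, source):   # plays the role of S
        while True:             # plays the role of J: loop forever
            pass
    return                      # otherwise, halt immediately

Running diagonal on its own source leads to a contradiction either way: if it halts, halts returned True, so it loops; if it loops, halts returned False, so it halts.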

11.4 The Decision Problem


We say that first-order logic is decidable iff there is an effective
method for determining whether or not a given sentence is valid.
As it turns out, there is no such method: the problem of deciding
validity of first-order sentences is unsolvable.
In order to establish this important negative result, we prove
that the decision problem cannot be solved by a Turing machine.
That is, we show that there is no Turing machine which, when-
ever it is started on a tape that contains a first-order sentence,
eventually halts and outputs either 1 or 0 depending on whether
the sentence is valid or not. By the Church-Turing thesis, every
function which is computable is Turing computable. So if this
“validity function” were effectively computable at all, it would be
Turing computable. If it isn’t Turing computable, then, it also
cannot be effectively computable.
Our strategy for proving that the decision problem is unsolv-
able is to reduce the halting problem to it. This means the follow-
ing: We have proved that the function h(e, w) that halts with out-
put 1 if the Turing-machine described by e halts on input w and
outputs 0 otherwise, is not Turing-computable. We will show that
if there were a Turing machine that decides validity of first-order

sentences, then there is also a Turing machine that computes h.


Since h cannot be computed by a Turing machine, there cannot
be a Turing machine that decides validity either.
The first step in this strategy is to show that for every input w
and a Turing machine M , we can effectively describe a sentence T
representing M and w and a sentence E expressing “M eventually
halts” such that:
⊨ T → E iff M halts for input w.
The bulk of our proof will consist in describing these sentences
T (M, w) and E(M, w) and verifying that T (M, w) → E(M, w) is
valid iff M halts on input w.

11.5 Representing Turing Machines


In order to represent Turing machines and their behavior by a
sentence of first-order logic, we have to define a suitable language.
The language consists of two parts: predicate symbols for describ-
ing configurations of the machine, and expressions for counting
execution steps (“moments”) and positions on the tape. The lat-
ter require an initial moment, 0, a "successor" function symbol which is traditionally written as a postfix ′, and an ordering x < y
of “before.”

Definition 11.7. Given a Turing machine M = ⟨Q, Σ, q0, δ⟩, the language LM consists of:

1. A two-place predicate symbol Qq(x, y) for every state q ∈ Q. Intuitively, Qq(m, n) expresses "after n steps, M is in state q scanning the mth square."

2. A two-place predicate symbol Sσ(x, y) for every symbol σ ∈ Σ. Intuitively, Sσ(m, n) expresses "after n steps, the mth square contains symbol σ."

3. A constant symbol 0

4. A one-place function symbol ′

5. A two-place predicate symbol <

For each number n there is a canonical term n, the numeral for n, which represents it in LM. The numeral for 0 is the constant symbol 0, the numeral for 1 is 0′, the numeral for 2 is 0′′, and so on. More formally, the numeral for n + 1 is the numeral for n followed by ′.

The sentences describing the operation of the Turing machine M on input w = σi1 . . . σik are the following:

1. Axioms describing numbers:

   a) A sentence that says that the successor function is injective:

      ∀x ∀y (x′ = y′ → x = y)

   b) A sentence that says that every number is less than its successor:

      ∀x (x < x′)

   c) A sentence that ensures that < is transitive:

      ∀x ∀y ∀z ((x < y ∧ y < z) → x < z)

   d) A sentence that connects < and =:

      ∀x ∀y (x < y → x ≠ y)

2. Axioms describing the input configuration:

   a) M is in the initial state q0 at time 0, scanning square 1:

      Qq0(1, 0)

   b) The first k + 1 squares contain the symbols ., σi1, . . . , σik:

      S.(0, 0) ∧ Sσi1(1, 0) ∧ · · · ∧ Sσik(k, 0)

   c) Otherwise, the tape is empty:

      ∀x (k < x → St(x, 0))

3. Axioms describing the transition from one configuration to the next:

   For the following, let A(x, y) be the conjunction of all sentences of the form

      ∀z (((z < x ∨ x < z) ∧ Sσ(z, y)) → Sσ(z, y′))

   where σ ∈ Σ. We use A(m, n) to express "other than at square m, the tape after n + 1 steps is the same as after n steps."

   a) For every instruction δ(qi, σ) = ⟨qj, σ′, R⟩, the sentence:

      ∀x ∀y ((Qqi(x, y) ∧ Sσ(x, y)) →
             (Qqj(x′, y′) ∧ Sσ′(x, y′) ∧ A(x, y)))

      This says that if, after y steps, the machine is in state qi scanning square x which contains symbol σ, then after y + 1 steps it is scanning square x + 1, is in state qj, square x now contains σ′, and every square other than x contains the same symbol as it did after y steps.
   b) For every instruction δ(qi, σ) = ⟨qj, σ′, L⟩, the sentence:

      ∀x ∀y ((Qqi(x′, y) ∧ Sσ(x′, y)) →
             (Qqj(x, y′) ∧ Sσ′(x′, y′) ∧ A(x, y)))

      Take a moment to think about how this works: now we don't start with "if scanning square x . . . " but: "if scanning square x + 1 . . . " A move to the left means that in the next step the machine is scanning square x. But the square that is written on is x + 1. We do it this way since we don't have subtraction or a predecessor function.

   c) For every instruction δ(qi, σ) = ⟨qj, σ′, N⟩, the sentence:

      ∀x ∀y ((Qqi(x, y) ∧ Sσ(x, y)) →
             (Qqj(x, y′) ∧ Sσ′(x, y′) ∧ A(x, y)))

Let T(M, w) be the conjunction of all the above sentences for Turing machine M and input w.
In order to express that M eventually halts, we have to find a sentence that says "after some number of steps, the transition function will be undefined." Let X be the set of all pairs ⟨q, σ⟩ such that δ(q, σ) is undefined. Let E(M, w) then be the sentence

∃x ∃y (⋁⟨q,σ⟩∈X (Qq(x, y) ∧ Sσ(x, y)))

If we use a Turing machine with a designated halting state h, it is even easier: then the sentence E(M, w)

∃x ∃y Qh(x, y)

expresses that the machine eventually halts.
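Since T(M, w) and E(M, w) are built up mechanically from δ, they can be generated by program. As a small illustration, here is a sketch that produces E(M, w) as a plain string, using the dictionary representation of δ from chapter 10 (the function name and formula syntax are our own):

def halting_sentence(delta, states, symbols):
    # X = the set of pairs (q, sigma) for which delta is undefined
    X = [(q, s) for q in states for s in symbols if (q, s) not in delta]
    disjuncts = ["(Q_{}(x,y) & S_{}(x,y))".format(q, s) for q, s in X]
    return "exists x exists y (" + " | ".join(disjuncts) + ")"

For the even machine, halting_sentence(delta_even, ["q0", "q1"], [">", "_", "I"]) produces a disjunction over the three undefined pairs, including the "accepting" one, ⟨q0, t⟩.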

11.6 Verifying the Representation


In order to verify that our representation works, we first have to
make sure that if M halts on input w, then T (M, w) → E(M, w) is
valid. We can do this simply by proving that T (M, w) implies a de-
scription of the configuration of M for each step of the execution

of M on input w. If M halts on input w, then for some n, M will be


in a halting configuration at step n (and be scanning square m, for
some m). Hence, T (M, w) implies Qq (m, n) ∧ Sσ (m, n) for some q
and σ such that δ(q, σ) is undefined.

Definition 11.8. Let C(M, w, n) be the sentence

Qq(m, n) ∧ Sσ0(0, n) ∧ · · · ∧ Sσk(k, n) ∧ ∀x (k < x → St(x, n))

where q is the state of M at time n, M is scanning square m at time n, square i contains symbol σi at time n for 0 ≤ i ≤ k, and k is the right-most non-blank square of the tape at time 0, or the right-most square the tape head has visited after n steps (if n > 0).

Suppose that M does halt for input w. Then there is some time n, state q, square m, and symbol σ such that:

1. At time n the machine is in state q scanning square m on which σ appears.

2. The transition function δ(q, σ) is undefined.

C(M, w, n) will be the description of this time and will include the clauses Qq(m, n) and Sσ(m, n). These clauses together imply E(M, w):

∃x ∃y (⋁⟨q,σ⟩∈X (Qq(x, y) ∧ Sσ(x, y)))

since Qq′(m, n) ∧ Sσ′(m, n) ⊨ ⋁⟨q,σ⟩∈X (Qq(m, n) ∧ Sσ(m, n)), as ⟨q′, σ′⟩ ∈ X.

So if M halts for input w, then there is some time n such that C(M, w, n) ⊨ E(M, w). Since consequence is transitive, it is sufficient to show that for any time n, T(M, w) ⊨ C(M, w, n).

Lemma 11.9. For each n, if M has not halted after n steps, then T(M, w) ⊨ C(M, w, n).

Proof. Induction basis: If n = 0, then the conjuncts of C(M, w, 0) are also conjuncts of T(M, w), so entailed by it.

Inductive hypothesis: If M has not halted before the nth step, then T(M, w) ⊨ C(M, w, n).
Suppose n > 0 and after n steps, M started on w is in state q
scanning square m.
Suppose that M has not just halted, i.e., it has not halted
before the (n + 1)st step. If T(M, w) is true in a structure M, the
inductive hypothesis tells us that C (M, w, n) is true in M also. In
particular, Qq (m, n) and Sσ (m, n) are true in M.
Since M does not halt after n steps, there must be an instruc-
tion of one of the following three forms in the program of M :

1. δ(q, σ) = ⟨q′, σ′, R⟩

2. δ(q, σ) = ⟨q′, σ′, L⟩

3. δ(q, σ) = ⟨q′, σ′, N⟩

We will consider each of these three cases in turn. First, assume that m ≤ k.

1. Suppose there is an instruction of the form (1). By Definition 11.7, (3a), this means that

   ∀x ∀y ((Qq(x, y) ∧ Sσ(x, y)) →
          (Qq′(x′, y′) ∧ Sσ′(x, y′) ∧ A(x, y)))

   is a conjunct of T(M, w). This entails the following sentence, through universal instantiation:

   (Qq(m, n) ∧ Sσ(m, n)) → (Qq′(m′, n′) ∧ Sσ′(m, n′) ∧ A(m, n)).



   This in turn entails,

   Qq′(m′, n′) ∧ Sσ′(m, n′) ∧
   Sσ0(0, n′) ∧ · · · ∧ Sσk(k, n′) ∧
   ∀x (k < x → St(x, n′))

   The first line comes directly from the consequent of the preceding conditional. Each conjunct in the middle line—which excludes Sσm(m, n′)—follows from the corresponding conjunct in C(M, w, n) together with A(m, n). The last line follows from the corresponding conjunct in C(M, w, n), the fact that k < x implies m < x (since m ≤ k), and A(m, n). Together, this just is C(M, w, n + 1).

2. Suppose there is an instruction of the form (2). Then, by Definition 11.7, (3b),

   ∀x ∀y ((Qq(x′, y) ∧ Sσ(x′, y)) →
          (Qq′(x, y′) ∧ Sσ′(x′, y′) ∧ A(x, y)))

   is a conjunct of T(M, w), which entails the following sentence:

   (Qq(m′, n) ∧ Sσ(m′, n)) → (Qq′(m, n′) ∧ Sσ′(m′, n′) ∧ A(m, n)),

   which in turn implies

   Qq′(m, n′) ∧ Sσ′(m′, n′) ∧
   Sσ0(0, n′) ∧ · · · ∧ Sσk(k, n′) ∧
   ∀x (k < x → St(x, n′))

   as before. But this just is C(M, w, n + 1).

3. Case (3) is left as an exercise.

If m > k and σ′ ≠ t, the last instruction has written a non-blank symbol to the right of the right-most non-blank square k at time n. In this case, C(M, w, n + 1) has the form

Qq′(m′, n′) ∧
Sσ0(0, n′) ∧ · · · ∧ Sσk(k, n′) ∧
St(k + 1, n′) ∧ · · · ∧ St(m − 1, n′) ∧
Sσ′(m, n′) ∧
∀x (m < x → St(x, n′))

For k < i < m, St(i, n) follows from the conjunct ∀x (k < x → St(x, n)) of C(M, w, n) and the fact that T(M, w) ⊨ k < i if k < i. St(i, n′) then follows from A(m, n) and i < m. From ∀x (k < x → St(x, n)) we get ∀x (m < x → St(x, n)) since k < m and < is transitive. From that plus A(m, n) we get ∀x (m < x → St(x, n′)). Similarly for cases (2) and (3).

We have shown that for any n, T(M, w) ⊨ C(M, w, n). □

Lemma 11.10. If M halts on input w, then T(M, w) → E(M, w) is valid.

Proof. By Lemma 11.9, we know that, for any time n, the description C(M, w, n) of the configuration of M at time n is a consequence of T(M, w). Suppose M halts after k steps. It will be scanning square m, say. Then C(M, w, k) contains as conjuncts both Qq(m, k) and Sσ(m, k) with δ(q, σ) undefined. Thus, C(M, w, k) ⊨ E(M, w). But then T(M, w) ⊨ E(M, w) and therefore T(M, w) → E(M, w) is valid. □

To complete the verification of our claim, we also have to establish the reverse direction: if T(M, w) → E(M, w) is valid, then M does in fact halt when started on input w.

Lemma 11.11. If ⊨ T(M, w) → E(M, w), then M halts on input w.

Proof. Consider the LM-structure M with domain N which interprets the constant symbol 0 as the number 0, ′ as the successor function, and < as the less-than relation, and the predicates Qq and Sσ as follows:

Qq^M = {⟨m, n⟩ : started on w, after n steps, M is in state q scanning square m}

Sσ^M = {⟨m, n⟩ : started on w, after n steps, square m of M contains symbol σ}

In other words, we construct the structure M so that it describes what M started on input w actually does, step by step. Clearly, M ⊨ T(M, w). If ⊨ T(M, w) → E(M, w), then also M ⊨ E(M, w), i.e.,

M ⊨ ∃x ∃y (⋁⟨q,σ⟩∈X (Qq(x, y) ∧ Sσ(x, y))).

As |M| = N, there must be m, n ∈ N so that M ⊨ Qq(m, n) ∧ Sσ(m, n) for some q and σ such that δ(q, σ) is undefined. By the definition of M, this means that M started on input w after n steps is in state q and reading symbol σ, and the transition function is undefined, i.e., M has halted. □

11.7 The Decision Problem is Unsolvable

Theorem 11.12. The decision problem is unsolvable.

Proof. Suppose the decision problem were solvable, i.e., suppose


there were a Turing machine D of the following sort. Whenever D
is started on a tape that contains a sentence B of first-order logic
as input, D eventually halts, and outputs 1 iff B is valid and 0 oth-
erwise. Then we could solve the halting problem as follows. We
construct a Turing machine E that, given as input the number e
of Turing machine Me and input w, computes the corresponding
sentence T (Me , w) → E(Me , w) and halts, scanning the leftmost

square on the tape. The machine E _ D would then, given input


e and w, first compute T (Me , w) → E(Me , w) and then run the de-
cision problem machine D on that input. D halts with output 1
iff T (Me , w) → E(Me , w) is valid and outputs 0 otherwise. By
Lemma 11.11 and Lemma 11.10, T (Me , w) → E(Me , w) is valid
iff Me halts on input w. Thus, E _ D, given input e and w, halts
with output 1 iff Me halts on input w and halts with output 0 oth-
erwise. In other words, E _ D would solve the halting problem.
But we know, by Theorem 11.6, that no such Turing machine can
exist. □

Summary
Turing machines are determined by their instruction sets, which
are finite sets of quintuples (for every state and symbol read, spec-
ify new state, symbol written, and movement of the head). The
finite sets of quintuples are enumerable, so there is a way of as-
sociating a number with each Turing machine instruction set.
The index of a Turing machine is the number associated with
its instruction set under a fixed such schema. In this way we can
“talk about” Turing machines indirectly—by talking about their
indices.
One important problem about the behavior of Turing ma-
chines is whether they eventually halt. Let h(e, n) be the function which is 1 if the Turing machine with index e halts when started on input n, and 0 otherwise. It is called the halting
function. The question of whether the halting function is itself
Turing computable is called the halting problem. The answer is
no: the halting problem is unsolvable. This is established using
a diagonal argument.
The halting problem is only one example of a larger class
of problems of the form “can X be accomplished using Turing
machines.” Another central problem of logic is the decision
problem for first-order logic: is there a Turing machine that
can decide if a given sentence is valid or not. This famous prob-

lem was also solved negatively: the decision problem is unsolv-


able. This is established by a reduction argument: we can asso-
ciate with each Turing machine M and input w a first-order sen-
tence T (M, w) → E(M, w) which is valid iff M halts when started
on input w. If the decision problem were solvable, we could thus
use it to solve the halting problem.

Problems
Problem 11.1. The Three Halting (3-Halt) problem is the prob-
lem of giving a decision procedure to determine whether or not
an arbitrarily chosen Turing Machine halts for an input of three
strokes on an otherwise blank tape. Prove that the 3-Halt problem
is unsolvable.

Problem 11.2. Show that if the halting problem is solvable for Turing machine and input pairs Me and n where e ≠ n, then it is also solvable for the cases where e = n.

Problem 11.3. We proved that the halting problem is unsolvable


if the input is a number e , which identifies a Turing machine Me
via an enumeration of all Turing machines. What if we allow
the description of Turing machines from section 11.2 directly as
input? (This would require a larger alphabet of course.) Can
there be a Turing machine which decides the halting problem
but takes as input descriptions of Turing machines rather than
indices? Explain why or why not.

Problem 11.4. Complete case (3) of the proof of Lemma 11.9.


APPENDIX A

Induction
A.1 Introduction
Induction is an important proof technique which is used, in dif-
ferent forms, in almost all areas of logic, theoretical computer
science, and mathematics. It is needed to prove many of the re-
sults in logic.
Induction is often contrasted with deduction, and character-
ized as the inference from the particular to the general. For in-
stance, if we observe many green emeralds, and nothing that we
would call an emerald that’s not green, we might conclude that
all emeralds are green. This is an inductive inference, in that it
proceeds from many particular cases (this emerald is green, that
emerald is green, etc.) to a general claim (all emeralds are green).
Mathematical induction is also an inference that concludes a gen-
eral claim, but it is of a very different kind than this "simple in-
duction.”
Very roughly, an inductive proof in mathematics concludes
that all mathematical objects of a certain sort have a certain prop-
erty. In the simplest case, the mathematical objects an induc-
tive proof is concerned with are natural numbers. In that case
an inductive proof is used to establish that all natural numbers
have some property, and it does this by showing that (1) 0 has
the property, and (2) whenever a number n has the property, so


does n + 1. Induction on natural numbers can then also often


be used to prove general claims about mathematical objects that can
be assigned numbers. For instance, finite sets each have a finite
number n of elements, and if we can use induction to show that
every number n has the property “all finite sets of size n are . . . ”
then we will have shown something about all finite sets.
Induction can also be generalized to mathematical objects
that are inductively defined. For instance, expressions of a formal
language such as those of first-order logic are defined induc-
tively. Structural induction is a way to prove results about all such
expressions. Structural induction, in particular, is very useful—
and widely used—in logic.

A.2 Induction on N
In its simplest form, induction is a technique used to prove results
for all natural numbers. It uses the fact that by starting from 0 and
repeatedly adding 1 we eventually reach every natural number.
So to prove that something is true for every number, we can (1)
establish that it is true for 0 and (2) show that whenever a number
has it, the next number has it too. If we abbreviate “number n
has property P ” by P (n), then a proof by induction that P (n) for
all n ∈ N consists of:

1. a proof of P (0), and

2. a proof that, for any n, if P (n) then P (n + 1).

To make this crystal clear, suppose we have both (1) and (2).
Then (1) tells us that P (0) is true. If we also have (2), we know
in particular that if P (0) then P (0 + 1), i.e., P (1). (This follows
from the general statement “for any n, if P (n) then P (n + 1)” by
putting 0 for n.) So by modus ponens, we have that P (1). From
(2) again, now taking 1 for n, we have: if P (1) then P (2). Since
we’ve just established P (1), by modus ponens, we have P (2). And
so on. For any number k, after doing this k times, we eventually

arrive at P (k ). So (1) and (2) together establish P (k ) for any


k ∈ N.
Let’s look at an example. Suppose we want to find out how
many different sums we can throw with n dice. Although it might
seem silly, let’s start with 0 dice. If you have no dice there’s only
one possible sum you can “throw”: no dots at all, which sums
to 0. So the number of different possible throws is 1. If you have
only one die, i.e., n = 1, there are six possible values, 1 through 6.
With two dice, we can throw any sum from 2 through 12, that’s 11
possibilities. With three dice, we can throw any number from 3 to
18, i.e., 16 different possibilities. 1, 6, 11, 16: looks like a pattern:
maybe the answer is 5n + 1? Of course, 5n + 1 is the maximum
possible, because there are only 5n + 1 numbers between n, the
lowest value you can throw with n dice (all 1’s) and 6n, the highest
you can throw (all 6’s).

Theorem A.1. With n dice one can throw all 5n + 1 possible values
between n and 6n.

Proof. Let P (n) be the claim: “It is possible to throw any number
between n and 6n using n dice.” To use induction, we prove:

1. The induction basis P (1), i.e., with just one die, you can
throw any number between 1 and 6.

2. The induction step, for all k , if P (k ) then P (k + 1).

(1) is proved by inspecting a 6-sided die. It has all 6 sides, and every number between 1 and 6 shows up on one of the sides.
So it is possible to throw any number between 1 and 6 using a
single die.
To prove (2), we assume the antecedent of the conditional,
i.e., P (k ). This assumption is called the inductive hypothesis. We
use it to prove P (k + 1). The hard part is to find a way of thinking
about the possible values of a throw of k + 1 dice in terms of the
possible values of throws of k dice plus of throws of the extra

k + 1-st die—this is what we have to do, though, if we want to use


the inductive hypothesis.
The inductive hypothesis says we can get any number between
k and 6k using k dice. If we throw a 1 with our (k + 1)-st die, this
adds 1 to the total. So we can throw any value between k + 1 and
6k + 1 by throwing k dice and then rolling a 1 with the (k + 1)-st
die. What’s left? The values 6k + 2 through 6k + 6. We can get
these by rolling k 6s and then a number between 2 and 6 with
our (k + 1)-st die. Together, this means that with k + 1 dice we
can throw any of the numbers between k + 1 and 6(k + 1), i.e.,
we’ve proved P (k + 1) using the assumption P (k ), the inductive
hypothesis. 
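Though the theorem is now proved, a brute-force check for small n is easy and reassuring (a sketch of our own; the function name is ours):

    # Enumerate all rolls of n dice and collect the achievable sums.
    from itertools import product

    def achievable_sums(n):
        return {sum(roll) for roll in product(range(1, 7), repeat=n)}

    for n in range(1, 6):
        sums = achievable_sums(n)
        assert sums == set(range(n, 6 * n + 1))  # every value from n to 6n
        assert len(sums) == 5 * n + 1            # exactly 5n + 1 values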

Very often we use induction when we want to prove something


about a series of objects (numbers, sets, etc.) that is itself defined
“inductively,” i.e., by defining the (n + 1)-st object in terms of the n-th. For instance, we can define the sum sn of the natural numbers
up to n by

s0 = 0
sn+1 = sn + (n + 1)

This definition gives:

s0 = 0,
s1 = s0 + 1 = 1,
s2 = s1 + 2 = 1 + 2 = 3,
s3 = s2 + 3 = 1 + 2 + 3 = 6, etc.
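The two defining equations translate directly into a recursive program (a sketch of our own, mirroring the definition):

    def s(n):
        # s_0 = 0; s_(n+1) = s_n + (n + 1)
        if n == 0:
            return 0
        return s(n - 1) + n

    assert [s(n) for n in range(4)] == [0, 1, 3, 6]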

Now we can prove, by induction, that sn = n(n + 1)/2.



Proposition A.2. sn = n(n + 1)/2.

Proof. We have to prove (1) that s0 = 0 · (0 + 1)/2 and (2) if


sn = n(n + 1)/2 then sn+1 = (n + 1)(n + 2)/2. (1) is obvious. To
prove (2), we assume the inductive hypothesis: sn = n(n + 1)/2.
Using it, we have to show that sn+1 = (n + 1)(n + 2)/2.
What is sn+1 ? By the definition, sn+1 = sn + (n + 1). By in-
ductive hypothesis, sn = n(n + 1)/2. We can substitute this into
the previous equation, and then just need a bit of arithmetic of
fractions:
sn+1 = n(n + 1)/2 + (n + 1)
     = n(n + 1)/2 + 2(n + 1)/2
     = (n(n + 1) + 2(n + 1))/2
     = (n + 2)(n + 1)/2.


The important lesson here is that if you’re proving something


about some inductively defined sequence an , induction is the ob-
vious way to go. And even if it isn’t (as in the case of the possibil-
ities of dice throws), you can use induction if you can somehow
relate the case for n + 1 to the case for n.

A.3 Strong Induction


In the principle of induction discussed above, we prove P (0) and
also if P (n), then P (n+1). In the second part, we assume that P (n)
is true and use this assumption to prove P (n + 1). Equivalently,
of course, we could assume P (n − 1) and use it to prove P (n)—
the important part is that we be able to carry out the inference
from any number to its successor; that we can prove the claim

in question for any number under the assumption it holds for its
predecessor.
There is a variant of the principle of induction in which we
don’t just assume that the claim holds for the predecessor n − 1
of n, but for all numbers smaller than n, and use this assumption
to establish the claim for n. This also gives us the claim P (k ) for
all k ∈ N. For once we have established P (0), we have thereby
established that P holds for all numbers less than 1. And if we
know that if P (l ) for all l < n then P (n), we know this in particular
for n = 1. So we can conclude P (2). With this we have proved
P (0), P (1), P (2), i.e., P (l ) for all l < 3, and since we have also the
conditional, if P (l ) for all l < 3, then P (3), we can conclude P (3),
and so on.
In fact, if we can establish the general conditional “for all n,
if P (l ) for all l < n, then P (n),” we do not have to establish P (0)
anymore, since it follows from it. For remember that a general
claim like “for all l < n, P (l )” is true if there are no l < n. This
is a case of vacuous quantification: “all As are Bs” is true if there
are no As, ∀x (A(x) → B(x)) is true if no x satisfies A(x). In this
case, the formalized version would be “∀l (l < n → P (l ))”—and
that is true if there are no l < n. And if n = 0 that’s exactly the case: there are no l < 0, hence “for all l < 0, P (l )” is true, whatever P is.
A proof of “if P (l ) for all l < n, then P (n)” thus automatically
establishes P (0).
This variant is useful if establishing the claim for n can’t be
made to just rely on the claim for n − 1 but may require the
assumption that it is true for one or more l < n.
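Stated as a single schema, in the notation already used above, strong induction thus says:

    ∀n ((∀l (l < n → P (l ))) → P (n)) → ∀n P (n)

and, as just observed, no separate basis clause is needed: the case n = 0 is covered because the inner antecedent is vacuously true there.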

A.4 Inductive Definitions


In logic we very often define kinds of objects inductively, i.e., by
specifying rules for what counts as an object of the kind to be de-
fined which explain how to get new objects of that kind from old
objects of that kind. For instance, we often define special kinds
of sequences of symbols, such as the terms and formulas of a lan-

guage, by induction. For a simpler example, consider strings of


parentheses, such as “(()(” or “()(())”. In the second string, the
parentheses “balance,” in the first one, they don’t. The shortest
such expression is “()”. Actually, the very shortest string of paren-
theses in which every opening parenthesis has a matching closing
parenthesis is “”, i.e., the empty sequence ∅. If we already have
a parenthesis expression p, then putting matching parentheses
around it makes another balanced parenthesis expression. And
if p and p′ are two balanced parentheses expressions, writing one after the other, “pp′” is also a balanced parenthesis expression.
In fact, any sequence of balanced parentheses can be generated
in this way, and we might use these operations to define the set of
such expressions. This is an inductive definition.

Definition A.3 (Parexpressions). The set of parexpressions is


inductively defined as follows:

1. ∅ is a parexpression.

2. If p is a parexpression, then so is (p).

3. If p and p′ are parexpressions ≠ ∅, then so is pp′.

4. Nothing else is a parexpression.

(Note that we have not yet proved that every balanced paren-
thesis expression is a parexpression, although it is quite clear that
every parexpression is a balanced parenthesis expression.)
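The balancing property itself is easy to test mechanically. Here is a short Python sketch (ours, not from the text) of the usual counter test, which accepts exactly the balanced strings:

    def is_balanced(s):
        # the running count of "(" minus ")" must never go negative
        # and must end at 0
        depth = 0
        for c in s:
            depth += 1 if c == "(" else -1
            if depth < 0:
                return False
        return depth == 0

    assert is_balanced("()(())")
    assert not is_balanced("(()(")
    assert is_balanced("")          # the empty sequence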
The key feature of inductive definitions is that if you want to
prove something about all parexpressions, the definition tells you
which cases you must consider. For instance, if you are told that
q is a parexpression, the inductive definition tells you what q can
look like: q can be ∅, it can be (p) for some other parexpression p, or it can be pp′ for two parexpressions p and p′ ≠ ∅. Because of
clause (4), those are all the possibilities.
When proving claims about all of an inductively defined set,
the strong form of induction becomes particularly important. For

instance, suppose we want to prove that for every parexpression


of length n, the number of ( in it is n/2. This can be seen as a
claim about all n: for every n, the number of ( in any parexpres-
sion of length n is n/2.

Proposition A.4. For any n, the number of ( in a parexpression of


length n is n/2.

Proof. To prove this result by (strong) induction, we have to show


that the following conditional claim is true:

If for every k < n, any parexpression of length k has


k/2 (’s, then any parexpression of length n has n/2
(’s.

To show this conditional, assume that its antecedent is true, i.e.,


assume that for any k < n, parexpressions of length k contain k/2 (’s. We call this assumption the inductive hypothesis. We want to
show the same is true for parexpressions of length n.
So suppose q is a parexpression of length n. Because parex-
pressions are inductively defined, we have three cases: (1) q is
∅, (2) q is (p) for some parexpression p, or (3) q is pp′ for some parexpressions p and p′ ≠ ∅.

1. q is ∅. Then n = 0, and the number of ( in q is also 0. Since


0 = 0/2, the claim holds.

2. q is (p) for some parexpression p. Since q contains two


more symbols than p, len(p) = n − 2, in particular, len(p) <
n, so the inductive hypothesis applies: the number of ( in
p is len(p)/2. The number of ( in q is 1 + the number of (
in p, i.e., 1 + len(p)/2, and since len(p) = n − 2, this gives
1 + (n − 2)/2 = n/2.

3. q is pp′ for some parexpressions p and p′ ≠ ∅. Since neither p nor p′ is ∅, both len(p) and len(p′) < n. Thus the induc-
tive hypothesis applies in each case: The number of ( in p
is len(p)/2, and the number of ( in p′ is len(p′)/2. On the other hand, the number of ( in q is obviously the sum of the numbers of ( in p and p′, since q = pp′. Hence, the number of ( in q is len(p)/2 + len(p′)/2 = (len(p) + len(p′))/2 = len(pp′)/2 = n/2.

In each case, we’ve shown that the number of ( in q is n/2 (on


the basis of the inductive hypothesis). By strong induction, the
proposition follows. 
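Proposition A.4 can also be tested empirically by generating parexpressions according to the three clauses of Definition A.3 (a sketch of our own; the generator and its parameters are illustrative, and it may harmlessly concatenate empty pieces, which still yields a parexpression):

    import random

    def random_parexpression(depth=4):
        # clause (1): the empty sequence
        if depth == 0 or random.random() < 0.3:
            return ""
        # clause (2): wrap a parexpression in parentheses
        if random.random() < 0.5:
            return "(" + random_parexpression(depth - 1) + ")"
        # clause (3): concatenate two parexpressions
        return (random_parexpression(depth - 1)
                + random_parexpression(depth - 1))

    for _ in range(1000):
        p = random_parexpression()
        assert p.count("(") == len(p) / 2    # Proposition A.4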

A.5 Structural Induction


So far we have used induction to establish results about all natural
numbers. But a corresponding principle can be used directly to
prove results about all elements of an inductively defined set.
This is often called structural induction, because it depends on the
structure of the inductively defined objects.
Generally, an inductive definition is given by (a) a list of “ini-
tial” elements of the set and (b) a list of operations which produce
new elements of the set from old ones. In the case of parexpres-
sions, for instance, the initial object is ∅ and the operations are

o1 (p) = (p)
o2 (q, q′) = qq′

You can even think of the natural numbers N themselves as being


given by an inductive definition: the initial object is 0, and the
operation is the successor function x + 1.
In order to prove something about all elements of an induc-
tively defined set, i.e., that every element of the set has a prop-
erty P , we must:

1. Prove that the initial objects have P .

2. Prove that for each operation o, if the arguments have P ,


so does the result.

For instance, in order to prove something about all parexpres-


sions, we would prove that it is true about ∅, that it is true of (p)
provided it is true of p, and that it is true about qq′ provided it is true of q and q′ individually.

Proposition A.5. The number of ( equals the number of ) in any


parexpression p.

Proof. We use structural induction. Parexpressions are induc-


tively defined, with initial object ∅ and the operations o1 and o2.

1. The claim is true for ∅, since the number of ( in ∅ = 0 and


the number of ) in ∅ also = 0.

2. Suppose the number of ( in p equals the number of ) in p.


We have to show that this is also true for (p), i.e., o 1 (p). But
the number of ( in (p) is 1 + the number of ( in p. And the
number of ) in (p) is 1 + the number of ) in p, so the claim
also holds for (p).

3. Suppose the number of ( in q equals the number of ), and the same is true for q′. The number of ( in o2 (q, q′), i.e., in qq′, is the sum of the numbers of ( in q and q′. The number of ) in o2 (q, q′), i.e., in qq′, is the sum of the numbers of ) in q and q′. Hence, the number of ( in o2 (q, q′) equals the number of ) in o2 (q, q′).

The result follows by structural induction. 
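Structural induction has a computational mirror image: structural recursion. The following Python sketch (an encoding of our own, not from the text) represents parexpressions as trees built from the initial object by o1 and o2, and computes both counts with one case per clause, just as in the proof:

    from dataclasses import dataclass

    @dataclass
    class Empty:      # the initial object ∅
        pass

    @dataclass
    class Wrap:       # o1(p) = (p)
        inner: object

    @dataclass
    class Concat:     # o2(q, q') = qq'
        left: object
        right: object

    def counts(p):
        # (number of "(", number of ")"), by structural recursion
        if isinstance(p, Empty):
            return (0, 0)
        if isinstance(p, Wrap):
            l, r = counts(p.inner)
            return (l + 1, r + 1)
        l1, r1 = counts(p.left)
        l2, r2 = counts(p.right)
        return (l1 + l2, r1 + r2)

    # Example: (()) followed by (): equal counts, as the proposition says.
    q = Concat(Wrap(Wrap(Empty())), Wrap(Empty()))
    assert counts(q)[0] == counts(q)[1]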


APPENDIX B

Biographies
B.1 Georg Cantor
An early biography of Georg Can-
tor (gay-org kahn-tor) claimed that
he was born and found on a ship
that was sailing for Saint Peters-
burg, Russia, and that his parents
were unknown. This, however, is
not true, though he was indeed born in Saint Petersburg in 1845.
Cantor received his doctorate
in mathematics at the University of
Berlin in 1867. He is known for his
work in set theory, and is credited
with founding set theory as a dis-
tinctive research discipline. He was the first to prove that there are infinite sets of different sizes. His theories, and especially his theory of infinities, caused much debate among mathematicians at the time, and his work was controversial.

Fig. B.1: Georg Cantor
Cantor’s religious beliefs and his mathematical work were in-
extricably tied; he even claimed that the theory of transfinite num-
bers had been communicated to him directly by God. In later


life, Cantor suffered from mental illness. Beginning in 1884, and


more frequently towards his later years, Cantor was hospitalized.
The heavy criticism of his work, including a falling out with the
mathematician Leopold Kronecker, led to depression and a lack
of interest in mathematics. During depressive episodes, Cantor
would turn to philosophy and literature, and even published a
theory that Francis Bacon was the author of Shakespeare’s plays.
Cantor died on January 6, 1918, in a sanatorium in Halle.

Further Reading For full biographies of Cantor, see Dauben


(1990) and Grattan-Guinness (1971). Cantor’s radical views are
also described in the BBC Radio 4 program A Brief History of
Mathematics (du Sautoy, 2014). If you’d like to hear about Can-
tor’s theories in rap form, see Rose (2012).

B.2 Alonzo Church


Alonzo Church was born in Wash-
ington, DC on June 14, 1903. In
early childhood, an air gun incident
left Church blind in one eye. He
finished preparatory school in Con-
necticut in 1920 and began his uni-
versity education at Princeton that
same year. He completed his doc-
toral studies in 1927. After a cou-
ple years abroad, Church returned
to Princeton. Church was known as exceedingly polite and careful. His blackboard writing was immaculate, and he would preserve important papers by carefully covering them in Duco cement. Outside of his academic pursuits, he enjoyed reading science fiction magazines and was not afraid to write to the editors if he spotted any inaccuracies in the writing.

Fig. B.2: Alonzo Church

Church’s academic achievements were great. Together with


his students Stephen Kleene and Barkley Rosser, he developed
a theory of effective calculability, the lambda calculus, indepen-
dently of Alan Turing’s development of the Turing machine. The
two definitions of computability are equivalent, and give rise to
what is now known as the Church-Turing Thesis, that a function
of the natural numbers is effectively computable if and only if
it is computable via Turing machine (or lambda calculus). He
also proved what is now known as Church’s Theorem: The deci-
sion problem for the validity of first-order formulas is unsolvable.
Church continued his work into old age. In 1967 he left
Princeton for UCLA, where he was professor until his retirement
in 1990. Church passed away on August 11, 1995 at the age of 92.

Further Reading For a brief biography of Church, see Ender-


ton (forthcoming). Church’s original writings on the lambda
calculus and the Entscheidungsproblem (Church’s Thesis) are
Church (1936a,b). Aspray (1984) records an interview with Church
about the Princeton mathematics community in the 1930s. Church
wrote a series of book reviews of the Journal of Symbolic Logic from
1936 until 1979. They are all archived on John MacFarlane’s web-
site (MacFarlane, 2015).

B.3 Gerhard Gentzen


Gerhard Gentzen is known primar-
ily as the creator of structural proof
theory, and specifically the creation
of the natural deduction and se-
quent calculus proof systems. He
was born on November 24, 1909 in
Greifswald, Germany. Gerhard was
homeschooled for three years before attending preparatory school, where he was behind most of his classmates in terms of education.

Fig. B.3: Gerhard Gentzen

Despite this, he was a brilliant student and showed a strong


aptitude for mathematics. His interests were varied, and he, for
instance, also wrote poems for his mother and plays for the school
theatre.
Gentzen began his university studies at the University of Greif-
swald, but moved around to Göttingen, Munich, and Berlin. He
received his doctorate in 1933 from the University of Göttingen
under Hermann Weyl. (Paul Bernays supervised most of his
work, but was dismissed from the university by the Nazis.) In
1934, Gentzen began work as an assistant to David Hilbert. That
same year he developed the sequent calculus and natural deduc-
tion proof systems, in his papers Untersuchungen über das logische
Schließen I–II [Investigations Into Logical Deduction I–II]. He proved
the consistency of the Peano axioms in 1936.
Gentzen’s relationship with the Nazis is complicated. At the
same time his mentor Bernays was forced to leave Germany, Gentzen
joined the university branch of the SA, the Nazi paramilitary or-
ganization. Like many Germans, he was a member of the Nazi
party. During the war, he served as a telecommunications officer
for the air intelligence unit. However, in 1942 he was released
from duty due to a nervous breakdown. It is unclear whether
or not Gentzen’s loyalties lay with the Nazi party, or whether he
joined the party in order to ensure academic success.
In 1943, Gentzen was offered an academic position at the
Mathematical Institute of the German University of Prague, which
he accepted. However, in 1945 the citizens of Prague revolted
against German occupation. Soviet forces arrived in the city and
arrested all the professors at the university. Because of his mem-
bership in Nazi organizations, Gentzen was taken to a forced
labour camp. He died of malnutrition while in his cell on August
4, 1945 at the age of 35.

Further Reading For a full biography of Gentzen, see Menzler-


Trott (2007). An interesting read about mathematicians under
Nazi rule, which gives a brief note about Gentzen’s life, is

Segal (2014). Gentzen’s papers on logical deduction are available


in the original German (Gentzen, 1935a,b). English translations
of Gentzen’s papers have been collected in a single volume by
Szabo (1969), which also includes a biographical sketch.

B.4 Kurt Gödel


Kurt Gödel (ger-dle) was born
on April 28, 1906 in Brünn in
the Austro-Hungarian empire (now
Brno in the Czech Republic). Due
to his inquisitive and bright na-
ture, young Kurtele was often called
“Der kleine Herr Warum” (Little
Mr. Why) by his family. He excelled
in academics from primary school
onward, where he got less than the
highest grade only in mathematics.
Gödel was often absent from school
due to poor health and was exempt
from physical education. Gödel was diagnosed with rheumatic fever during his childhood. Throughout his life, he believed this permanently affected his heart despite medical assessment saying otherwise.

Fig. B.4: Kurt Gödel
Gödel began studying at the University of Vienna in 1920 and
completed his doctoral studies in 1929. He first intended to study
physics, but his interests soon moved to mathematics and espe-
cially logic, in part due to the influence of the philosopher Rudolf
Carnap. His dissertation, written under the supervision of Hans
Hahn, proved the completeness theorem of first-order predicate
logic with identity. Only a couple years later, his most famous
results were published—the first and second incompleteness the-
orems (Gödel, 1931). During his time in Vienna, Gödel was also
involved with the Vienna Circle, a group of scientifically-minded
philosophers.

In 1938, Gödel married Adele Nimbursky. His parents were


not pleased: not only was she six years older than him and al-
ready divorced, but she worked as a dancer in a nightclub. Social
pressures did not affect Gödel, however, and they remained hap-
pily married until his death.
After Nazi Germany annexed Austria in 1938, Gödel and
Adele immigrated to the United States, where he took up a po-
sition at the Institute for Advanced Study in Princeton, New Jer-
sey. Despite his introversion and eccentric nature, Gödel’s time
at Princeton was collaborative and fruitful. He published essays
in set theory, philosophy and physics. Notably, he struck up a par-
ticularly strong friendship with his colleague at the IAS, Albert
Einstein.
In his later years, Gödel’s mental health deteriorated. His
wife’s hospitalization in 1977 meant she was no longer able to
cook his meals for him. Succumbing to both paranoia and anorexia,
and deathly afraid of being poisoned, Gödel refused to eat. He
died of starvation on January 14, 1978 in Princeton.

Further Reading For a complete biography of Gödel’s life, see John Dawson (1997). For further biographical pieces,
as well as essays about Gödel’s contributions to logic and philos-
ophy, see Wang (1990), Baaz et al. (2011), Takeuti et al. (2003),
and Sigmund et al. (2007).
Gödel’s PhD thesis is available in the original German (Gödel,
1929). The original text of the incompleteness theorems is (Gödel,
1931). All of Gödel’s published and unpublished writings, as well
as a selection of correspondence, are available in English in his
Collected Papers Feferman et al. (1986, 1990).
For a detailed treatment of Gödel’s incompleteness theorems,
see Smith (2013). For an informal, philosophical discussion of
Gödel’s theorems, see Mark Linsenmayer’s podcast (Linsenmayer,
2014).

B.5 Emmy Noether


Emmy Noether (ner-ter) was born
in Erlangen, Germany, on March
23, 1882, to an upper-middle
class scholarly family. Hailed as
the “mother of modern algebra,”
Noether made groundbreaking con-
tributions to both mathematics and
physics, despite significant barriers
to women’s education. In Germany
at the time, young girls were meant
to be educated in arts and were not
allowed to attend college prepara-
tory schools. However, after au-
diting classes at the Universities of Göttingen and Erlangen (where her father was professor of mathematics), Noether was eventually able to enrol as a student at Erlangen in 1904, when their policy was updated to allow female students. She received her doctorate in mathematics in 1907.

Fig. B.5: Emmy Noether
Despite her qualifications, Noether experienced much resis-
tance during her career. From 1908–1915, she taught at Erlangen
without pay. During this time, she caught the attention of David
Hilbert, one of the world’s foremost mathematicians of the time,
who invited her to Göttingen. However, women were prohibited
from obtaining professorships, and she was only able to lecture
under Hilbert’s name, again without pay. During this time she
proved what is now known as Noether’s theorem, which is still
used in theoretical physics today. Noether was finally granted
the right to teach in 1919. Hilbert’s response to continued resis-
tance of his university colleagues reportedly was: “Gentlemen,
the faculty senate is not a bathhouse.”
In the later 1920s, she concentrated on work in abstract alge-
bra, and her contributions revolutionized the field. In her proofs
she often made use of the so-called ascending chain condition,
which states that there is no infinite strictly increasing chain of

certain sets. For instance, certain algebraic structures now known


as Noetherian rings have the property that there are no infinite sequences of ideals I1 ⊊ I2 ⊊ · · · . The condition can be general-
ized to any partial order (in algebra, it concerns the special case
of ideals ordered by the subset relation), and we can also con-
sider the dual descending chain condition, where every strictly
decreasing sequence in a partial order eventually ends. If a par-
tial order satisfies the descending chain condition, it is possible
to use induction along this order in a similar way in which we
can use induction along the < order on N. Such orders are called
well-founded or Noetherian, and the corresponding proof principle
Noetherian induction.
Noether was Jewish, and when the Nazis came to power in
1933, she was dismissed from her position. Luckily, Noether was
able to emigrate to the United States for a temporary position at
Bryn Mawr, Pennsylvania. During her time there she also lectured
at Princeton, although she found the university to be unwelcom-
ing to women (Dick, 1981, 81). In 1935, Noether underwent an
operation to remove a uterine tumour. She died from an infection
as a result of the surgery, and was buried at Bryn Mawr.

Further Reading For a biography of Noether, see Dick (1981).


The Perimeter Institute for Theoretical Physics has their lectures
on Noether’s life and influence available online (Perimeter Institute, 2015).
If you’re tired of reading, Stuff You Missed in History Class has a
podcast on Noether’s life and influence (Frey and Wilson, 2015).
The collected works of Noether are available in the original Ger-
man (Jacobson, 1983).

B.6 Bertrand Russell


Bertrand Russell is hailed as one of the founders of modern ana-
lytic philosophy. Born May 18, 1872, Russell was not only known
for his work in philosophy and logic, but wrote many popular

books in various subject areas. He was also an ardent political


activist throughout his life.
Russell was born in Trellech,
Monmouthshire, Wales. His par-
ents were members of the British
nobility. They were free-thinkers,
and even made friends with the rad-
icals in Boston at the time. Un-
fortunately, Russell’s parents died
when he was young, and Russell
was sent to live with his grandpar-
ents. There, he was given a re-
ligious upbringing (something his
parents had wanted to avoid at all
costs). His grandmother was very strict in all matters of morality. During adolescence he was mostly homeschooled by private tutors.

Fig. B.6: Bertrand Russell
Russell’s influence in analytic philosophy, and especially logic,
is tremendous. He studied mathematics and philosophy at Trin-
ity College, Cambridge, where he was influenced by the math-
ematician and philosopher Alfred North Whitehead. In 1910,
Russell and Whitehead published the first volume of Principia
Mathematica, where they championed the view that mathematics
is reducible to logic. He went on to publish hundreds of books,
essays and political pamphlets. In 1950, he won the Nobel Prize
for literature.
Russell was deeply entrenched in politics and social activism.
During World War I he was arrested and sent to prison for six
months due to pacifist activities and protest. While in prison,
he was able to write and read, and claims to have found the ex-
perience “quite agreeable.” He remained a pacifist throughout
his life, and was again incarcerated for attending a nuclear dis-
armament rally in 1961. He also survived a plane crash in 1948,
where the only survivors were those sitting in the smoking sec-
tion. As such, Russell claimed that he owed his life to smoking.
Russell was married four times, but had a reputation for carrying

on extra-marital affairs. He died on February 2, 1970 at the age


of 97 in Penrhyndeudraeth, Wales.

Further Reading Russell wrote an autobiography in three parts,


spanning his life from 1872–1967 (Russell, 1967, 1968, 1969).
The Bertrand Russell Research Centre at McMaster University
is home of the Bertrand Russell archives. See their website at
Duncan (2015), for information on the volumes of his collected
works (including searchable indexes), and archival projects. Rus-
sell’s paper On Denoting (Russell, 1905) is a classic of 20th century
analytic philosophy.
The Stanford Encyclopedia of Philosophy entry on Russell
(Irvine, 2015) has sound clips of Russell speaking on Desire and
Political theory. Many video interviews with Russell are available
online. To see him talk about smoking and being involved in a
plane crash, e.g., see Russell (n.d.). Some of Russell’s works,
including his Introduction to Mathematical Philosophy are available
as free audiobooks on LibriVox (n.d.).

B.7 Alfred Tarski


Alfred Tarski was born on January
14, 1901 in Warsaw, Poland (then
part of the Russian Empire). Often
described as “Napoleonic,” Tarski
was boisterous, talkative, and in-
tense. His energy was often re-
flected in his lectures—he once set
fire to a wastebasket while disposing
of a cigarette during a lecture, and
was forbidden from lecturing in that
building again.
Tarski had a thirst for knowl-
edge from a young age.

Fig. B.7: Alfred Tarski

Although later in life he would tell students

that he studied logic because it was the only class in which he


got a B, his high school records show that he got A’s across the
board—even in logic. He studied at the University of Warsaw
from 1918 to 1924. Tarski first intended to study biology, but
became interested in mathematics, philosophy, and logic, as the
university was the center of the Warsaw School of Logic and Phi-
losophy. Tarski earned his doctorate in 1924 under the supervi-
sion of Stanisław Leśniewski.
Before emigrating to the United States in 1939, Tarski com-
pleted some of his most important work while working as a sec-
ondary school teacher in Warsaw. His work on logical conse-
quence and logical truth were written during this time. In 1939,
Tarski was visiting the United States for a lecture tour. During
his visit, Germany invaded Poland, and because of his Jewish her-
itage, Tarski could not return. His wife and children remained in
Poland until the end of the war, but were then able to emigrate to
the United States as well. Tarski taught at Harvard, the College
of the City of New York, and the Institute for Advanced Study
at Princeton, and finally the University of California, Berkeley.
There he founded the multidisciplinary program in Logic and
the Methodology of Science. Tarski died on October 26, 1983 at
the age of 82.

Further Reading For more on Tarski’s life, see the biogra-


phy Alfred Tarski: Life and Logic (Feferman and Feferman, 2004).
Tarski’s seminal works on logical consequence and truth are avail-
able in English in (Corcoran, 1983). All of Tarski’s original works
have been collected into a four volume series, (Tarski, 1981).

B.8 Alan Turing


Alan Turing was born in Maida Vale, London, on June 23, 1912.
He is considered the father of theoretical computer science. Tur-
ing’s interest in the physical sciences and mathematics started at
a young age. However, as a boy his interests were not represented

well in his schools, where emphasis was placed on literature and


classics. Consequently, he did poorly in school and was repri-
manded by many of his teachers.
Turing attended King’s College,
Cambridge as an undergraduate,
where he studied mathematics. In
1936 Turing developed (what is now
called) the Turing machine as an
attempt to precisely define the no-
tion of a computable function and
to prove the undecidability of the
decision problem. He was beaten
to the result by Alonzo Church,
who proved the result via his own
lambda calculus. Turing’s paper was still published with reference to Church’s result.

Fig. B.8: Alan Turing

Church invited Turing to Princeton, where he
spent 1936–1938, and obtained a doctorate under Church.
Despite his interest in logic, Turing’s earlier interests in phys-
ical sciences remained prevalent. His practical skills were put to
work during his service with the British cryptanalytic department
at Bletchley Park during World War II. Turing was a central figure
in cracking the cypher used by German Naval communications—
the Enigma code. Turing’s expertise in statistics and cryptogra-
phy, together with the introduction of electronic machinery, gave
the team the ability to crack the code by creating a de-crypting
machine called a “bombe.” His ideas also helped in the creation
of the world’s first programmable electronic computer, the Colos-
sus, also used at Bletchley Park to break the German Lorenz
cypher.
Turing was gay. Nevertheless, in 1942 he proposed to Joan
Clarke, one of his teammates at Bletchley Park, but later broke off
the engagement and confessed to her that he was homosexual. He
had several lovers throughout his lifetime, although homosexual
acts were then criminal offences in the UK. In 1952, Turing’s
house was burgled by a friend of his lover at the time, and when

filing a police report, Turing admitted to having a homosexual


relationship, under the impression that the government was on its way to legalizing homosexual acts. This was not true, and
he was charged with gross indecency. Instead of going to prison,
Turing opted for a hormone treatment that reduced libido. Turing
was found dead on June 8, 1954, of a cyanide overdose—most
likely suicide. He was given a royal pardon by Queen Elizabeth II
in 2013.

Further Reading For a comprehensive biography of Alan Tur-


ing, see Hodges (2014). Turing’s life and work inspired a play,
Breaking the Code, which was produced in 1996 for TV starring
Derek Jacobi as Turing. The Imitation Game, an Academy Award
nominated film starring Benedict Cumberbatch and Keira Knight-
ley, is also loosely based on Alan Turing’s life and time at Bletch-
ley Park (Tyldum, 2014).
Radiolab (2012) has several podcasts on Turing’s life and
work. BBC Horizon’s documentary The Strange Life and Death
of Dr. Turing is available to watch online (Sykes, 1992). (Theelen,
2012) is a short video of a working LEGO Turing Machine—
made to honour Turing’s centenary in 2012.
Turing’s original paper on Turing machines and the decision
problem is Turing (1937).

B.9 Ernst Zermelo


Ernst Zermelo was born on July 27, 1871 in Berlin, Germany.
He had five sisters, though his family suffered from poor health
and only three survived to adulthood. His parents also passed
away when he was young, leaving him and his siblings orphans
when he was seventeen. Zermelo had a deep interest in the arts,
and especially in poetry. He was known for being sharp, witty,
and critical. His most celebrated mathematical achievements in-
clude the introduction of the axiom of choice (in 1904), and his
axiomatization of set theory (in 1908).

Zermelo’s interests at university


were varied. He took courses in
physics, mathematics, and philoso-
phy. Under the supervision of Her-
mann Schwarz, Zermelo completed
his dissertation Investigations in the
Calculus of Variations in 1894 at the
University of Berlin. In 1897, he
decided to pursue more studies at
the University of Göttingen, where he
was heavily influenced by the foun-
dational work of David Hilbert. In
1899 he became eligible for a professorship, but did not get one until eleven years later—possibly due to his strange demeanour and “nervous haste.”

Fig. B.9: Ernst Zermelo
Zermelo finally received a paid professorship at the Univer-
sity of Zurich in 1910, but was forced to retire in 1916 due to
tuberculosis. After his recovery, he was given an honorary pro-
fessorship at the University of Freiburg in 1921. During this time
he worked on foundational mathematics. He became irritated
with the works of Thoralf Skolem and Kurt Gödel, and publicly
criticized their approaches in his papers. He was dismissed from
his position at Freiburg in 1935, due to his unpopularity and his
opposition to Hitler’s rise to power in Germany.
The later years of Zermelo’s life were marked by isolation. Af-
ter his dismissal in 1935, he abandoned mathematics. He moved
to the country where he lived modestly. He married in 1944, and
became completely dependent on his wife as he was going blind.
Zermelo lost his sight completely by 1951. He passed away in
Günterstal, Germany, on May 21, 1953.

Further Reading For a full biography of Zermelo, see Ebbing-


haus (2015). Zermelo’s seminal 1904 and 1908 papers are avail-
able to read in the original German (Zermelo, 1904, 1908). Zer-

melo’s collected works, including his writing on physics, are avail-


able in English translation in (Ebbinghaus et al., 2010; Ebbing-
haus and Kanamori, 2013).
Glossary
anti-symmetric R is anti-symmetric iff, whenever both Rxy and
Ryx, then x = y; in other words: if x ≠ y then not Rxy
or not Ryx (see section 2.2).
assumption A formula that stands topmost in a derivation, also
called an initial formula. It may be discharged or undis-
charged (see section 7.2).
asymmetric R is asymmetric if for no pair x, y ∈ X we have Rxy
and Ryx (see section 2.3).

bijection A function that is both surjective and injective (see


section 3.2).
binary relation A subset of X 2 ; we write Rxy (or xRy) for ⟨x, y⟩ ∈
R (see section 2.1).
bound Occurrence of a variable within the scope of a quantifier
that uses the same variable (see section 5.7).

Cartesian product (X ×Y ) Set of all pairs of elements of X and


Y ; X × Y = {⟨x, y⟩ : x ∈ X and y ∈ Y } (see section 1.6).
Church-Turing Theorem States that there is no Turing machine
which decides if a given sentence of first-order logic is valid or not (see section 11.7).
Church-Turing Thesis states that anything computable via an ef-
fective procedure is Turing computable (see section 10.9).
closed A set of sentences Γ is closed iff, whenever Γ ⊨ A then A ∈ Γ. The set {A : Γ ⊨ A} is the closure of Γ (see


section 6.1).
compactness theorem States that every finitely satisfiable set of
sentences is satisfiable (see section 8.9).
completeness Property of a proof system; it is complete if, when-
ever Γ entails A, then there is also a derivation that es-
tablishes Γ ⊢ A; equivalently, iff every consistent set of
sentences is satisfiable (see section 8.1).
completeness theorem States that first-order logic is complete:
every consistent set of sentences is satisfiable.
composition (g ◦ f ) The function resulting from “chaining to-
gether” f and g ; (g ◦ f )(x) = g (f (x)) (see section 3.4).
connected R is connected if for all x, y ∈ X with x ≠ y, either Rxy or Ryx (see section 2.2).
consistent A set of sentences Γ is consistent iff Γ ⊬ ⊥, otherwise
inconsistent (see section 7.4).
covered A structure in which every element of the domain is the
value of some closed term (see section 5.9).

decision problem Problem of deciding if a given sentence of first-


order logic is valid or not (see Church-Turing Theo-
rem).
deduction theorem Relates entailment and provability of a sen-
tence from an assumption with that of a corresponding
conditional. In the semantic form (Theorem 5.48), it
states that Γ ∪ {A} ⊨ B iff Γ ⊨ A → B. In the proof-theoretic form, it states that Γ ∪ {A} ⊢ B iff Γ ⊢ A → B.
derivability (Γ ⊢ A) A is derivable from Γ if there is a derivation
with end-formula A and in which every assumption is
either discharged or is in Γ (see section 7.4).
derivation A tree of formulas in which every formula is either an
assumption or follows from the trees above it by a rule
of inference (see section 7.2).
difference (X \ Y ) the set of all elements of X which are not
also elements of Y : X \ Y = {x : x ∈ X and x ∉ Y } (see
section 1.4).

discharged An assumption in a derivation may be discharged by


an inference rule below it (the rule and the assumption
are then assigned a matching label, e.g., [A]2 ). If it is not
discharged, it is called undischarged (see section 7.2).
disjoint two sets with no elements in common (see section 1.4).
domain (of a function) (dom(f )) The set of objects for which
a (partial) function is defined (see section 3.1).
domain (of a structure) (|M|) Non-empty set from which a
structure takes assignments and values of variables (see
section 5.9).

eigenvariable A special constant symbol in a premise of a ∃Elim


or ∀Intro inference which may not appear in the conclusion or any undischarged assumption (see section 7.2).
entailment (Γ ⊨ A) A set of sentences Γ entails a sentence A
iff for every structure M with M |= Γ, M |= A (see sec-
tion 5.13).
enumeration A possibly infinite, possibly gappy list of all ele-
ments of a set X ; formally a surjective function f : N ⇀ X (see section 4.2).
equinumerous X and Y are equinumerous iff there is a total
bijection from X to Y (see section 4.5).
equivalence relation a reflexive, symmetric, and transitive rela-
tion (see section 2.2).
extensionality (of satisfaction) Whether or not a formula A is
satisfied depends only on the assignments to the non-
logical symbols and free variables that actually occur
in A.
extensionality (of sets) Sets X and Y are identical, X = Y , iff
every element of X is also an element of Y , and vice
versa (see section 1.1).

finitely satisfiable Γ is finitely satisfiable iff every finite Γ0 ⊆ Γ


is satisfiable (see section 8.9).
formula Expressions of a first-order language L which express re-
lations or properties, or are true or false (see section 5.3).

free An occurrence of a variable that is not bound (see sec-


tion 5.7).
free for A term t is free for x in A if none of the free occurrences
of x in A occur in the scope of a quantifier that binds a
variable in t (see section 5.8).
function (f : X → Y ) A mapping of each element of a domain
(of a function) X to an element of the codomain Y (see
section 3.1).
graph (of a function) the relation R f ⊆ X ×Y defined by R f =
{⟨x, y⟩ : f (x) = y }, if f : X ⇀ Y (see section 3.7).
halting problem The problem of determining (for any e , n)
whether the Turing machine Me halts for an input of n
strokes (see section 11.3).
injective f : X → Y is injective iff for each y ∈ Y there is at most
one x ∈ X such that f (x) = y; equivalently, if whenever x ≠ x′ then f (x) ≠ f (x′) (see section 3.2).
intersection (X ∩ Y ) The set of all things which are elements
of both X and Y : X ∩ Y = {x : x ∈ X ∧ x ∈ Y } (see
section 1.4).
inverse function If f : X → Y is a bijection, f −1 : Y → X is the
function with f −1 (y) = the unique x ∈ X such that f (x) = y (see section 3.3).
inverse relation (R −1 ) The relation R “turned around”; R −1 =
{⟨y, x⟩ : ⟨x, y⟩ ∈ R} (see section 2.5).
irreflexive R is irreflexive if, for no x ∈ X , Rxx (see section 2.3).
Löwenheim-Skolem Theorem States that every satisfiable set
of sentences has a countable model (see section 8.10).
linear order A connected partial order (see section 2.3).
maximally consistent set A set of sentences is maximally con-
sistent iff it is consistent, and adding any sentence to it
makes it inconsistent (see section 8.3).
model A structure in which every sentence in Γ is true is a model
of Γ (see section 6.2).

partial function (f : X ⇀ Y ) A partial function is a mapping


which assigns to every element of X at most one element
of Y . If f assigns an element of Y to x ∈ X , f (x) is
defined, and otherwise undefined (see section 3.6).
partial order A reflexive, anti-symmetric, transitive relation (see
section 2.3).
power set (℘(X )) The set consisting of all subsets of a set X ,
℘(X ) = {x : x ⊆ X } (see section 1.3).
preorder A reflexive and transitive relation (see section 2.3).

range (ran(f )) the subset of the codomain that is actually output


by f ; ran(f ) = {y ∈ Y : f (x) = y for some x ∈ X } (see
section 3.1).
reflexive R is reflexive iff, for every x ∈ X , Rxx (see section 2.2).

satisfiable A set of sentences Γ is satisfiable if M |= Γ for some


structure M, otherwise it is unsatisfiable (see section 5.13).
sentence A formula with no free variables (see section 5.7).
sequence (finite) (X ∗ ) A finite string of elements of X ; an ele-
ment of X n for some n (see section 1.2).
sequence (infinite) (X ω ) A gapless, unending sequence of el-
ements of X ; formally, a function s : Z+ → X (see sec-
tion 1.2).
set A collection of objects, considered independently of the way
it is specified, of the order of the objects in the set, and
of their multiplicity (see section 1.1).
soundness Property of a proof system: it is sound if whenever
Γ ⊢ A then Γ ⊨ A (see section 7.6).
strict linear order A connected strict order (see section 2.3).
strict order An irreflexive, asymmetric, and transitive relation
(see section 2.3).
structure (M) An interpretation of a first-order language, con-
sisting of a domain (of a structure) and assignments of
the constant, predicate and function symbols of the lan-
guage (see section 5.9).

subformula Part of a formula which is itself a formula (see sec-


tion 5.6).
subset (X ⊆ Y ) A set every element of which is an element of a
given set Y (see section 1.3).
surjective f : X → Y is surjective iff the range of f is all of Y ,
i.e., for every y ∈ Y there is at least one x ∈ X such
that f (x) = y (see section 3.2).
symmetric R is symmetric iff, whenever Rxy then also Ryx (see
section 2.2).

theorem (⊢ A) A formula A is a theorem (of logic) if there is


a derivation of A with all assumptions discharged; or a
theorem of Γ if Γ ` A (see section 7.4).
total order see linear order.
transitive R is transitive iff, whenever Rxy and Ryz , then also
Rxz (see section 2.2).
transitive closure (R + ) the smallest transitive relation contain-
ing R (see section 2.5).

undischarged see discharged.


union (X ∪ Y ) The set of all elements of X and Y together:
X ∪ Y = {x : x ∈ X ∨ x ∈ Y } (see section 1.4).

validity (⊨ A) A sentence A is valid iff M |= A for every struc-


ture M (see section 5.13).
variable assignment A function which maps each variable to an
element of |M| (see section 5.11).

x-variant Two variable assignments are x-variants, s ∼x s′, if they


differ at most in what they assign to x (see section 5.11).
Photo Credits
Georg Cantor, p. 229: Portrait of Georg Cantor by Otto Zeth
courtesy of the Universitätsarchiv, Martin-Luther Universität Halle–
Wittenberg. UAHW Rep. 40-VI, Nr. 3 Bild 102.
Alonzo Church, p. 230: Portrait of Alonzo Church, undated,
photographer unknown. Alonzo Church Papers; 1924–1995, (C0948)
Box 60, Folder 3. Manuscripts Division, Department of Rare
Books and Special Collections, Princeton University Library. © Princeton University. The Open Logic Project has obtained per-
mission to use this image for inclusion in non-commercial OLP-
derived materials. Permission from Princeton University is re-
quired for any other use.
Gerhard Gentzen, p. 231: Portrait of Gerhard Gentzen play-
ing ping-pong courtesy of Ekhart Mentzler-Trott.
Kurt Gödel, p. 233: Portrait of Kurt Gödel, ca. 1925, photog-
rapher unknown. From the Shelby White and Leon Levy Archives
Center, Institute for Advanced Study, Princeton, NJ, USA, on de-
posit at Princeton University Library, Manuscript Division, De-
partment of Rare Books and Special Collections, Kurt Gödel Pa-
pers, (C0282), Box 14b, #110000. The Open Logic Project has
obtained permission from the Institute’s Archives Center to use
this image for inclusion in non-commercial OLP-derived materi-
als. Permission from the Archives Center is required for any other
use.
Emmy Noether, p. 235: Portrait of Emmy Noether, ca. 1922,


courtesy of the Abteilung für Handschriften und Seltene Drucke,


Niedersächsische Staats- und Universitätsbibliothek Göttingen,
Cod. Ms. D. Hilbert 754, Bl. 14 Nr. 73. Restored from an original
scan by Joel Fuller.
Bertrand Russell, p. 237: Portrait of Bertrand Russell, ca. 1907,
courtesy of the William Ready Division of Archives and Research
Collections, McMaster University Library. Bertrand Russell Archives,
Box 2, f. 4.
Alfred Tarski, p. 238: Passport photo of Alfred Tarski, 1939.
Cropped and restored from a scan of Tarski’s passport by Joel
Fuller. Original courtesy of Bancroft Library, University of Cal-
ifornia, Berkeley. Alfred Tarski Papers, Banc MSS 84/49. The
Open Logic Project has obtained permission to use this image
for inclusion in non-commercial OLP-derived materials. Permis-
sion from Bancroft Library is required for any other use.
Alan Turing, p. 240: Portrait of Alan Mathison Turing by
Elliott & Fry, 29 March 1951, NPG x82217, © National Portrait
Gallery, London. Used under a Creative Commons BY-NC-ND
3.0 license.
Ernst Zermelo, p. 242: Portrait of Ernst Zermelo, ca. 1922,
courtesy of the Abteilung für Handschriften und Seltene Drucke,
Niedersächsische Staats- und Universitätsbibliothek Göttingen,
Cod. Ms. D. Hilbert 754, Bl. 6 Nr. 25.
Bibliography
Aspray, William. 1984. The Princeton mathematics community
in the 1930s: Alonzo Church. URL http://www.princeton.
edu/mudd/finding_aids/mathoral/pmc05.htm. Interview.

Baaz, Matthias, Christos H. Papadimitriou, Hilary W. Putnam,


Dana S. Scott, and Charles L. Harper Jr. 2011. Kurt Gödel and
the Foundations of Mathematics: Horizons of Truth. Cambridge:
Cambridge University Press.

Church, Alonzo. 1936a. A note on the Entscheidungsproblem.


Journal of Symbolic Logic 1: 40–41.

Church, Alonzo. 1936b. An unsolvable problem of elementary


number theory. American Journal of Mathematics 58: 345–363.

Corcoran, John. 1983. Logic, Semantics, Metamathematics. Indi-


anapolis: Hackett, 2nd ed.

Dauben, Joseph. 1990. Georg Cantor: His Mathematics and Philoso-


phy of the Infinite. Princeton: Princeton University Press.

Dick, Auguste. 1981. Emmy Noether 1882–1935. Boston:


Birkhäuser.

du Sautoy, Marcus. 2014. A brief history of mathematics:


Georg Cantor. URL http://www.bbc.co.uk/programmes/
b00ss1j0. Audio Recording.


Duncan, Arlene. 2015. The Bertrand Russell Research Centre.


URL http://russell.mcmaster.ca/.

Ebbinghaus, Heinz-Dieter. 2015. Ernst Zermelo: An Approach to his


Life and Work. Berlin: Springer-Verlag.

Ebbinghaus, Heinz-Dieter, Craig G. Fraser, and Akihiro


Kanamori. 2010. Ernst Zermelo. Collected Works, vol. 1. Berlin:
Springer-Verlag.

Ebbinghaus, Heinz-Dieter and Akihiro Kanamori. 2013. Ernst


Zermelo: Collected Works, vol. 2. Berlin: Springer-Verlag.

Enderton, Herbert B. forthcoming. Alonzo Church: Life and


Work. In The Collected Works of Alonzo Church. Cambridge: MIT
Press.

Feferman, Anita and Solomon Feferman. 2004. Alfred Tarski: Life


and Logic. Cambridge: Cambridge University Press.

Feferman, Solomon, John W. Dawson Jr., Stephen C. Kleene, Gre-


gory H. Moore, Robert M. Solovay, and Jean van Heijenoort.
1986. Kurt Gödel: Collected Works. Vol. 1: Publications 1929–1936.
Oxford: Oxford University Press.

Feferman, Solomon, John W. Dawson Jr., Stephen C. Kleene, Gre-


gory H. Moore, Robert M. Solovay, and Jean van Heijenoort.
1990. Kurt Gödel: Collected Works. Vol. 2: Publications 1938–1974.
Oxford: Oxford University Press.

Frey, Holly and Tracy V. Wilson. 2015. Stuff you


missed in history class: Emmy Noether, mathematics trail-
blazer. URL http://www.missedinhistory.com/podcasts/
emmy-noether-mathematics-trailblazer/. Podcast audio.

Gentzen, Gerhard. 1935a. Untersuchungen über das logische


Schließen I. Mathematische Zeitschrift 39: 176–210. English
translation in Szabo (1969), pp. 68–131.

Gentzen, Gerhard. 1935b. Untersuchungen über das logische


Schließen II. Mathematische Zeitschrift 39: 176–210, 405–431.
English translation in Szabo (1969), pp. 68–131.
Gödel, Kurt. 1929. Über die Vollständigkeit des Logikkalküls
[On the completeness of the calculus of logic]. Dissertation,
Universität Wien. Reprinted and translated in Feferman et al.
(1986), pp. 60–101.
Gödel, Kurt. 1931. Über formal unentscheidbare Sätze der Prin-
cipia Mathematica und verwandter Systeme I [On formally unde-
cidable propositions of Principia Mathematica and related sys-
tems I]. Monatshefte für Mathematik und Physik 38: 173–198.
Reprinted and translated in Feferman et al. (1986), pp. 144–
195.
Grattan-Guinness, Ivor. 1971. Towards a biography of Georg
Cantor. Annals of Science 27(4): 345–391.
Hodges, Andrew. 2014. Alan Turing: The Enigma. London: Vin-
tage.
Perimeter Institute. 2015. Emmy Noether: Her life, work,
and influence. URL https://www.youtube.com/watch?v=
tNNyAyMRsgE. Video Lecture.
Irvine, Andrew David. 2015. Sound clips of Bertrand Rus-
sell speaking. URL http://plato.stanford.edu/entries/
russell/russell-soundclips.html.
Jacobson, Nathan. 1983. Emmy Noether: Gesammelte
Abhandlungen—Collected Papers. Berlin: Springer-Verlag.
John Dawson, Jr. 1997. Logical Dilemmas: The Life and Work of
Kurt Gödel. Boca Raton: CRC Press.
LibriVox. n.d. Bertrand Russell. URL https://librivox.
org/author/1508?primary_key=1508&search_category=
author&search_page=1&search_form=get_results. Collec-
tion of public domain audiobooks.

Linsenmayer, Mark. 2014. The partially examined life: Gödel


on math. URL http://www.partiallyexaminedlife.com/
2014/06/16/ep95-godel/. Podcast audio.

MacFarlane, John. 2015. Alonzo Church’s JSL reviews. URL


http://johnmacfarlane.net/church.html.

Menzler-Trott, Eckart. 2007. Logic’s Lost Genius: The Life of Gerhard


Gentzen. Providence: American Mathematical Society.

Radiolab. 2012. The Turing problem. URL http://www.


radiolab.org/story/193037-turing-problem/. Podcast
audio.

Rose, Daniel. 2012. A song about Georg Cantor. URL https://


www.youtube.com/watch?v=QUP5Z4Fb5k4. Audio Recording.

Russell, Bertrand. 1905. On denoting. Mind 14: 479–493.

Russell, Bertrand. 1967. The Autobiography of Bertrand Russell,


vol. 1. London: Allen and Unwin.

Russell, Bertrand. 1968. The Autobiography of Bertrand Russell,


vol. 2. London: Allen and Unwin.

Russell, Bertrand. 1969. The Autobiography of Bertrand Russell,


vol. 3. London: Allen and Unwin.

Russell, Bertrand. n.d. Bertrand Russell on smoking. URL


https://www.youtube.com/watch?v=80oLTiVW_lc. Video
Interview.

Segal, Sanford L. 2014. Mathematicians under the Nazis. Princeton:


Princeton University Press.

Sigmund, Karl, John Dawson, Kurt Mühlberger, Hans Magnus


Enzensberger, and Juliette Kennedy. 2007. Kurt Gödel: Das
Album–The Album. The Mathematical Intelligencer 29(3): 73–
76.

Smith, Peter. 2013. An Introduction to Gödel’s Theorems. Cambridge:


Cambridge University Press.

Sykes, Christopher. 1992. BBC Horizon: The strange life and


death of Dr. Turing. URL https://www.youtube.com/watch?
v=gyusnGbBSHE.

Szabo, Manfred E. 1969. The Collected Papers of Gerhard Gentzen.


Amsterdam: North-Holland.

Takeuti, Gaisi, Nicholas Passell, and Mariko Yasugi. 2003. Mem-


oirs of a Proof Theorist: Gödel and Other Logicians. Singapore:
World Scientific.

Tarski, Alfred. 1981. The Collected Works of Alfred Tarski, vol. I–IV.
Basel: Birkhäuser.

Theelen, Andre. 2012. Lego Turing machine. URL https://www.


youtube.com/watch?v=FTSAiF9AHN4.

Turing, Alan M. 1937. On computable numbers, with an applica-


tion to the “Entscheidungsproblem”. Proceedings of the London
Mathematical Society, 2nd Series 42: 230–265.

Tyldum, Morten. 2014. The imitation game. Motion picture.

Wang, Hao. 1990. Reflections on Kurt Gödel. Cambridge: MIT


Press.

Zermelo, Ernst. 1904. Beweis, daß jede Menge wohlgeordnet


werden kann. Mathematische Annalen 59: 514–516. English
translation in (Ebbinghaus et al., 2010, pp. 115-119).

Zermelo, Ernst. 1908. Untersuchungen über die Grundlagen der


Mengenlehre I. Mathematische Annalen 65(2): 261–281. English
translation in (Ebbinghaus et al., 2010, pp. 189-229).
About the Open Logic Project
The Open Logic Text is an open-source, collaborative textbook of
formal meta-logic and formal methods, starting at an intermedi-
ate level (i.e., after an introductory formal logic course). Though
aimed at a non-mathematical audience (in particular, students of
philosophy and computer science), it is rigorous.
The Open Logic Text is a collaborative project and is under
active development. Coverage of some topics currently included
may not yet be complete, and many sections still require substan-
tial revision. We plan to expand the text to cover more topics in
the future. We also plan to add features to the text, such as a
glossary, a list of further reading, historical notes, pictures, bet-
ter explanations, sections explaining the relevance of results to
philosophy, computer science, and mathematics, and more prob-
lems and examples. If you find an error, or have a suggestion,
please let the project team know.
The project operates in the spirit of open source. Not only
is the text freely available, we provide the LaTeX source under


the Creative Commons Attribution license, which gives anyone


the right to download, use, modify, re-arrange, convert, and re-
distribute our work, as long as they give appropriate credit.
Please see the Open Logic Project website at openlogicproject.org for additional information.

