Basic Symbolic Logic
Text for
Logic 2003A: Symbolic Logic I
Kent A. Peacock
Department of Philosophy,
University of Lethbridge.
Copyright © 2005, 2006, 2007 Kent A. Peacock
Preface
The purpose of this text is to provide a correct but flexible command of symbolic logic up to
but not including the point at which one can begin to study metalogic (the “logic of logic”).
As such, it covers most of the material usually covered in a typical second-year university
course on symbolic logic, including such time-honoured topics as truth tables, truth trees,
and natural deduction for propositional and first-order predicate logic (without and with
identity). A working command of these techniques is simply assumed in the literature
on analytic philosophy, mathematical logic, foundations of mathematics, computer science,
and much of physical science.
A concluding chapter introduces the reader briefly and qualitatively to some of the riches
that lie beyond, including modal and deviant logics, Gödel, Turing, artificial intelligence,
and quantum computing. Not everyone will wish to explore such advanced topics in detail,
but every student of philosophy and logic should be aware that such things exist, and have
at least a passing acquaintance with them.
The aim of this text is not to present a rigorous development of propositional and
predicate logic from the simplest and most general first principles available, but to offer a
workaday (but of course valid!) system of logic that will be as useful as possible, both for
beginning logicians who may later wish to go on to a more foundational treatment, and also
those many others who are interested in logic more as a tool than for its own sake.
One might naturally ask why the world needs to be blessed with yet another logic text,
when there are so many excellent texts on the market now. Like almost everyone who
has taught logic, I could not avoid reconstructing it in my own way. I ended up with an
approach that I believe is a useful complement to other methods of teaching this sort of
logic. And there is room for new approaches, for the fact remains that elementary logic,
unlike certain parts of elementary mathematics, is still a developing subject. I modestly
believe that the approach I present here has an advantage over many other texts at a similar
level because of its emphasis on flexibility and ease of use.
The topics in this text were chosen in particular to meet the needs of the
second-year symbolic logic course at the University of Lethbridge. We have a first-year
course that is mostly informal logic and critical thinking (with a smattering of "baby"
deductive logic); a second-year course, Symbolic Logic I, that introduces the formal
methods of classical two-valued Boolean logic (primarily propositional and predicate
calculus) up to the point at which one can begin to do metatheory; and then a third-year
course, Symbolic Logic II, in which those who are so inclined can finally dip into metatheory.
The tabular notation for natural deduction I use here is adapted, with some
simplifications, from that used in the superb but now largely outdated texts by Mates and
Lemmon [18, 17], supplemented with the highly efficient truth tree techniques pioneered by Jeffrey
[14]. I build my natural deduction system on basic rules similar to those of Lemmon, but I
have streamlined the presentation of natural deduction techniques in such a way as to make
them as easy to use as possible. I also use Lemmon’s efficient way of setting up truth tables,
with a few shortcuts of my own thrown in. In the end, I have written the logic text that I
think would have benefited me the most as a student, and I hope that it will be useful to
others as well. Although the book is aimed in the first instance at college and university
students, I hope that it will also be useful for professionals in disciplines such as physics or
computer science, or indeed anyone at all who may wish to refresh their knowledge of the
techniques of logic.
The advantage of the tabular notation for natural deduction over the popular and
graphically explicit Fitch system (as used in, e.g., [2, 13]) is that it is more compact, more
flexible, and that it allows one to introduce a theorem at any point in the deduction.
(This will all be explained below.) Logic does not yet have one notation that has become
universally accepted (unlike most branches of mathematics such as calculus), but once you
have learned one notation, it is not very hard to learn another. Some students may in the
end prefer the Fitch system over the slightly more abstract tabular system because Fitch
allows us to see at a glance when we are in a subproof. On the other hand, the concision
and flexibility of the tabular system certainly help to make symbolic logic useful — which
places it squarely in the spirit of this text.
Although I tremendously admire Lemmon’s book, I base my teaching of logic on a dif-
ferent pedagogical philosophy. Lemmon, and indeed numerous other authors of logic texts,
believe that the student should learn how to derive every result from first principles, using
as few rules as possible. This “logical boot camp” approach certainly develops technical
adroitness in those students who are willing to tough it out. But it has the disadvantage that
one often requires derivations of great length and complexity in order to prove results that
should be very simple and obvious. My view is that the subject is difficult enough as it is.
I think that (especially for beginning students) the most useful approach is to do just what
one does in mathematics, which is to use, as creatively and as early as possible, previously
derived results to get new results. The ultimate purpose is to streamline derivations and
get useful or interesting formulas as quickly and painlessly as possible, just as one would
in (say) a trigonometry text. Trig would be nearly useless if one had to prove an identity
such as cos2 θ + sin2 θ = 1 every time one needed to use it. I therefore encourage students
to freely use earlier theorems and sequents, and I provide a formalism that allows one to
do this in an efficient and clear manner. (Lemmon allows the introduction of sequents and
theorems, too, but he does not emphasize it nearly as much as I think it would be useful to
do so.) Again, the aim is to develop a system of logic that could actually be useful.
Another respect in which this book differs from some other treatments of symbolic
logic is in its views about the purpose of learning natural deduction. Once the student has
mastered semantic methods of verifying a given deduction, such as truth trees, she may
well wonder what the point is of struggling through the process of constructing a syntactic
derivation of a result that can be quickly checked with a tree or table.
The experienced logician will know that one reason for learning syntactic methods is
that not all logics are complete, so that once we get beyond first-order predicate logic,
syntactic methods are not merely a redundant gloss on semantic methods.
Another reason for learning natural deduction is that it is a formalization of the way
we actually tend to think about things: presented with a collection of premisses, we do
indeed go through something like natural deduction (although usually sloppily, and with a
free admixture of inductive steps) to arrive at a conclusion. Like Molière’s M. Jourdain,
who was astonished to learn that he had been speaking prose his whole life, we may be
surprised to learn that we use natural deduction whether we know it or not; the methods
of logic simply make it more explicit and precise.
There is yet another reason that natural deduction is worth learning, however, a reason
that I do not think has received sufficient attention. What often happens in real life is that
we are presented with a collection of givens (assumptions or known facts) and we then try to
deduce what may follow from those givens. We often do not have a pre-cooked conclusion
any more than we know the sum total of a list of expenses before we add it up. There
are a number of exercises included here which are meant to develop the skill of using the
rules of natural deduction to work from a given collection of premisses to a conclusion that
may have certain desired or known properties but which has not yet been found. Once we
have a complete chain of reasoning from premisses to conclusion we can, if we wish, use
semantic methods to check the argument structure we have constructed. The relation of
natural deduction to truth trees and tables is something like the relation of long division
to multiplication. In long division we have to guess our trial divisors, but once we have
an answer we can check it in a purely mechanical way using multiplication. Similarly, in
natural deduction we have to guess which rules will help us uncover a conclusion with the
desired properties, but once we have a conclusion we can check the deduction mechanically
with semantic methods. In some of the exercises in this book the student is presented with
a collection of premisses without a conclusion, and invited to explore the consequences of
these premisses using the rules of natural deduction.
A number of rules and other results are used in this course that cannot be proven by
means of the tools introduced in the course. (An example is the Deduction Theorem.) Rest
assured that all of the rules I use are valid and can be rigorously established in a more
advanced approach.
Logic resembles gymnastics or language study in that it can be learned only by practice.
To this end, I have included a generous selection of exercises. I follow the practice of many
old-fashioned math texts and divide these exercises into A, B, and C levels, together with
occasional Questions for Discussion. The A exercises are “finger exercises” that test your
understanding of the definitions and techniques introduced in the chapter. If you cannot
do the A exercises then you have missed something basic in the chapter. The B exercises
are a bit more complex than the As and usually combine several concepts and techniques,
but they should be doable by anyone who has worked through the A exercises diligently.
The B exercises are therefore tests of basic competency in the material of the chapter or
section. The C exercises are advanced and may require quite a bit of thought, or even
additional research beyond the material in this text. The solutions to most of the exercises
are presented at the end of this book, except for some of the more open-ended problems,
not all of which have a solution in closed form.
The author of any text on logic must at some point decide how to spell the word
“premiss” (plural “premisses”). I have decided to follow Lemmon’s convention and use the
version with double s’s because it avoids confusion with the legalistic sense of the term (as
in, for instance, “keep off the premises!”) and because it is probably more widely used in
logic.
Although the book is elementary, its pace is brisk. I assume that my readers are like
me in that they do not enjoy being patronized; this is not Logic for the Complete Dummy.
I would be very grateful for any suggestions as to how this document can be improved.
Acknowledgements
I am grateful to the University of Lethbridge and the Social Sciences and Humanities Re-
search Council of Canada for essential material and financial support. Many thanks for dis-
cussions, advice, suggestions, shorter proofs, or error corrections to Bryson Brown, Sheldon
Chow, Dawn Collins, Mark Ebner, Herb Korté, Laura Smith, Mark Smith, Sam Woodruff,
and John Woods. A special thanks to Ardis Anderson for her help in tracking down M.
Jourdain. Any errors that remain (and a quick induction suggests that there must be some!)
are entirely my own responsibility.
Contents

Preface
5.3.2 Inconsistency
5.3.3 One-to-One Relation Between Inconsistencies and Tautologies
5.3.4 Contingency
5.3.5 Consistency
5.4 Relation of Syntax to Semantics
5.5 Semantic Classification of Pairs of Wffs
5.5.1 Implication
5.5.2 Equivalence
5.5.3 Contrariety
5.5.4 Subcontrariety
5.5.5 Inconsistency
5.5.6 Consistency
5.5.7 Independence
5.6 Relations Between Semantic Properties of Wffs
5.7 Semantic Classification of Sets of Wffs
5.7.1 Equivalence
5.7.2 Inconsistency
5.7.3 Consistency
5.7.4 Independence
5.8 Testing Validity of Sequents Using Truth Tables
5.8.1 Deduction Theorem from the Semantic Viewpoint
5.8.2 Short-cuts
5.9 Using Short-Cuts in Testing for Other Semantic Properties
5.10 Evaluating Long Conjunctions or Disjunctions
5.11 Sheffer and Nicod Strokes
5.12 Review Exercises on Chapter 5
As the very word “logic” suggests (its root is the Greek term “logos,” meaning word ),
logic has a lot to do with language. But it also points to something beyond language,
for logic instantiates deep laws of form that appear in many guises in many subjects and
disciplines [5]. One can get into interesting philosophical debates about the nature of these
mathematical forms. (The Platonists, who believe in the independent reality of mathe-
matical objects, have been debating for millennia with the nominalists, who insist that
the “truths” of logic and mathematics are nothing more than linguistic conventions.) We
could also ask which comes first, logic or mathematics? The school of logicism, headed by
Bertrand Russell and Gottlob Frege (who themselves contributed in important ways to the
discipline that we now call modern symbolic logic) argued that all of mathematics is based
on laws of pure logic. But it is now generally felt that this ambitious project did not work,
and that it is more likely the other way around; i.e., that logic is a sort of applied math-
ematics. Unfortunately, we cannot get into such fascinating disputes in an introductory
work like this. For more on the philosophy and foundations of mathematics and logic, see
[1, 5].
We should begin by noting the important distinction between inductive and deductive
logic. In deductive logic we extract information already contained, or implicit, in a set
of premisses (statements that are accepted as given in the context of an argument or
investigation). We might accept the premisses for several reasons: they may be known facts
whose implications we are trying to uncover, for instance, or they might be hypotheses or
possibilities whose consequences we wish to explore. But in deductive reasoning we never
go beyond the information contained in the premisses; rather, we rearrange that implicit
information into forms that may be more interesting or useful to us.
CHAPTER 1. INTRODUCTION TO LOGIC: GETTING YOUR FEET WET
Warning! Please spell the word “argument” correctly! One mark off any
assignment or test any time you write “arguement”!
Both premisses and the conclusion happen to be true statements. Substitute the word
“France” for “Alberta” and we would have an argument with false premisses; but the
conclusion would still seem to follow from the premisses. Therefore, there are arguments
that intuitively seem to be valid in the sense that the conclusions somehow follow from the
premisses, but which still have something missing.
To clarify what is missing, we make a crucial distinction between validity and soundness:
Validity: a deductive argument (or argument form) is valid if and only if (or iff) it is
impossible for its conclusion to be false when its premisses are true.
Soundness: A deductive argument is sound iff it is valid and has true premisses.
This is the modern usage of the terms “valid” and “sound” as they are applied to arguments
or argument forms. Some older books, such as Lemmon’s Beginning Logic [17], use the word
“sound” where we will use “valid;” that terminology is now out of date.
The following rough-and-ready concept of validity is also useful: a conclusion follows
validly from a set of premisses if it can be obtained from them by a series of accepted
rules of deduction.
Eventually, we have to ask whether or not the rules of deduction are themselves valid in
the deeper sense given above. In fact, they are, so that these two definitions of validity
turn out to be the same thing in the long run. Later in the course we will learn other
ways of expressing the notion of validity, all of which are equivalent to the first definition
given above. In a sense, this course is simply about all the ways of defining and testing for
deductive validity, and for validly deriving conclusions from premisses.
The practical value of valid deductive reasoning stems from the fact that if a valid
argument has true premisses, the conclusion is guaranteed to be true as well. Validity
transmits truth from premisses to conclusions.
Surprisingly, there are some arguments that can be valid, for formal reasons, but which
can never be sound. Later on we will see some examples of rather suspicious-looking argu-
ments that are, in fact, valid.
Common parlance often confuses the terms “valid” and “true.” Be careful to avoid
this ambiguity when you are doing formal logic. Arguments can be valid or invalid, and
propositions can be true or untrue, but there are no such things as “valid” statements or
“true” arguments. As we shall see, however, valid arguments can be transformed by the
techniques of natural deduction into certain kinds of true statements called logical truths.
As logicians, we do not worry about where the premisses came from or whether or not
they are true; we are only concerned about the validity of arguments, not their soundness.
In this respect, logicians might be compared to automotive mechanics, who ensure that your
car runs properly but do not ask where you drive it. As Bertrand Russell once quipped,
“Pure mathematics [which Russell saw as an extension of logic] can be defined as that
subject in which we do not know what we are talking about, nor whether what we are
saying is true.” [20].
Now consider the following argument:
Compare this to the argument given above about being in Alberta. What is common
to these two arguments is their form, which we could represent in a more or less obvious
way, without even defining the symbols, as follows:
P →Q
Q→R
∴ P → R.
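The validity of this form can be checked by brute force: enumerate every assignment of truth values to P, Q, and R, and confirm that no row makes both premisses true while the conclusion is false. Here is a minimal sketch in Python (the helper names `implies` and `is_valid` are mine, not the text's):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_valid(premisses, conclusion, n_vars):
    # Valid iff no assignment of truth values makes every premiss
    # true while the conclusion is false.
    for values in product([True, False], repeat=n_vars):
        if all(f(*values) for f in premisses) and not conclusion(*values):
            return False  # counterexample found
    return True

# The form above: P -> Q, Q -> R, therefore P -> R.
premisses = [lambda p, q, r: implies(p, q),
             lambda p, q, r: implies(q, r)]
conclusion = lambda p, q, r: implies(p, r)
print(is_valid(premisses, conclusion, 3))  # -> True
```

The same function reports invalid forms as invalid: the fallacy of affirming the consequent (P → Q, Q ∴ P) fails the check because the assignment P false, Q true is a counterexample.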
Each level has extra machinery that allows us to represent a kind of validity that cannot
be represented by the lower level.
We begin, in the next chapter, with propositional logic, which is an absolutely basic
discipline that appears in a hundred useful contexts, and serves as the foundation for all
higher-level work in logic.
I once had the temerity to take a course called Introduction to Theoretical Physics,
given at the University of Toronto by a great teacher and scholar named Professor Lynn
Trainor. The very first equation he presented to us will be the first formula of this text as
well:
Understanding = Familiarity.
What Professor Trainor meant was that in learning a complex technical subject such as
logic or physics, one must accept the fact that fully confident understanding may not come
immediately; rather, it follows from becoming so familiar with the operations of the subject
that they become second nature. In fact, learning logic is very much like learning a language;
in the beginning, you have to work very hard simply to familiarize yourself with the basic
definitions and rules, some of which at first may seem arbitrary and strange. After a while
the structure of the language becomes so obvious that you cannot quite figure out why you
once could not understand it. Yes, in order to learn logic you have to memorize things,
and then diligently practice their use. Learning a subject involves putting the details of the
subject into your head, and becoming adept in their use by dint of long practice.
There are, however, also some important differences between learning logic and learning
a language. Although there are things that must be memorized in learning logic, there are
far fewer of them than in learning a language, where one must memorize thousands of words
as a basis for full fluency. Furthermore, as one becomes more experienced and familiar with
the use of logic, the inter-relations between its concepts become more and more obvious,
and mere memorization becomes less important, until it finally dawns on you that all of
logic is in a deep sense just many different ways of saying the same thing.
CHAPTER 2. PROPOSITIONAL LOGIC: SYMBOLIZATION AND TRANSLATION
define other truth-functional connectives, and we will briefly discuss this in Chapter 5.
Combinations of propositional letters and connectives will be called formulas. Those
strings of symbols that can represent propositions are called well-formed formulas or wffs.
It is possible to give a precise recursive definition of the set of all possible wffs, but we will
not do that in this course. (See, for instance, Lemmon’s Beginning Logic [17].) Instead,
we will simply rely on our knowledge of English to tell us which formulas can stand for
complete sentences and which cannot.
Propositions can be atomic or molecular. Atomic propositions are those that cannot
be broken down into truth functional combinations of other propositions, while molecular
propositions are those that can. An example of an atomic proposition is “Today is Tuesday,”
while an example of a molecular proposition is “Today is Tuesday and I have to go to a
meeting.”
2.2.1 Negation
To negate a proposition is to deny that it is true or that it is the case. Thus, −P can be
read as “not P ,” or “it is not the case that P ,” “P is not true,” or “P is false.”
Negation is a unary connective, in that it only applies to the proposition written im-
mediately to its right.
Example: Bob does not go to the store. ⇔ −B.
The negated proposition could be a combination of other propositions, however:
Example: It is not the case that both Bob and Alice go to the store. ⇔
−(A & B)
2.2.2 Conjunction
To conjoin two propositions is to assert or to hold them jointly. We will write conjunction
in the form P & Q. The two conjoined propositions P and Q are called the conjuncts.
Although conjunction is usually taken to be binary, meaning that it applies to only two
propositions, and while we will usually consider cases where only two propositions are
conjoined, there is no reason in principle why any number of conjuncts cannot be linked in
the same conjunction (although we will not prove that fact formally in this course). The
most general conjunction can be written in the form P1 & . . . & Pn .
Here are some common expressions that would all be translated into symbols as P & Q:
P and Q.
Both P and Q.
P but Q.
P although Q.
P despite Q.
P even though Q.
P whereas Q.
P in spite of Q.
In addition to Q, P.
etc.!
Note that the subtle rhetorical distinctions between “but,” “and,” “whereas,” etc., are all
lost in moving to propositional notation. If these distinctions make a difference to the logic
of an argument, then they must somehow be expressed clearly enough that they can be
translated into propositional notation.
Example 1 : Cheryl goes to work and Tad stays home ⇔ C & T .
Example 2 : Alice goes to the store while Tad stays home ⇔ A & T .
Example 3 : Bob and Alice both go to the store ⇔ B & A.
Example 4 : Bob and Alice go to the store while Cheryl goes to work ⇔ B & A & C.
Note that order does not matter in any of these examples, or, indeed, in any correct
usage of &. Thus, Example 2 could be translated just as well as T & A.
2.2.3 Disjunction
To disjoin two propositions is to say that one or the other, or possibly both, is the
case. We write disjunction in the form P ∨ Q. The two disjoined propositions P and Q are
called the disjuncts. Like conjunction, disjunction is usually taken to be binary, but also
like conjunction it is perfectly admissible to write disjunctions of any number of disjuncts.
The most general disjunction can be written in the form P1 ∨ . . . ∨ Pn .
Here are common expressions that would be translated as P ∨ Q:
P or Q.
Either P or Q.
P or Q or both.
In propositional logic we will almost always take "or" in the inclusive sense, meaning
that if we accept "P or Q" as true, we are willing to grant that P or Q may be true on
its own, or that P and Q may be true together. This is how "or" is usually used in
English and many other natural languages. Here’s a typical example: “Either negligence or
incompetence accounted for the sinking of the Titanic.” Clearly, it might have been both.
Occasionally, it will be useful to express the concept of exclusive disjunction, also known
as exclusive or , or in circuit theory as XOR. Later on we will see how to construct exclusive
“or” in terms of the usual connectives. Here’s an example in which the fact that a disjunction
is exclusive might make a logical difference: “Every natural number is either even or odd.”
Again, we can usually take “or” to be inclusive unless clearly stated otherwise.
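How exclusive disjunction can be built from the usual connectives will come later, but the construction can be previewed here: "P xor Q" amounts to (P ∨ Q) & −(P & Q). A sketch (the `xor` helper is mine):

```python
def xor(p, q):
    # Exclusive "or" built from the usual connectives: (P v Q) & -(P & Q).
    return (p or q) and not (p and q)

# Agrees with "exactly one of the two is true" on all four input rows:
for p in (True, False):
    for q in (True, False):
        assert xor(p, q) == (p != q)
```

So "every natural number is either even or odd" comes out true under this reading precisely because no number is both.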
In Latin there are two distinct words for the two senses of "or": vel for the inclusive
sense and aut for the exclusive sense, so there is less chance of confusing the two
concepts. The usual symbol for inclusive "or", ∨, was probably adapted from the Latin vel.
Here are some common expressions that would all be translated into symbols as P → Q:
If P then Q.
P only if Q.
Q if P .
If P , Q.
P is a sufficient condition for Q.
Q is a necessary condition for P .
P implies Q.
Q is implied by P .
P entails Q.
Q is entailed by P .
Q while P .
While P , Q.
Given P , Q.
Again, there are many possible ways of expressing this notion in natural languages.
Example: Bob goes to the store if and only if Tad stays home ⇔ B ↔ T .
Conjunction, disjunction, and the biconditional are commutative and associative, while
material implication is not.
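These algebraic facts can be spot-checked mechanically by running through all four truth-value rows; a sketch (the `implies` helper is mine, encoding the material conditional):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

rows = list(product([True, False], repeat=2))

# Conjunction, disjunction, and the biconditional are commutative:
for p, q in rows:
    assert (p and q) == (q and p)
    assert (p or q) == (q or p)
    assert (p == q) == (q == p)

# ...while material implication is not: P = T, Q = F is a witness.
assert implies(True, False) != implies(False, True)
```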
2.2.7 “Unless”
“Unless,” like “or,” is a word that is used in more than one way in the English language, and
some confusion has grown up over the right way to translate it into propositional symbols.
Like “or,” “unless” can be used in an inclusive or exclusive sense.
Here is an example of the inclusive sense of “unless:”
Johnny will fail the exam unless he studies.
Sadly, we know that Johnny studying for the exam does not exclude the possibility of
Johnny failing the exam.
Here is an example of the exclusive use of “unless:”
We will have a picnic unless it rains.
The very thing we wish to deny is that we are going to have a picnic in the rain, while at
the same time we affirm that we will have a picnic if it does not rain. This is most easily
translated as exclusive “or”.
In the chapter on propositional semantics we will examine the truth table behavior of
the various senses of “or” and “unless”.
It would be very rare that the logic of an argument would depend on whether or not
a usage of “unless” was exclusive. Therefore, in the large majority of cases, you can follow
the advice of most logic books, which is to translate “P unless Q” as P ∨ Q. But check
the context carefully.
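That the standard translation works can itself be checked row by row: "P unless Q" read as "if not Q then P" has exactly the same truth table as P ∨ Q. A sketch under the material reading of the conditional (the `implies` helper is mine):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# "P unless Q" read as "if not Q then P" agrees with P v Q on every row.
for p, q in product([True, False], repeat=2):
    assert implies(not q, p) == (p or q)
```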
2.2.8 Exercises
A
1. Translate the following into English:
(a) B → −A.
(b) B & C.
(c) −B ∨ C.
(d) −C ↔ T .
(e) −A → B.
(f) −A & − T .
(g) −A ∨ −T .
(h) B → T .
(i) T → B.
(j) −T → −B.
P
T
F
This merely expresses the assumption that any proposition is taken to be either definitely
T or definitely F. The table is constructed simply by listing all the possible truth values of
the proposition in a column below it.
Tables for propositions containing two variables have four lines, in order to represent
all possible combinations of truth values of the atomic propositions. We can call these the
“input” values. The truth value of the whole expression is written in a column underneath
the connective; call these values the “outputs”.
Here are the truth-table definitions of the five most commonly used propositional
connectives:

P | −P
T |  F
F |  T

P Q | P & Q | P ∨ Q | P → Q | P ↔ Q
T T |   T   |   T   |   T   |   T
T F |   F   |   T   |   F   |   F
F T |   F   |   T   |   T   |   F
F F |   F   |   F   |   F   |   T
It is absolutely necessary for beginners in logic to memorize these basic tables, because
they are the basis for a large part of what we do in this text. We will study truth tables in
much more detail in Chapter 5, where we will see that they offer a powerful method for
the analysis of the validity of arguments.
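The five defining tables can also be generated mechanically, which is a useful way to check your memorization. A short sketch (the helper names are mine):

```python
from itertools import product

def neg(p): return not p
def conj(p, q): return p and q
def disj(p, q): return p or q
def cond(p, q): return (not p) or q   # material conditional
def bicond(p, q): return p == q

def t(v):
    # Render a truth value in the text's T/F notation.
    return "T" if v else "F"

print("P Q | P&Q  PvQ  P->Q  P<->Q")
for p, q in product([True, False], repeat=2):
    print(t(p), t(q), "|", t(conj(p, q)), "  ", t(disj(p, q)), "  ",
          t(cond(p, q)), "  ", t(bicond(p, q)))
```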
2.3.1 Exercises
A
1. If it is true that Bob goes to the store (P) and false that Cheryl goes to work (Q),
then what are the truth-values of the following propositions?
(a) P ∨ Q
(b) P → Q
(c) −P ∨ Q
(d) P ↔ Q
(e) Q → P
Notice, in the lines of the table where P → Q is T, that P is T only in the lines in which Q is true also. Hence, also,
the reading “Q is a necessary condition for P ”: given P → Q, P cannot be T unless Q is
T.
The concepts of necessity and sufficiency are easy to mix up. Concrete examples may
help to make them clear. For a person to be adequately nourished, it is sufficient that they
eat East Indian food. However, there are many other kinds of nourishing food, and it is
not necessary to eat East Indian in order to live. (Lovers of curry might disagree with me
about this.) On the other hand, it is necessary to ingest a certain amount of protein in
order to be adequately nourished.
Here is a nice example (thanks to a student!) which illustrates the proper reading of
P → Q. Consider the following statements:
(1) You will win the lottery only if you buy a ticket.
(2) You will win the lottery if you buy a ticket.
These are two very different statements. The first is the generally true claim that you have
to buy a ticket in order to have a chance to win; but this, of course, does not guarantee
that you will win the lottery. The second says that you will win if you buy a ticket, which
is certainly false for most lotteries!
Statement (1) above is equivalent to the following: if you win the lottery, then you bought a ticket.
Again, the truth conditions for these two statements are clearly quite different.
If you change your oil regularly then your engine will last longer.
We recognize this as true because we know about the causal relevance of regular oil changes
to engine longevity. Here’s another example:
Here the relevance of antecedent to consequent is not causal, but legal. Still, anyone familiar
with the way Parliamentary governments work will recognize the statement as true. But
now consider the following:
Both antecedent and consequent are true, and so by the truth table for “if-then” the whole
statement is true. And yet, there seems to be little if any relevance of antecedent to
consequent. This illustrates a fact that is often called the Paradox of Material Implication:
The truth value of a material conditional has absolutely nothing to do with the
meaning of the antecedent and consequent, beyond their truth values.
This is a “paradox” only because it is an affront to our linguistic common sense. In proposi-
tional logic the only “meaning” the propositional letters P, Q, . . . have is their truth values;
propositional logic is purely an algebra of truth values, and any formula is true or false
depending only on the truth values of the input variables and the logical relations between
them.
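Since propositional logic is purely an algebra of truth values, the material conditional can be computed mechanically. Here is a small illustration in Python (an aside for readers who program; Python and the function name `implies` are not part of this text's formal apparatus):

```python
# Material conditional: P -> Q is false only when P is true and Q is false.
def implies(p, q):
    return (not p) or q

# The full truth table for P -> Q:
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```

Only the row with P true and Q false yields False, exactly as the truth table for → requires; nothing about the "meaning" of P and Q enters into the computation.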
The challenge to common sense becomes even more acute when we consider cases in
which the antecedent is false, for the table says that such conditionals are true regardless
of the truth value of the consequent. Thus, both of the following are true statements:
These examples are an illustration of the following principle of classical two-valued logic: Ex falso quodlibet.
A free translation of the Latin phrase is, “from a falsehood, whatever. . . ” (This proves that
there were teenagers in ancient Rome.) This principle is sometimes abbreviated Ex Falso,
and we shall encounter it again in the next chapter.
There is an advanced branch of logic called relevance (or relevant) logic in which logi-
cians attempt to construct notions of implication which can express relevance of antecedent
to consequent in a way that is closer to ordinary language. However, long experience has
shown that the simple, stripped-down notion of material consequence presented here is on
the whole the most useful notion of implication we have, since it can be adapted in so many
ways.
In fact, conjunctions and disjunctions are not restricted to two variables. The highly flexible notation used by Spencer
Brown in his calculus of indications [5] gets around this problem elegantly. It is also possible,
although we will not rely on it here, to represent conjunctions and disjunctions respectively
by writing something like this:
⋀(P1 , P2 , . . . , Pn ) = ⋀i Pi   and   ⋁(P1 , P2 , . . . , Pn ) = ⋁i Pi ,
where the connectives in front operate on the whole list of propositions given in brackets
and thus, in effect, are the main connectives for the whole conjunction or disjunction.
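The same idea is built into many programming languages, which supply n-ary conjunction and disjunction directly. A Python illustration (again merely an aside, not part of the text's apparatus):

```python
# all() and any() behave like the big-wedge and big-vee above:
# all(props) is the conjunction of every P_i, any(props) their disjunction.
props = [True, True, False]

conj = all(props)   # True only if every P_i is True
disj = any(props)   # True if at least one P_i is True

print(conj, disj)  # False True
```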
In Chapter 5 we will introduce the highly flexible Reverse Polish Notation (RPN), and
show how it can be used to streamline the evaluation of complex formulas, especially those
containing conjunctions or disjunctions with more than two terms.
2.5.3 Bracketing
There are almost as many bracketing conventions as there are logic books. In this book we
will use a very simple convention whose sole purpose is to avoid ambiguity.
Brackets indicate the scope of a connective. Bracket off a sub-formula if there is any
possibility of confusion about which connective applies to which part of the formula. For
instance, in (P & Q) → R, the & connects P and Q, and the → connects the whole
subformula (P & Q) with R. Brackets can be nested, and the general rule is that one
evaluates inner brackets first, and then works outwards. Putting brackets around a whole
formula is optional.
Example: In P & (Q ∨ R), the subformula Q ∨ R is considered as a unit, and this
formula would be translated as “P and either Q or R.” By comparison, (P & Q) ∨ R would
be translated as “Either P and Q or R;” or, perhaps more clearly as “R or both P and Q.”
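That the two bracketings really differ can be checked by direct evaluation: with P and Q false and R true, they come out differently. A Python sketch:

```python
# Scope matters: the same letters, bracketed differently.
P, Q, R = False, False, True

a = P and (Q or R)   # P & (Q v R)
b = (P and Q) or R   # (P & Q) v R

print(a, b)  # False True -- the two formulas disagree on this assignment
```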
Be careful! In English and many other natural languages, the scope of logical con-
nectives is ambiguous and must be understood from context and a knowledge of common
usage.
Example: “If Kerry wins then Bush loses and Cheney has to retire” would usually
be understood to be bracketed as follows: “Kerry wins → (Bush loses & Cheney has to
retire).” Strictly speaking, it could also be read as “(Kerry wins → Bush loses) & Cheney
has to retire.” However, few experienced speakers of English would read it this way. In
Chapter Four we will learn how to show that these two versions of the sentence are not
logically equivalent. I discuss further examples like this in the section on Combinations
below.
Remember that we allow unlimited associativity for conjunction, disjunction, and the
biconditional. This means that we can write conjunctions of n formulas as (P1 & . . . & Pn ),
disjunctions of n formulas as (P1 ∨ . . . ∨ Pn ), and equivalences of n formulas as (P1 ↔ . . . ↔
Pn ), without having to bracket off pairs of conjuncts or disjuncts. (There is an exception
to this when we do truth tables, which will be explained in Chapter 5.)
2.6 Combinations
Truth-functional combinations of propositions can be created in literally an infinite variety
of ways. As indicated in the discussion of bracketing above, a sensitivity to context and common usage is needed in order to translate such combinations correctly.
2.6.1 Exercises
A
1. Using the propositional letters defined at the beginning of this chapter, translate the
following English sentences into formulas:
(a) If Bob goes to the store and Tad stays home then Alice goes to the store only if
Cheryl goes to work.
(b) Bob and Alice don’t go to the store if and only if Tad doesn’t stay home.
(c) If Tad stays home then Cheryl goes to work but Cheryl goes to work only if Tad
stays home.
(d) When Bob goes to the store, either Alice also goes to the store or Cheryl doesn’t
go to work.
(e) Either Bob or Alice goes to the store if Tad doesn’t stay home.
2. Translate the following formulas into clear, grammatically correct, idiomatic English:
2.7.1 Exercises
A
1. Using the propositional letters defined at the beginning of this chapter, translate the
following English sentences into formulas:
2. Translate the following formulas into clear, grammatically correct, idiomatic English:
(a) −(−A ∨ B)
(b) −(B → (−C ∨ T ))
(c) −(A ↔ B)
(d) A ↔ −B
(e) −(A ∨ B)
How should we translate these? The key is to see that the truth value of the whole statement
— for instance, whether or not it is true that Joe said that Santa came last night — is
entirely independent of the truth value of the statement itself that Joe made. Therefore, the
connection between Joe’s statement itself and the claim that he made a certain statement is
not truth-functional. If we are given any statements like the four examples above to translate
into propositional symbols, we have no choice but to treat them as atomic propositions, and
give them simply a single propositional letter such as P .
P ∨ Q is often written as P + Q.
P ↔ Q is often written as P ≡ Q.
Chapter 3
Introduction to Natural Deduction
We begin by setting forth rules and procedures for what logicians call natural deduction.
Some students may well find it to be decidedly unnatural at first (though familiarity will
bring ease of use). However, this system of deductive reasoning is, in fact, a very useful
distillation of the most important deductive principles that we use in ordinary reasoning.
In fact, they represent more than merely a sort of anthropology of reasoning: they are
normative, in the sense that they are an attempt to codify, clarify, and make more precise
our day-to-day reasoning. (A normative rule is one that says what we should do; for
instance, “Honour thy father and mother,” or “Change your oil every 3000 km.”) The sort
of logic we set forth here by no means exhausts all possible useful logics. The justification
for the approach we take, out of the many approaches that are mathematically possible, is
pragmatic, in that extensive experience shows that what we set forth here can be applied
to a great deal of the day-to-day reasoning that we actually carry out, and can also serve
as a very useful foundation for further developments.
Logicians call this way of reasoning natural deduction because it is meant to come as
close as one reasonably can to the way people tend to reason deductively in ordinary natural
languages (such as English). There are also numerous axiomatic systems of logic that are
focused on logical economy and generality. Such axiomatic systems play an important role
in research into the basis of mathematics and logic, and in discovering new systems of logic
that might be useful. However, a natural deductive approach designed more for ease of
use and closeness to intuitive methods of reasoning is much more helpful for beginners in
logic, especially those who might want to learn logic mainly because of its numerous useful
applications in other fields such as philosophy, mathematics, computing, and science.
We shall symbolize the logical forms of arguments by sequents, which are expressions
of the general form
P1 , P2 , . . . , Pn ` Q, (3.1)
where the propositions Pi are the premisses (the statements that are taken for granted in
the context of the problem), and Q is the conclusion (the proposition to be arrived at by
valid deductive steps from the premisses).
The symbol ` (called the turnstile) can be read as “therefore,” “proves,” or (more
fussily) as “syntactically entails.” If P ` Q we can also say that Q is derivable from P .
It amounts to the claim that the stated conclusion can be derived validly from the given
premisses by accepted rules of deduction.
Since the premisses of a sequent are taken to be held conjointly, a sequent of the form
3.1 above is just the same thing as saying
P1 & P2 & . . . & Pn ` Q. (3.2)
Note carefully that in general deductive entailment goes only one way; that is, given
that we can show P ` R, we cannot automatically conclude that R ` P . There is an
important class of formulas, however, for which the entailment goes both ways — each
formula entails the other. In such a case, we will write a sequent of the form
P a` Q. (3.3)
Another special case is a sequent with no premisses at all, which is written
` P, (3.4)
which can be read “Provable P ,” or “P is a theorem.” Theorems are propositions that can
be asserted without the qualification of any premisses at all. Such theorems are sometimes
referred to by the grand name of logical truths.
There is no reason we can’t concatenate sequents together to get something like this:
P, Q ` R ` S ` T, (3.5)
in which a sequence of conclusions are derived one after the other. This reflects common
linguistic practice, where it would not be uncommon to make an argument of the form “P
and Q, therefore R; therefore S, etc.” However, in this book we will be almost entirely
concerned with sequents with only one turnstile.
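Although derivability is a syntactic notion, it has a semantic counterpart that we will meet in Chapter 5: no assignment of truth values makes all the premisses true and the conclusion false. Anticipating that, here is a brute-force checker sketched in Python (the function `valid` and its interface are my own invention, not the text's):

```python
from itertools import product

def valid(premisses, conclusion, n):
    """Check a sequent semantically: premisses and conclusion are functions
    taking a tuple of n truth values; search every row for a counterexample."""
    for row in product([True, False], repeat=n):
        if all(p(row) for p in premisses) and not conclusion(row):
            return False  # counterexample: premisses true, conclusion false
    return True

# Modus ponens, P, P -> Q |- Q, with row = (P, Q):
mp = valid([lambda r: r[0], lambda r: (not r[0]) or r[1]],
           lambda r: r[1], 2)
print(mp)  # True
```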
The proof or derivation of a sequent will consist of a sequence of lines, beginning with
the statement of the premisses as assumptions, and ending (one hopes) with the desired
conclusion. Each line has to show four things: the dependencies (if any), the line number,
the proposition asserted on the line, and the justification for asserting that proposition.
The dependencies are the line numbers of the assumptions on which the line is based;
if the line itself is merely an assumption, the dependency is the number of the line itself. If
the line is a theorem there will be no dependencies. We shall say that a proposition depends
upon the assumptions listed to the left of it; or equivalently, that the assumptions to the
left of a proposition in a line govern that proposition.
The justification is a statement of the rule used to get the line, together with the
numbers of the previous lines (if any were needed) from which the line was derived by
means of the indicated rule.
The meaning of these terms will be clearer when we consider examples of proofs.
3.1. BASIC RULES OF DERIVATION
6. v-Introduction (vI)
Given P , we may derive P ∨ Q or Q ∨ P , where Q is any proposition whatever.
Dependency: The conclusion depends upon whatever assumptions the premiss de-
pended upon.
Form of justification: m vI, where m is the line of the proposition “or-ed” onto.
7. v-Elimination (vE)
From a disjunction P1 ∨ . . . ∨ Pn , we may conclude Q, so long as we can find a
derivation of Q from all n of the disjuncts separately.
Dependency: the conclusion Q depends upon any assumptions that the starting dis-
junct depended upon, together with any additional assumptions that had to be in-
troduced (but which could not be again discharged) in the derivations of Q from the
individual disjuncts.
General form of justification: k, l1 –m1 , . . . , ln –mn vE, where k is the line of the
starting disjunction, li is the first line of the i-th subproof, and mi is the last line of
the i-th subproof. See examples below.
MP is the pattern for all deductive inferences. It expresses the fundamental notion of
logical consequence, that one proposition may follow from another.
If you cannot think of any other way to get a letter into the proof, vI may be the best way
to do it. To some people vI seems a bit like cheating, since it allows us to introduce any
proposition whatsoever into a derivation whenever we need it. You should be able to
convince yourself that nothing illegitimate is going on, however, once you see that the proposition
added to or “or-ed” onto P is not being asserted categorically. An example might
help: if it is true that my name is Kent, it is also safe (although perhaps odd) to say
that either my name is Kent or the moon is made of green cheese.
RAA is one of the most powerful and useful rules. If you cannot think of any other way
of getting to the desired conclusion, introduce as an assumption the negation of the
desired conclusion and then try to get a contradiction. Sometimes even very well
educated people are surprisingly reluctant to use RAA, since they are uncomfortable
assuming for the sake of argument a proposition that they know or wish to believe is
false. This is unfortunate, since the ability to freely entertain hypotheses, no matter
how outrageous they may seem, is one of the most useful tools of thought. RAA would
not be valid in a logic that was not bivalent (i.e., a logic in which not all propositions
were either definitely true or definitely false).
It will become increasingly evident as the course progresses that these rules of deduction
are by no means the simplest rules upon which a valid system of propositional logic
could be based. These rules are chosen because they are fairly close to the way people
actually reason in ordinary natural languages; they are therefore designed mainly for
ease of use, not logical economy. It should also become evident that other rules could
just as easily be taken as basic. For instance, the reader might find it a good exercise
to show that we could have taken Disjunctive Syllogism (DS, defined below) as a basic
rule instead of MP, introduced → as a defined symbol (taking the rule of Implication,
also given below, as a definition instead of a derived result), and then derived MP
using DS.
This derivation illustrates the basic pattern for all natural deductions. The centre
column lists the propositions that lead one after the other to the conclusion; the numbers
in brackets are the line numbers. The left-hand column lists the dependencies, that is,
the assumptions that the proposition in the centre column depends upon. The right hand
column lists the rule that justifies the proposition written in that line, together with the
lines to which the rule was applied.
Those who have some familiarity with accountancy can think of the left column as
analogous to a tabulation of debit, and the centre column as credit. Making an assumption
is like borrowing money; the assumptions we make allow us (by means of the rules) to
assert other propositions, but we must keep track of the fact that we are relying on certain
assumptions just as we must keep track of money we owe. Debits and credits must balance
in the end. If we discharge all the assumptions and still have a proposition left over, that
proposition counts as a theorem, that is, a proposition that can be held unconditionally —
like a property with no mortgage on it!
2 P & Q, P → R ` R (This uses &E and MP.)
1 (1) P &Q A
2 (2) P →R A
1 (3) P 1 &E
1,2 (4) R 2,3 MP
This is one of the simplest examples I have been able to think of that demonstrates the
use of vE. In some treatments of propositional logic, this sequent is actually called “v-
Elimination.” Here we will call it inference pattern Convergence, since variations on it will
occur often enough that it will be useful to give it a name.
The whole idea of vE is that we use it to establish the claim that from either P or Q
we can get to the desired conclusion. It is like a claim that we can get to Calgary from
Lethbridge via either Highway 2 or the Vulcan Highway. To demonstrate the claim, you
have to test both routes — because the claim is that either route works. This is why we use a
subproof for each disjunct in the starting disjunction. Each subproof starts by temporarily
assuming the disjunct to be tested; when the subproof is done, that temporary assumption
is discharged.
Notice also that in lines 4 and 6 I annotate the assumptions that start the subproofs
with an indication of the rule that I am trying to use in the subproof. This is an entirely
optional step, and whether or not you do it does not affect the validity of the proof. However,
it can help you keep track of which assumption is part of which subproof; this is especially
helpful when you use nested subproofs. (See the proof of DS below.) I have borrowed this
device from Jacquette [13].
5 P `Q→P (This demonstrates CP and R.)
1 (1) P A
2 (2) Q A (CP)
1 (3) P 1R
1 (4) Q→P 2,3 CP
As in sequent 5, I have marked the assumption in line 2 with the rule that I intend to
use in the subproof.
6 P → Q, P ` Q (This demonstrates RAA.)
1 (1) P →Q A
2 (2) P A
3 (3) −Q A (RAA)
1,2 (4) Q 1,2 MP
1,2,3 (5) Q& − Q 3,4 &I
1,2 (6) Q 3–5 RAA
Of course, this sequent is merely the statement of MP. But it demonstrates in a very
clear way how to use RAA.
3.3.1 Exercises
A
(a) P, P → −Q ` −Q
(b) − − P, P → Q ` Q
(c) P & Q, P → R ` R
(d) P, P → R, P → S ` R & S
(e) − − P ` P ∨ R
1 (1) P →Q A
2 (2) −Q A
3 (3) P A (RAA)
1,3 (4) Q 1,3 MP
1,2,3 (5) Q& − Q 2,4 &I
1,2 (6) −P 3–5 RAA
1 (1) P ∨Q A
2 (2) −Q A
3 (3) P A (vE)
4 (4) Q A (vE)
5 (5) −P A (RAA)
2,4 (6) Q& − Q 2,4 &I
2,4 (7) P 5–6 RAA
1,2 (8) P 1,3–3,4–7 vE
Example in Words: Either Bob or Alice paid the phone bill; however, it was not Bob.
Hence, it was Alice who paid the phone bill.
The proof of DS will repay careful study, since it illustrates the use of vE and RAA. The
object is to derive P from P ∨ Q given the condition −Q. The first disjunct is simply the
desired conclusion; from the assumption of the second disjunct we can derive a contradiction
and thereby prove P — or, indeed, anything we want. The conclusion follows by vE. In
the justification for the conclusion at line (8), 1 is the line at which the premiss P ∨ Q
was introduced; 3 is the line at which the working assumption P was introduced and also
the line at which it is discharged (since P is the very conclusion we are aiming at); 4 is the
line at which the second working assumption was introduced and 7 the line at which it was
discharged. If we were working with a disjunction with more than two terms, there would
have to be two line numbers for each working assumption introduced and then discharged;
but we shall rarely use vE on disjunctions with more than two terms.
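Readers who program can confirm DS by brute force: on the only assignment of truth values that makes both premisses true, the conclusion is true as well. A Python sketch:

```python
from itertools import product

# Disjunctive Syllogism: P v Q, -Q |- P
rows = [(P, Q) for P, Q in product([True, False], repeat=2)
        if (P or Q) and not Q]      # rows where both premisses hold
print(rows)  # [(True, False)] -- P is True in every such row
```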
Note the use of Reflection in line 3, in which P is both assumption and conclusion of
a sub-proof. Reflection also allows us, for example, to establish the theorem ` P → P as
follows:
1 (1) P A
(2) P →P 1–1 CP
Example in Words: Paul Martin is Prime Minister of Canada only if Paul Martin is
Prime Minister of Canada. More generally, every proposition implies itself; this theorem
could be just as well called “self-implication”.
This example illustrates the fact that the utter triviality of many of the basic rules of
logic becomes apparent when they are put into words. It is fascinating how powerful these
apparently “trivial” inference rules can be, however.
Here is another simple sequent that can sometimes come in handy:
1 (1) −P A
2 (2) P &Q A (RAA)
2 (3) P 2 &E
1,2 (4) P & −P 1,3 &I
1 (5) −(P & Q) 2–4 RAA
The reader will notice that the peculiar way in which we used RAA in the proof of
DS would apparently, in principle, allow us to deduce any proposition whatsoever from
a contradiction. This is, indeed, the case, and this phenomenon, which we have already
encountered in the last chapter, is important enough to deserve a name of its own: Ex falso
quodlibet, or more briefly Ex Falso. This Latin phrase literally means “from a contradiction
(or falsehood), whatever. . . ”, or “from a contradiction anything follows.”
1 (1) P A
2 (2) −P A
3 (3) −Q A (RAA)
1,2 (4) P & −P 1,2 &I
1,2 (5) Q 3–4 RAA
12 P & − P ` Q
There are at least two ways to prove this. A “learner’s permit” method uses Reiteration:
1 (1) P & −P A
2 (2) −Q A (RAA)
1 (3) P & −P 1R
1 (4) Q 2–3 RAA
1 (1) P & −P A
2 (2) −Q A (RAA)
1 (3) Q 2–1 RAA
Note that line 1 is only “derived” from line 2 in the sense that it can be asserted when
2 is asserted simply because it has already been asserted. But that is enough to make RAA
work. Ex Falso will seem less odd when we come to truth table semantics in Chapter 5.
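That truth-table perspective can be previewed now: P & −P is false on every assignment, so there is no row on which the premiss is true and the conclusion false, and the entailment holds vacuously. A Python sketch:

```python
# Search for a counterexample to P & -P |- Q: premiss true, conclusion false.
counterexamples = [
    (P, Q)
    for P in (True, False)
    for Q in (True, False)
    if (P and not P) and not Q
]
print(counterexamples)  # [] -- the premiss is never true, so none exist
```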
1 (1) P →Q A
2 (2) P A
1,2 (3) Q 1,2 MP
1,2 (4) P &Q 2,3 &I
1 (5) P → (P & Q) 2–4 CP
Example in Words: Given that Bob works only if Alice stays home, then if Bob works
then Bob works and Alice stays home.
This proof is a typical usage of CP. Note how the assumption on line 2 is discharged
when we arrive at the conclusion.
1 (1) P →Q A
2 (2) Q→R A
3 (3) P A (CP)
1,3 (4) Q 1,3 MP
1,2,3 (5) R 2,4 MP
1,2 (6) P →R 3–5 CP
Example in Words: If Bob works only if Alice stays home, and if Alice stays home only
if she is off shift, then Bob works only if Alice is off shift.
The next derivation is a good illustration of the use of vE:
1 (1) P →Q A
2 (2) R→S A
3 (3) P ∨R A
4 (4) P A (vE)
1,4 (5) Q 1,4 MP
1,4 (6) Q∨S 5 vI
7 (7) R A (vE)
2,7 (8) S 2,7 MP
2,7 (9) Q∨S 8 vI
1,2,3 (10) Q∨S 3,4–6,7–9 vE
Example in Words: If Bob works only if Alice stays home, and if Tad works only if
Cheryl stays home, and if either Bob or Tad works, then either Alice or Cheryl stays home.
We shall often meet pairs of interderivable wffs, for which we write
P a` Q. (3.6)
It can be shown that for every interderivability of the form (3.6), there is a theorem of the
form
` P ↔ Q. (3.7)
We shall say that any wffs P and Q for which ` P ↔ Q holds are provably equivalent. In
Chapter 5 we will see that expressions are provably equivalent iff they have the same truth
tables.
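This suggests a mechanical test for equivalence: compare the two formulas on every assignment. A Python sketch (the function `equivalent` is my own name for it):

```python
from itertools import product

def equivalent(f, g, n):
    """True iff formulas f and g (functions of n truth values) agree on
    every one of the 2**n assignments."""
    return all(f(r) == g(r) for r in product([True, False], repeat=n))

# Example: -(P & -Q) has the same truth table as -P v Q,
# two common ways of expressing P -> Q.
same = equivalent(lambda r: not (r[0] and not r[1]),
                  lambda r: (not r[0]) or r[1], 2)
print(same)  # True
```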
In syntactic terms (that is, in terms of the use to which this fact can be put in derivations),
this implies that any formula of the form P may be replaced by a formula of the
form Q wherever it occurs, and vice versa. We will call this metarule
Equivalence Introduction, and this rule, along with some other introduction rules, will be
discussed in greater detail in the next chapter. For now, we merely note the fact that because
of Equivalence Introduction these interderivabilities are very powerful tools of deduction.
We will use Equivalence Introduction in some of the proofs further on in this chapter.
De Morgan’s Laws
The following rules, called de Morgan’s Laws, are very important and widely used in logic:
This is an adaptation of the ingenious proof given by Lemmon for his Sequent 36(a). It
shows that RAAs can be nested.
This can, of course, be proven without the use of the derived rule DS by simply, in effect,
filling in the proof of DS between lines (3) and (4).
The de Morgan Laws amount to rules for the distribution of the negation sign; they
show that when we distribute negation over a conjunction it becomes a disjunction of the
negated terms, and when we distribute negation over a disjunction it becomes a conjunction
of the negated terms.
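Both laws can be confirmed exhaustively, since there are only four assignments of truth values to P and Q. A Python sketch:

```python
from itertools import product

for P, Q in product([True, False], repeat=2):
    # -(P & Q) agrees with -P v -Q on every row
    assert (not (P and Q)) == ((not P) or (not Q))
    # -(P v Q) agrees with -P & -Q on every row
    assert (not (P or Q)) == ((not P) and (not Q))
print("de Morgan's Laws verified on all four assignments")
```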
Implication Rules
The following two rules about the implication relation → are frequently used. They can
both be cited the same way.
19 P → Q a` −P ∨ Q Implication (Imp)
(a) P → Q ` −P ∨ Q
1 (1) P →Q A
2 (2) −(−P ∨ Q) A (RAA)
2 (3) −−P & −Q 2 deM
2 (4) P & −Q 3 DN
2 (5) P 4 &E
1,2 (6) Q 1,5 MP
2 (7) −Q 4 &E
1,2 (8) Q & −Q 6,7 &I
1 (9) −P ∨ Q 2–8 RAA
The converse is left as a B exercise.
This example illustrates the fact that an interderivability rule such as deM allows us to
substitute an interderivable formula for a subformula in a larger formula.
Transposition is also very useful:
21 P → Q a` −Q → −P Transposition (Trans)
P → Q ` −Q → −P
1 (1) P →Q A
2 (2) −Q A (CP)
1,2 (3) −P 1,2 MT
1 (4) −Q → −P 2–3 CP
Idempotency
23 P ∨ P a` P Idempotency (Idem)
P ∨ P ` P
1 (1) P ∨P A
2 (2) P A (vE)
3 (3) P A (vE)
1 (4) P 1,2–2,3–3 vE
This is the shortest possible application of vE. The converse is left as an A exercise.
Commutative Laws
The commutative laws are really just a restatement of the Basic Rules of &I and vI, which
allow us to “and” and “or” two propositions together in any order. Their proofs illustrate
that something which is “obvious” still may require a little work in order to be correctly
demonstrated from the Basic Rules.
P & Q ` Q & P
1 (1) P &Q A
1 (2) P 1 &E
1 (3) Q 1 &E
1 (4) Q&P 2,3 &I
25 P ∨ Q a` Q ∨ P Commutativity (Comm)
P ∨ Q ` Q ∨ P
1 (1) P ∨ Q A
2 (2) P A (vE)
2 (3) Q ∨ P 2 vI
4 (4) Q A (vE)
4 (5) Q ∨ P 4 vI
1 (6) Q ∨ P 1,2–3,4–5 vE
Associative Laws
27 P ∨ (Q ∨ R) a` (P ∨ Q) ∨ R Associativity (Assoc)
This proof is a little tricky, since it requires nested vEs. It is left as a C exercise.
As noted previously, Associativity can be extended to conjunctions and disjunctions
of arbitrary numbers of terms, but the proof of this fact requires mathematical induction,
which is beyond (though not far beyond) the scope of this course.
Rules of Dominance
The Rules of Dominance may seem a little strange, but they have a very natural interpre-
tation from the semantic point of view. We will review them in Chapter 5. These rules
simplify the derivation of Equiv (below).
The proofs of the Rules of Dominance are simple but quite instructive. I give some
here, and leave the rest as exercises.
P ` P & (Q ∨ −Q)
1 (1) P A
2 (2) −(P & (Q ∨ −Q)) A (RAA)
2 (3) −P ∨ −(Q ∨ −Q) 2 deM
2 (4) −P ∨ (−Q & − −Q) 3 deM
2 (5) P → (−Q & − −Q) 4 Imp
1,2 (6) (−Q & − −Q) 1,5 MP
1 (7) P & (Q ∨ −Q) 2–6 RAA
The proof of the converse is left as an A exercise.
P ∨ (Q & − Q) ` P
1 (1) P ∨ (Q & − Q) A
2 (2) P A (vE)
3 (3) Q& − Q A (vE)
3 (4) P 3 ExF
1 (5) P 1,2–2,3–4 vE
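The semantic interpretation promised above is easy to state: a tautology such as Q ∨ −Q dominates a disjunction and disappears from a conjunction, and dually for a contradiction such as Q & −Q. All four Dominance rules can be checked in Python:

```python
for P in (True, False):
    for Q in (True, False):
        taut = Q or not Q       # always True
        contra = Q and not Q    # always False
        assert (P and taut) == P          # P & (Q v -Q)  -||-  P
        assert (P and contra) == contra   # P & (Q & -Q)  -||-  Q & -Q
        assert (P or taut) == taut        # P v (Q v -Q)  -||-  Q v -Q
        assert (P or contra) == P         # P v (Q & -Q)  -||-  P
print("Dominance rules verified")
```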
Distributive Laws
1 (1) P & (Q ∨ R) A
2 (2) −((P & Q) ∨ (P & R)) A (RAA)
2 (3) −(P & Q) & − (P & R) 2 deM
2 (4) (−P ∨ −Q) & (−P ∨ −R) 3 deM
2 (5) (P → −Q) & (P → −R) 4 Imp
2 (6) P → −Q 5 &E
2 (7) P → −R 5 &E
1 (8) P 1 &E
1 (9) Q∨R 1 &E
1,2 (10) −Q 6,8 MP
1,2 (11) −R 7,8 MP
1,2 (12) −Q & − R 10,11 &I
1,2 (13) −(Q ∨ R) 12 deM
1,2 (14) −(Q ∨ R) & (Q ∨ R) 9,13 &I
1 (15) (P & Q) ∨ (P & R) 2–14 RAA
(P ∨ Q) & (P ∨ R) ` P ∨ (Q & R)
1 (1) (P ∨ Q) & (P ∨ R) A
1 (2) (−P → Q) & (−P → R) 1 Imp
1 (3) −P → Q 2 &E
1 (4) −P → R 2 &E
5 (5) −P A (CP)
1,5 (6) Q 3,5 MP
1,5 (7) R 4,5 MP
1,5 (8) Q&R 6,7 &I
1 (9) −P → (Q & R) 5–8 CP
1 (10) P ∨ (Q & R) 9 Imp
Equivalence Rules
There are two Equivalence rules that are widely used.
Note how Dom allows us to “cancel” out contradictory expressions such as −P & P .
Negated Arrows
The following are easy, given the rules we have now developed.
−(P → Q) ` P & −Q
1 (1) −(P → Q) A
1 (2) −(−P ∨ Q) 1 Imp
1 (3) −−P & −Q 2 deM
1 (4) P & −Q 3 DN
We get the converse essentially by reversing these steps.
P → (Q ∨ R) ` (P → Q) ∨ (P → R)
1 (1) P → (Q ∨ R) A
1 (2) −P ∨ (Q ∨ R) 1 Imp
1 (3) (−P ∨ Q) ∨ R 2 Assoc
1 (4) ((−P ∨ Q) ∨ R) ∨ −P 3 vI
1 (5) (−P ∨ Q) ∨ (R ∨ −P ) 4 Assoc
1 (6) (−P ∨ Q) ∨ (−P ∨ R) 5 Comm
1 (7) (P → Q) ∨ (P → R) 6 Imp
(In the next chapter we will show that it is possible to combine steps in proofs like this.)
The converse is left as a B exercise.
Finally, here is a rule that looks almost like a distributive law for →, but isn’t quite.
(P & Q) → R ` P → (Q → R)
1 (1) (P & Q) → R A
2 (2) P A (CP)
3 (3) Q A (CP)
2,3 (4) P &Q 2,3 &I
1,2,3 (5) R 1,4 MP
1,2 (6) Q→R 3–5 CP
1 (7) P → (Q → R) 2–6 CP
This proof of Exp demonstrates nested CPs. The converse is left as a B exercise.
41 P ∨ Q, −P ∨ R, −Q ∨ R ` R
1 (1) P ∨Q A
2 (2) −P ∨ R A
3 (3) −Q ∨ R A
2 (4) P →R 2 Imp
3 (5) Q→R 3 Imp
1,2,3 (6) R∨R 1,4,5 CD
1,2,3 (7) R 6 Idem
Since CD requires three premisses, three lines must be cited in the justification.
Practice with derivations trains the eye to recognize logical form; for instance, to spot that a very complicated-looking argument is really just a
case of a simple form such as MP, MT, or HS.
Don’t forget that in the following statement of the rules, the propositional variables P ,
Q, R, etc., could be either atomic or molecular propositions.
Basic Metarules
Assumption (A):
Any proposition whatever may be assumed, so long as we do not forget that it was
an assumption.
v-Elimination (vE):
If P ` R and Q ` R are provable separately, then we can conclude P ∨ Q ` R.
Reiteration (R):
Any line may be repeated anywhere in a proof, so long as we carry its dependencies
with it.
Basic Sequents
Modus Ponens (MP):
P, P → Q ` Q
&-Introduction (&I):
P, Q ` P & Q
P, Q ` Q & P .
&-Elimination (&E):
P &Q ` P
P &Q ` Q
v-Introduction (vI):
P `P ∨ Q
P ` Q ∨ P.
One-Way Sequents
Modus Tollens (MT):
P → Q, −Q ` −P
Denial (Den):
−P ` −(P & Q)
Convergence (Conv):
P ∨ Q, P → R, Q → R ` R
Absorption (Abs):
P → Q ` P → (P & Q)
Interderivability Sequents
de Morgan’s Laws (deM):
−(P & Q) a` −P ∨ −Q
−(P ∨ Q) a` −P & − Q.
Transposition (Trans):
P → Q a` −Q → −P
Idempotency (Idem):
P & P a` P
P ∨ P a` P .
Commutativity (Comm):
P & Q a` Q & P
P ∨ Q a` Q ∨ P
Associativity (Assoc):
P & (Q & R) a` (P & Q) & R
P ∨ (Q ∨ R) a` (P ∨ Q) ∨ R
Dominance (Dom):
P & (Q ∨ −Q) a` P
P & (Q & − Q) a` Q & − Q
P ∨ (Q ∨ −Q) a` Q ∨ −Q
P ∨ (Q & − Q) a` P
Distributivity (Dist):
P & (Q ∨ R) a` (P & Q) ∨ (P & R)
P ∨ (Q & R) a` (P ∨ Q) & (P ∨ R)
Equivalence (Equiv):
P ↔ Q a` (P & Q) ∨ (−P & − Q).
Exportation (Exp):
(P & Q) → R a` P → (Q → R)
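Every interderivability in this summary corresponds to a truth-table equivalence (as Chapter 5 will make precise), so the list can be audited mechanically. A Python sketch checking a sample of the rules above:

```python
from itertools import product

def imp(p, q):
    # material conditional
    return (not p) or q

for P, Q, R in product([True, False], repeat=3):
    # Transposition: P -> Q  -||-  -Q -> -P
    assert imp(P, Q) == imp(not Q, not P)
    # Distributivity: P & (Q v R)  -||-  (P & Q) v (P & R)
    assert (P and (Q or R)) == ((P and Q) or (P and R))
    # Equivalence: P <-> Q  -||-  (P & Q) v (-P & -Q)
    assert (P == Q) == ((P and Q) or (not P and not Q))
    # Exportation: (P & Q) -> R  -||-  P -> (Q -> R)
    assert imp(P and Q, R) == imp(P, imp(Q, R))
print("Sampled interderivability rules verified on all eight assignments")
```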
Many fallacies of informal reasoning are discussed in books on rhetoric and critical thinking. (See [6, 16, 8] for good introductions
to critical thinking and fallacy theory.) There are a few fallacies of deductive reasoning that
occur rather often as well, and I will mention a few of them here. In Chapter 5 we will learn
systematic techniques for diagnosing fallacious deductive reasoning.
1 (1) P ∨Q A
1 (2) P 1 vE (???)
This is a glaring error. You can convince yourself that this pattern of reasoning is invalid
by considering the following example:
P → Q, Q ` P (????) (3.8)
This is the fallacy of affirming the consequent. Suppose, for example, that P is "Today is Tuesday" and Q is "We have logic class today."
Even if the premisses are correct, the conclusion could still be false, simply because logic
might be held on other days of the week besides Tuesday. This error is seductive, however,
since it looks so much like the valid rule MP.
P → Q, −P ` −Q. (3.9)
This is the fallacy of denying the antecedent. Again, the problem is just that we might have logic on other days of the week.
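Both fallacies are exposed by a counterexample assignment, and a short search finds one for each. A Python sketch (the Tuesday example corresponds to the row with P false and Q true):

```python
from itertools import product

def imp(p, q):
    return (not p) or q

# Affirming the consequent: P -> Q, Q |- P  (invalid)
ac = [(P, Q) for P, Q in product([True, False], repeat=2)
      if imp(P, Q) and Q and not P]          # premisses true, conclusion false
# Denying the antecedent: P -> Q, -P |- -Q  (invalid)
da = [(P, Q) for P, Q in product([True, False], repeat=2)
      if imp(P, Q) and (not P) and Q]        # premisses true, conclusion false
print(ac, da)  # each non-empty: a counterexample exists for each fallacy
```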
(a) P, Q ` P → Q
(b) Prove the following without the use of R. Your proof should be only three lines
long.
P `Q→P
(c) P → (Q ∨ R), −(Q ∨ R) ` −P
(d) P ∨ −Q, Q ` P
(e) (Q → R), −(Q → R) ` R
(f) (M ∨ N ) → Q ` (M ∨ N ) → (Q & (M ∨ N ))
(g) P → (Q → R), (Q → R) → T ` P → T
(h) −(P & − Q) ` −P ∨ Q
(i) −(P ∨ −(R → S)) ` −P & (R → S)
(j) (X ∨ Y ) → Q ` (−X & − Y ) ∨ Q
(k) −M ∨ N, M ` N
(l) −(X → Y ) → −P, P ` (X → Y )
(m) P → (Q ∨ Q) ` −(P & − Q)
(n) P & (Q → R) ` (P & − Q) ∨ (P & R)
(o) X → (P ↔ Q), (P & − Q) ∨ (−P & Q) ` −X
B
1. Find natural deduction proofs for the following named sequents:
(a) P → Q, R → S, −Q ∨ −S ` −P ∨ −R (DD)
(b) −P ∨ Q ` P → Q (Imp) (Hint: use DS)
(c) P → Q ` −(P & − Q) (Imp)
2. The following questions require you to find natural deduction proofs for a variety of
sequents:
(a) P, Q ` P ↔ Q
(b) P ↔ Q, Q ↔ R ` P ↔ R
(c) (P → R), (Q → R) ` (P & Q) → R
(d) −P ∨ R, −(Q & − S), −P → Q, −R ` S
(e) R, R ↔ P, −Q → −P ` Q
(f) P → Q, Q → R, R → S, −S ∨ −R ` −P
(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ` S
(h) P → (Q ∨ R), Q → S, R → T, −(S ∨ T ) ` −P
(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ` X ∨ Y
(j) P → R, −Q → S, −(R ∨ S) ` −(Q → P )
C
1. MT is a basic pattern of inference which is very important in scientific reasoning.
Suppose we have a new scientific theory, and suppose this entire theory is expressed as
a single proposition T . Suppose also the theory deductively entails a certain prediction
P . Finally, suppose that (to our great disappointment) experimentation shows that P
is false. What does MT imply about T ? (Philosophers of science refer to this process
as falsification.) Does this mean that we have to junk all of T and start all over again?
If not, why not?
(a) P ∨ (Q ∨ R) ` (P ∨ Q) ∨ R
3. Suppose we had a system of Natural Deduction that was exactly the same as the
one we use in this book, except that, in the place of Modus Ponens (MP), it used
Disjunctive Syllogism (DS) (P ∨ Q, −Q ` P ) as a Basic Deductive Rule.
(a) Show that in this new system we can derive MP using only DS and other Basic
Rules, so long as we define P → Q as −P ∨ Q. (If we don’t have MP as a Basic
Rule, we have to treat → as a defined symbol, since in our usual system → is
defined implicitly by the way it behaves in MP.)
(b) Now suppose, instead, that we define P → Q as −(P & − Q), as is done in
numerous logic texts. (See, e.g., Copi and Cohen [6].) This definition captures
the intuition that to say P implies Q is to say that we never simultaneously have
P and −Q. Show that in this system we can, as well, derive MP using Basic
Rules. Hint: try RAA.
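Since truth tables are introduced in the next chapters, it may be reassuring to check by brute force that both proposed definitions of P → Q agree truth-functionally with the conditional. Here is a small Python sketch (the helper name `implies` is ours, not the text's):

```python
from itertools import product

def implies(p, q):
    """The truth-functional conditional P -> Q."""
    return (not p) or q

# Check, on all four input rows, that both proposed definitions agree:
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == ((not p) or q)         # P -> Q  as  -P v Q
    assert implies(p, q) == (not (p and not q))    # P -> Q  as  -(P & -Q)

print("both definitions match the conditional on every row")
```

This confirms that either definition is a truth-functionally adequate replacement for →; the exercise asks for the stronger, proof-theoretic result.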
Chapter 4
Techniques and Applications
In this chapter we introduce several techniques that greatly simplify propositional natural
deduction, and indicate some of its possible applications.
Don’t forget that in this chapter the propositional variables P , Q, R, etc., can be either
atomic or molecular unless stated otherwise.
4.1.1 Exercises
A
1. Identify the simple rule used in each of the following deductions:
(a) (X ∨ Y ) → (Y ∨ Z), (X ∨ Y ) ` (Y ∨ Z)
(b) ((X ∨ Y ) → Q) → (P & Q), −(P & Q) ` −(((X ∨ Y ) → Q) → (P & Q))
(c) P ` ((M ↔ N ) ∨ (M ↔ (R & S))) ∨ P
This is perfectly okay, so long as you don’t get ahead of yourself and make a mistake.
Just as you can use more than one rule to get a given line, you can use the same rule
more than once. For instance, you could use Imp twice on the same line as follows:
If you have the slightest doubt about the steps you are taking, it is better painstakingly
to write them all out one by one and thereby reduce the chance that you will make a
mistake. Remember, too, that on tests and assignments the onus is on you to convince your
instructor that you understand how the proof works!
4.2.1 Exercises
A
1. Supply the missing justification for each step marked “?” in the following proof fragments:
(a) (n) P → (Q → P ) TI
(n+1) −P ∨ (−Q ∨ P ) ?
(b) (n) P → (Q & R)
(n+1) (−P ∨ Q) & (−P ∨ R) ?
(c) (n) −P → (Q & R)
(n+1) P ∨ (Q & R) ?
(d) (n) (P → Q) & (P → R)
(n+1) −P ∨ (Q & R) ?
(e) (n) −M → N
(n+1) −N
(n+2) M ?
(f) (n) (P ∨ −P ) ∨ (Q ∨ −Q)
(n+1) (−P ∨ Q) ∨ (−Q ∨ P ) ?
4.3 Theorems
In the last chapter we introduced the concept of the theorem, meaning a wff that can be
proven with no assumptions left over. If P is a theorem, we assert
` P, (4.1)
(read as “Provable P ” or “P is a theorem”), which means that P can stand on its own,
without the support of any premisses. In the next chapter we will see that the theorems of
propositional logic are just the same as its tautologies, which are the formulas that are T
in all lines of their truth tables.
For now, I will give a few simple examples of theorems, to demonstrate typical tech-
niques that can be used to prove them.
42 ` P → P
Here are two proofs of this elementary theorem. The first uses CP and R:
1 (1) P A (CP)
1 (2) P 1 R
(3) P → P 1–2 CP
The second dispenses with R:
1 (1) P A
(2) P → P 1–1 CP
This illustrates the power of RAA: if you can’t think of any other way to get a result,
assume its negation and try to derive a contradiction.
Here’s another way to get ExM:
1 (1) P A (CP)
(2) P →P 1–1 CP
(3) P ∨ −P 2 Imp,Comm
This theorem is called the Law of Excluded Middle because it says that every proposition
is either true or false, with nothing in between. This is sometimes expressed in Latin as
tertium non datur — “a third is not given”.
Here is another example: Show that any given proposition P is implied by any other
proposition Q. I am going to give five ways of proving this theorem, in order to demonstrate
the variety of proof strategies that can be used for theorems.
44 ` P → (Q → P )
1 (1) P A (CP)
2 (2) Q A (CP)
1 (3) P 1R
1 (4) Q→P 2–3 CP
(5) P → (Q → P ) 1–4 CP
This is a straightforward use of CP. The use of R in line 3 is redundant; see if you can do
it without using R.
1 (1) P A (CP)
1 (2) −Q ∨ P 1 vI
1 (3) Q→P 2 Imp
(4) P → (Q → P ) 1–3 CP
The above proof reminds us of how we can use vI to bring new propositional letters into a
proof.
The above proof illustrates the idea that if we want to prove a conditional, we might try
to prove the negation of its consequent first and then use Transposition to get the desired
conditional.
The above proof is a bit longer than the others, but it illustrates the fact that if all else fails
we can use RAA; that is, assume the negation of what you want to prove, and then derive
a contradiction by brute force.
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ −Q 1 vI
(3) −P ∨ (P ∨ −Q) 2 Comm,Assoc
(4) P → (P ∨ −Q) 3 Imp
(5) P → (Q → P ) 4 Comm,Imp
In line 1 of the above proof we introduce a theorem, the Law of Excluded Middle. Theorem
Introduction is discussed further below; the basic idea is that a theorem can be stated at
any point in a proof without justification. We then expand line 1 using vI, rearrange the
results, and apply Imp to get what we wanted. This is the deepest proof, since it builds
directly from ExM. In fact, it would be possible to derive all theorems, and thereby all of
deductive logic, directly from ExM if we wanted to.
We have seen the proofs of the first two; the proof of NC is left as an A exercise.
4.4 Substitution
Under certain circumstances, it is valid to substitute one wff for the whole or part of another
wff. There are two kinds of substitution, uniform and nonuniform.
1. Definition Introduction (DefIntro): Any defined symbol can be replaced by its defini-
tion, or vice versa.
So far, the only symbol we are taking as defined in terms of the basic set of symbols is
↔. But there is no reason that we could not define other symbols as combinations of
basic symbols or other previously defined symbols. For instance, suppose we stipulate
that P ∗ Q := (P ∨ Q) → (P & Q). Then any occurrence of P ∗ Q could be substituted
for by its definition, and any wff that was interderivable with its definition could be
substituted for by P ∗ Q. Any such usage can be justified by citing the line number
that was operated on and then putting down either DefIntro or Def∗.
2. Sequent Introduction (SI): Any uniform substitution instance of a proven sequent can
be introduced in a proof.
We have already used SI many times in Chapter 3, when we applied sequents such as
MT, HS, DS, and so forth in the proofs of further sequents. Do not forget that the
substitution has to be uniform; only uniform substitution instances of sequents can
be guaranteed to be valid. If the sequent you are using has a special name (such as
MP or HS) then just cite the name in the justification column; otherwise, write SI
and give a reference to where the sequent was previously proven (such as in an earlier
exercise).
Example of TI:
45 P → Q, −P → R ` Q ∨ R
1 (1) P →Q A
2 (2) −P → R A
(3) P ∨ −P ExM
1,2 (4) Q∨R 1,2,3 CD
4.5.1 Exercises
A
1. Prove the Law of Non-Contradiction, ` −(P & − P ).
B
1. Prove the following theorems:
(a) ` (P & − P ) → Q
(b) ` P → (−P → Q)
(c) ` P ∨ (P → Q)
(d) ` (P → Q) ∨ (Q → P )
(e) ` (P → Q) ∨ (Q → R)
(f) ` (−P → P ) → P
C
1. Prove ExM using only Basic Rules.
Deduction Theorem:
The sequent P1 , P2 , . . . , Pn ` Q is valid iff ` (P1 & P2 & . . . & Pn ) → Q.
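For any particular sequent, the two sides of the Deduction Theorem can be cross-checked by brute force over the truth table. A Python sketch (the helper names are ours, and this checks one instance, not the general theorem):

```python
from itertools import product

def valid(premisses, conclusion, n):
    """Valid iff no row makes all premisses T and the conclusion F."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premisses))

def tautology(wff, n):
    return all(wff(*row) for row in product([True, False], repeat=n))

implies = lambda p, q: (not p) or q

# P -> Q, P |- Q is valid, and ((P -> Q) & P) -> Q is a tautology:
lhs = valid([implies, lambda p, q: p], lambda p, q: q, 2)
rhs = tautology(lambda p, q: implies(implies(p, q) and p, q), 2)
print(lhs, rhs)
```

Both sides come out the same, as the Deduction Theorem requires.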
In line 2 we apply the given sequent P ` R. It doesn’t have a special name (like MT, HS,
etc.), but we can substitute it into the proof as required. Note that we have to cite line 1,
since this is the line that the introduced sequent is applied to.
P a` Q if and only if ` P ↔ Q.
(1) P →Q TI
(2) Q→P TI
(3) (P → Q) & (Q → P ) 1,2 &I
(4) P ↔Q 3 Def↔
The main use of the Deduction Theorem, for the purposes of this introductory course,
is simply the fact that whenever we have an interderivability result, we have an equivalence;
and such equivalences can be used freely in transforming expressions into equivalent forms
which may help us get to where we want to go in a deduction.
4.6.1 Exercises
B
4.7.1 Exercises
B
1. Use Boolean algebra to demonstrate the following equivalence theorems:
4.8 Filling in the Gaps in an Argument
We can put down all of the premisses we have on hand, and try a few steps to see where
they lead:
1 (1) B ∨A A
2 (2) B→R A
3 (3) A→S A
4 (4) −R A
2,4 (5) −B 2,4 MT
1,2,4 (6) A 1,5 DS
We quickly come to the conclusion that it was Alice who paid the bill, and we stop there
because this answers the question. We have therefore established the sequent B ∨ A, B →
R, A → S, −R ` A. Remember, in this sort of problem we usually don’t know in advance
what sequent we are going to be proving; rather, we explore a set of premisses to see where
they will lead. Note, also, that one of the premisses (line 3) turned out to be irrelevant; and
that, of course, is just like real life, where we often have to sort out the useful information
from the noise.
Example: We are given the following facts: If either Paul or Peter did it then either
Randa helped or Michael wasn’t entirely innocent. If the weapon was in Paul’s office he did
it. On the other hand, if the weapon wasn’t in Paul’s office, then Michael wasn’t entirely
innocent. Can we determine whether Randa was involved?
Symbolization:
P := Paul did it
Q := Peter did it
R := Randa helped
M := Michael was innocent
N := The weapon was found in Paul’s office
1 (1) (P ∨ Q) → (R ∨ −M ) A
2 (2) N →P A
3 (3) −N → −M A
There is no way we can draw a definite conclusion about the truth or falsity of R from this,
since there is no way to extract it from its conditional. But now suppose that independent
evidence shows that Michael has an alibi — he was out of town for a meeting at the time
of the incident. Then we can continue:
4 (4) M A
3 (5) M →N 3 Trans
2,3 (6) M →P 2,5 HS
2,3,4 (7) P 4,6 MP
1 (8) (P → (R ∨ −M )) & (Q → (R ∨ −M )) 1 SI (Exercise 4.7.1, 1(b))
1 (9) P → (R ∨ −M ) 8 &E
1,2,3,4 (10) R ∨ −M 7,9 MP
1,2,3,4 (11) R 4,10 DS
Chapter 5
Propositional Semantics: Truth Tables
(P ∗ Q)
T ? T
T ? F
F ? T
F ? F
There are two ways to fill each space in the centre column and four lines per table,
meaning that the total number of ways to fill in the centre column will be just 2 × 2 × 2 × 2 =
2^4 = 16. In other words, there are 16 possible truth functions of two propositional variables.
While the ones we define above are the ones most commonly used in elementary logic, others
are possible, and some (such as exclusive OR) have practical applications in circuit theory.
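The counting argument can be confirmed by enumeration. In this Python sketch, each truth function is represented by the column it produces for the input rows TT, TF, FT, FF:

```python
from itertools import product

rows = list(product([True, False], repeat=2))   # the four (P, Q) input rows

# A way of filling the centre column is one T/F choice per row:
functions = list(product([True, False], repeat=len(rows)))
print(len(functions))   # 16 possible truth functions of two variables

# A few familiar columns, in the order TT, TF, FT, FF:
AND = tuple(p and q for p, q in rows)    # (True, False, False, False)
OR  = tuple(p or q for p, q in rows)     # (True, True, True, False)
XOR = tuple(p != q for p, q in rows)     # (False, True, True, False) -- exclusive OR

assert AND in functions and OR in functions and XOR in functions
```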
To evaluate a wff by truth table:
1. write out the formula neatly, with even spacing between the symbols;
2. enter all possible combinations of truth values under each propositional letter;
3. starting with the innermost brackets, work outward until you come to the main connective.
Arbitrarily complicated wffs can be evaluated mechanically by truth tables. Truth
tables therefore constitute what computer scientists call an effective procedure — meaning
simply a recipe that can be followed in a completely unambiguous manner that is guaranteed
to produce a unique result. The practical drawback of truth tables is that the number of
lines in a table for a formula containing n propositional variables is 2^n. This is a rapidly
increasing function; recall the old fable about the wise man who demanded ∑_{i=1}^{64} 2^i grains
of rice from his king in return for a favour. (See [3]. A quick estimation will show why
the king may have been tempted to cut off the wise man's head.) Large truth tables may
therefore take impractically long times to evaluate.
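The quick estimation mentioned above can be done in a few lines of Python, along with a look at how fast 2^n grows:

```python
# The wise man's total: 2 + 4 + ... + 2**64 grains of rice.
grains = sum(2**i for i in range(1, 65))
print(grains)                # 36893488147419103230, i.e. 2**65 - 2
assert grains == 2**65 - 2

# Truth-table sizes grow the same way: 2**n lines for n variables.
for n in (1, 2, 3, 6, 10, 20):
    print(n, "variables:", 2**n, "lines")
```

Twenty propositional variables already demand over a million lines, which is why short-cuts matter.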
Many of the truth table problems encountered in elementary propositional logic can be
solved by means of short-cuts, and we will consider these below. There is no guarantee that
a short cut exists for a given problem. On the other hand, if in an introductory course you
find yourself faced with the task of evaluating a formula that would contain 8, 16, 32, or
more lines if fully written out, take a careful look at it before beginning blindly to fill in
the whole table. There is probably a short-cut!
Example: do the full truth table for the wff −(P → (−Q ∨ R)) & P .
Write out the formula as neatly as possible, with fairly wide, even spacing between the
symbols. Then enter all the possible combinations of truth values for the three letters in
the formula, P , Q, and R. (You can think of these as “inputs”.) There is nothing sacred
about the order in which I have written the inputs below, but it is a standard order that
is commonly used in logic, and it would be helpful to stick to this order. Since P appears
twice in the formula you will have to enter its input values twice. Whenever a letter appears
more than once in a table, it is essential that you write the values in its column of input
values in the same order for each instance of the letter; otherwise, the truth table will be
meaningless. There will be 2^3 = 8 lines in the table since there are three variables. Then
proceed to evaluate the various subformulas in the order indicated above, ending with the
main connective, which in this case is the & symbol.
− (P → (− Q ∨ R)) & P
F T T F T T T F T
T T F F T F F T T
F T T T F T T F T
F T T T F T F F T
F F T F T T T F F
F F T F T F F F F
F F T T F T T F F
F F T T F T F F F
We see that the formula is T only in the case in which P and Q are T and R is F.
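The same table can be generated mechanically. Here is a Python sketch of the example (the helper `implies` is our own, and the input rows follow the standard order, T before F):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Evaluate -(P -> (-Q v R)) & P on all 2**3 = 8 rows.
true_rows = []
for p, q, r in product([True, False], repeat=3):
    value = (not implies(p, (not q) or r)) and p
    print(p, q, r, "->", value)
    if value:
        true_rows.append((p, q, r))

print(true_rows)   # the single T line of the table
```

The only row collected is P = True, Q = True, R = False, matching the hand-computed table.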
5.3.1 Tautology
A wff is said to be tautologous, or to be a tautology, if and only if it comes out T under all
possible combinations of input values. Tautologies can also be referred to as logical truths
or necessary truths.
Example: Law of Excluded Middle
In the previous chapter we proved the Law of Excluded Middle as a theorem. The table
shows that it is a tautology as well, as it should be.
P ∨ − P
T T F T
F T T F
Example: (P → (Q → P ))
In the previous chapter we gave several proofs of this formula as a theorem. One can easily
see that it is a tautology as well. The only way it could be F would be if P were T and
(Q → P ) were F. But if P is T, then (Q → P ) is T and the whole wff is T. Hence, it is not
possible for this wff to be false.
5.3.2 Inconsistency
A wff is said to be an inconsistency, or to be inconsistent, if it comes out F for all possible
combinations of input values. Such a wff is sometimes loosely called a contradiction, but
that term is properly reserved for an inconsistent wff such as P & − P containing only one
propositional variable.
Example: Law of Non-Contradiction
The Law of Non-Contradiction states that it is never the case that a proposition is both
true and false; i.e., that (P & − P ) is always false. This is supported by the truth table:
P & − P
T F F T
F F T F
5.3.4 Contingency
A wff is said to be contingent or to be a contingency iff it has occurrences of both T and
F in its truth table. Its truth or falsity in a given situation therefore depends on the facts,
not on logical form. The simplest example of a contingency is just P ; other examples are
P & Q, P ∨ Q, and P → Q.
5.3.5 Consistency
A wff is said to be consistent if it is T for at least some of its input values; in other words, a
wff is consistent, or is a consistency, iff it is either a tautology or a contingency. Examples:
P , P → Q, P → P .
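These classifications are easy to mechanize. In this hypothetical Python helper (names ours), a wff is represented as a function of its propositional letters:

```python
from itertools import product

def classify(wff, n):
    """Classify an n-variable wff given as a Python truth function."""
    column = [wff(*row) for row in product([True, False], repeat=n)]
    if all(column):
        return "tautology"
    if not any(column):
        return "inconsistent"
    return "contingent"

def consistent(wff, n):
    # Consistent iff not inconsistent, i.e. T on at least one line.
    return classify(wff, n) != "inconsistent"

print(classify(lambda p: p or (not p), 1))      # ExM: tautology
print(classify(lambda p: p and (not p), 1))     # P & -P: inconsistent
print(classify(lambda p, q: (not p) or q, 2))   # P -> Q: contingent
```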
5.5.1 Implication
Proposition P is said to imply Q iff P → Q is a tautology.
5.5.2 Equivalence
One proposition P is equivalent to another, Q, iff P ↔ Q is a tautology.
5.5.3 Contrariety
A proposition P is contrary to Q if they are never both T. They could, however, both be
false. Two propositions P and Q are contraries iff −(P & Q) is a tautology.
Examples: The propositions P & Q and P & − Q are contraries, since they are both F
when P is F, but never both T. It is easily seen that the negation of their conjunction is a
tautology.
5.5.4 Subcontrariety
Two propositions P and Q are subcontraries if they are never both false. They could,
however, both be true. Formally, P and Q are subcontrary iff P ∨ Q is a tautology. The
concept of subcontrariety is not widely used these days, though it was important in medieval
logic.
Examples: The propositions P ∨ Q and P ∨ −Q are subcontraries since one of them
is T even if the other is F; however, they are both T if P is T. It is also easily seen that
their disjunction is a tautology.
5.5.5 Inconsistency
Two propositions P and Q are inconsistent iff they never agree in truth value. This is the
same thing as saying that −(P ↔ Q) is a tautology.
5.5.6 Consistency
Two propositions P and Q are consistent iff their conjunction P & Q is consistent.
5.5.7 Independence
Two propositions are said to be independent iff none of the above relations apply to them.
The simplest example of independent propositions are two atomic propositions, say P and
Q.
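The binary relations of this section can likewise be tested by machine. A Python sketch using the text's own examples (helper names ours):

```python
from itertools import product

ROWS2 = list(product([True, False], repeat=2))

def contrary(f, g):           # never both true
    return all(not (f(p, q) and g(p, q)) for p, q in ROWS2)

def subcontrary(f, g):        # never both false
    return all(f(p, q) or g(p, q) for p, q in ROWS2)

def inconsistent_pair(f, g):  # never agree in truth value
    return all(f(p, q) != g(p, q) for p, q in ROWS2)

# The text's examples:
assert contrary(lambda p, q: p and q, lambda p, q: p and not q)
assert subcontrary(lambda p, q: p or q, lambda p, q: p or not q)
print("examples check out")
```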
5.7.1 Equivalence
A set of wffs is said to be equivalent iff all its members are pair-wise equivalent.
5.7.2 Inconsistency
A set of wffs is said to be inconsistent iff its conjunction is always false; or, equivalently, iff
the negation of its conjunction is a tautology. E.g.: {P, P → Q, −Q}.
5.7.3 Consistency
A set of wffs is said to be consistent iff it is not inconsistent; this is the same thing as saying
that its conjunction is consistent; i.e., either a tautology or a contingency.
5.7.4 Independence
A set of two or more propositions is independent iff its members are independent
pairwise.
P P → Q Q
T T T T T
T T F F F
F F T T T
F F T F F
(I use a double vertical line to separate the conclusion from the premisses.) In each line
in which the conclusion is F, at least one of the premisses is F as well. This is enough to
show that the sequent is valid.
By contrast, here is the table for Affirming the Consequent:
Q P → Q P
T T T T T
F T F F T
T F T T F
F F T F F
In the third line, both premisses are T while the conclusion is F. This proves that
Affirming the Consequent is invalid. You need only find one line in which this happens in
order to show invalidity. Note very carefully that we cannot conclude that the argument
is valid just because there are some lines (such as the first) in which everything comes up
true. That is just like a buggy computer program that occasionally outputs the correct
answer. The program is correct only if it never makes a mistake, not if it is sometimes but
not always right.
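Checking validity this way is purely mechanical. A Python sketch (helper names ours) applied to Modus Ponens and to Affirming the Consequent:

```python
from itertools import product

def valid(premisses, conclusion, n):
    """Valid iff no row makes all premisses T and the conclusion F."""
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premisses) and not conclusion(*row):
            return False
    return True

implies = lambda p, q: (not p) or q

# Modus Ponens: P, P -> Q |- Q
print(valid([lambda p, q: p, implies], lambda p, q: q, 2))   # True

# Affirming the Consequent: Q, P -> Q |- P
print(valid([lambda p, q: q, implies], lambda p, q: p, 2))   # False
```

The second call finds the invalidating line (P false, Q true) and reports the sequent invalid, just as the table does.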
Here is an example that may seem counter-intuitive. Is P, −P ` Q valid or invalid?
This is just ExF, which we already know how to prove using RAA. But the truth table is
instructive:
P − P Q
T F T T
T F T F
F T F T
F T F F
The sequent is valid since there is no line in which the conclusion is false and all the
premisses are T. But the sequent can never be sound, meaning valid with all premisses true.
Suppose now we have a sequent in which the conclusion is a tautology, such as P →
Q ` P → P . Here is the table:
P → Q P → P
T T T T T T
T F F T T T
F T T F T F
F T F F T F
As before, there is no line with the premiss T and the conclusion F, and the sequent is
valid. But we need hardly have done the whole table to see this, since the conclusion is a
tautology and so cannot be F on any line; no invalidating line is possible. In general, where
F is any inconsistent wff, T is any tautology, and X is any wff whatever:
F ` X (5.1)
X ` T (5.2)
The sequent P1, P2, . . . , Pn ` Q is valid iff the conditional (P1 & P2 & . . . & Pn) →
Q is a tautology.
Our method for checking for validity is designed to take advantage of this fact.
5.8.2 Short-cuts
The truth-table check of validity is what logicians call an effective procedure, meaning that
it is always guaranteed to produce a definite answer. However, as noted, the size of a truth
table increases very rapidly as a function of the number of propositional variables involved;
the table for a problem with 6 letters, for instance, will have 2^6 = 64 lines. It would be
very good, therefore, to have short-cuts that would make it unnecessary for us to write out
every line of the table in order to check the validity of a sequent.
The key idea of the short-cut method is to work backwards from the conclusion to the
premisses. Recall that to show that a sequent is valid we have to show that there are no
lines in the truth table where all the premisses are T and the conclusion F; conversely,
to show that a sequent is invalid it is sufficient to find just one line in its table with all
premisses T and the conclusion F. What we do, therefore, is set up the truth table for the
sequent in the manner suggested above, with the conclusion set off to the right. Then set
the conclusion to be F, and see whether this forces any or all of the premisses to be T. Some
sequents can be checked in one line; some require more than one.
There can be no guarantee that a short-cut can be found for every problem; some may
be irreducibly complex and will require a full or nearly full truth table for their analysis.
But it is always a good idea to see if a short-cut can be found before starting to write out
a huge truth table.
Here are a few examples to illustrate the method.
Example: show that MP is valid by the short-cut method.
P P → Q Q
T T F F F
F F T F F
The only lines that need concern us are the two lines in which the conclusion is F.
However, we immediately see that in both of these lines one of the premisses is F; the
sequent is therefore valid. Writing what I have shown here constitutes a complete solution
to the problem of testing the validity of MP; it is not necessary to write out the whole table.
Example: show that Affirming the Consequent is invalid by the short-cut method.
Q P → Q P
T F T T F
Setting the conclusion P to F, we can still make both premisses T by letting Q be T; this
one line is enough to show that the sequent is invalid.
Example: check P ∨ Q, P → R, Q → S ` R ∨ S.
P ∨ Q P → R Q → S R ∨ S
F F F F T F F T F F F F
This can be checked with just one line. Set the conclusion to be F. That forces both
R and S to be F. The values of P and Q remain to be determined. But any possible
invalidating line has to have P → R and Q → S come out T, which means that P and
Q would have to be F. However, this makes the first premiss P ∨ Q F as well. Hence, there
is no invalidating line, and the argument is valid.
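The same verdict can be confirmed by brute force over all sixteen rows; a Python sketch that searches for an invalidating line:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# P v Q, P -> R, Q -> S |- R v S: look for rows with all premisses T
# and the conclusion F.
counterexamples = [
    (p, q, r, s)
    for p, q, r, s in product([True, False], repeat=4)
    if (p or q) and implies(p, r) and implies(q, s) and not (r or s)
]
print(counterexamples)   # empty, so the sequent is valid
```

The search comes back empty, agreeing with the one-line short-cut argument.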
There will sometimes be sequents in which you have to check more than one line.
Example: check (−P ∨ −Q) → R, −R ` P & Q.
(− P ∨ − Q) → R    − R    P & Q
 F T T T F  F  F    T F    T F F
 T F T F T  F  F    T F    F F T
 T F T T F  F  F    T F    F F F
In each of the three lines in which the conclusion P & Q is F, the first premiss comes out
F as well, so there is no invalidating line and the sequent is valid.
The short-cut idea also works for classifying a single wff. Consider:
(P & − P ) → (P → (Q ∨ R))
     F      T
The antecedent P & − P is F on every line, so the whole conditional is T on every line:
the wff is a tautology.
The general approach to this sort of problem is to see what it would take to make the
wff T and F. If it can be both T and F, then it is contingent; if it can only be F, it is
contradictory; and if it can only be T, it is a tautology.
By a “long” conjunction or disjunction I simply mean one containing more than two formu-
las. We accept conjunctions and disjunctions of more than two formulas as wffs. However,
truth table analysis as we have set it up here (and as it is usually done in most elementary
texts) deals only with & and ∨ as binary connectives. In order to evaluate a formula
containing a long conjunction or disjunction, therefore, we have to bracket off the terms
two at a time, and arbitrarily pick one of the connectives as the main connective. In fact,
which one we pick makes no difference to the outcome, although we cannot prove that fact
in all generality in this course.
Example: Evaluate P ∨ Q ∨ R when P and Q are T and R is F, and show that we get
the same result regardless of what order we associate the terms.
P ∨ (Q ∨ R) (P ∨ Q) ∨ R
T T T T F T T T T F
(I’ve only shown two of the possible ways to associate this formula; the others can
be found by applying commutation.) In the left-hand formula, the first ∨ is the main
connective; in the right-hand formula, the last ∨ is the main connective.
In this course you will rarely if ever have to evaluate a conjunction or disjunction with
more than two terms, so this is not something that you will frequently need to worry about.
At the beginning of this chapter I mentioned that there are other possible truth tables for
two propositional variables than the familiar ones we use in elementary logic. The latter are
chosen (i) because they are expressively complete, and (ii) because they come reasonably
close to capturing the sense of the logical connectives that we use in ordinary language.
It is, however, possible to do the logic of propositions with only one symbol, and for this
purpose one can use either the Sheffer or the Nicod strokes. As the practice examples at
the end of the chapter will convince you, these connectives are almost entirely useless for
practical calculations in logic. But their existence does tell us something about how logic
works, and they have important applications in circuit theory.
The Sheffer stroke is defined as “not both P and Q”, and has the following truth table:
P | Q
T F T
T T F
F T T
F T F
The Nicod stroke is defined as “neither P nor Q,” and has the following table:
P ↓ Q
T F T
T F F
F F T
F T F
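Both strokes are easy to define and experiment with. A Python sketch showing one sample equivalence, negation from a single stroke (expressing the other connectives is left to the C exercises):

```python
def sheffer(p, q):   # "not both P and Q"
    return not (p and q)

def nicod(p, q):     # "neither P nor Q"
    return not (p or q)

# Negation is definable from either stroke alone:
for p in (True, False):
    assert (not p) == sheffer(p, p)
    assert (not p) == nicod(p, p)

print("-P is expressible as P|P and as P (Nicod) P")
```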
(a) −(P → Q)
(b) −(P ↔ Q)
(c) (−P ↔ Q)
(d) (−P & − Q) ∨ (P & Q)
(a) P → P
(b) −P → P
(c) P → (−P → P )
(d) (P ∨ −P ) → (P & − P )
4. Check the following sequents for validity. Use short-cuts where they are suitable.
(a) P ∨ Q, −Q ` P (DS)
(b) P → Q, −Q ` −P (MT)
(c) P → Q ` P → (P & Q) (Abs)
(d) P → Q, −P ` −Q (Denying the Antecedent)
B
1. Use truth tables to classify the following wffs as either tautologous, inconsistent, or
contingent:
(a) P ∨ (Q & R)
(b) P → (R & S)
(c) P → (Q → P )
(d) (P → R) → (P → (P & R))
(e) (P → (Q → P )) → R
(f) (P ↔ Q) ↔ (Q ↔ −P )
(g) P → (R & − R)
(h) (P → R) → −P
(i) (P → (Q & R)) & (P → (Q & − R))
(j) −(P → Q) & − (Q → P )
2. Use truth tables (with short-cuts where suitable) to check the following sequents for
validity:
(a) P → Q, Q → R ` P → R (HS)
(b) P ↔ Q, Q ↔ R ` P ↔ R
(c) P → Q, R → Q ` P → R
(d) P → Q ` −Q → P
(e) P → Q, R → S, −Q ∨ −S ` −P ∨ −R (DD)
(f) P ∨ Q, P → R, Q → R ` R (Conv)
(g) P → Q ` P → (P ∨ Q)
(h) P → Q ` P → (P & Q)
(i) P ∨ R, P → (R ∨ S), −S ` −P
(j) P → (M & N ), M → R, N → S, −(R ∨ S) ` −P ∨ M
(a) Show that if P and Q are contrary then −P and −Q are subcontrary.
(b) Show that P and Q are equivalent if and only if P and −Q are contradictory.
(c) Show that −P and −Q are equivalent if and only if P and Q are equivalent.
(d) Show that two wffs are inconsistent iff they are both contraries and subcontraries.
(e) If P, Q ` R is a valid sequent, is it possible for R to be false? Explain your
answer.
(f) If P, Q ` R is invalid, is it impossible for P , Q, and R to all be true? Explain
your answer.
C
1. (a) Show how to write −P , P ∨ Q, P & Q, and P → Q in terms of the Sheffer stroke.
(b) Do the same for the Nicod stroke.
(c) Check the following sequent for validity by constructing its full truth table:
(P |P )|(Q|Q), P |(R|R) ` R.
(d) Translate the sequent in part (c) into ordinary propositional notation.
Chapter 6
Propositional Semantics: Truth Trees
Truth trees are one of the most efficient ways of checking the semantic properties of propo-
sitional formulas. In particular, they give a very easy way of checking the validity of sequents.
The basic idea of truth trees is that they give a graphic way of displaying whether or
not a set of formulas is inconsistent.
P & Q X          P ∨ Q X
P                 /    \
Q                P      Q
Basic Stack      Basic Branch
Similarly, because any number of propositions can be disjoined, we can have branches
with any number of stems.
P ∨ Q ∨ R
  /   |   \
 P    Q    R
However, we will rarely need branches with more than two stems in this book.
Stacks and branches can be linked together into trees of arbitrary size. For instance,
the wff (P & Q) ∨ R can be decomposed as follows:
(P & Q) ∨ R X
     /      \
 P & Q X     R
 P
 Q
Decomposition has gone as far as it can when all molecular formulas are broken down
into atomic formulas or their negations (which are collectively called literals).
6.2 Paths
A route going from the initial set of formulas at the top down to one of the atomic formulas
at the bottom, while choosing only one side of each branch as one goes, will be called here
a path. (In some logic books these are called branches, but this invites confusion with the
sense of branch as a forked structure splitting off from a disjunction, as noted above.) As
each formula is broken down into its corresponding branch or stack, it is checked off as
shown so that we don’t forget that we have broken it down.
To arrive at this we first broke down the conjunction (P & Q) into a stack, and then
broke down the disjunction (P ∨ −Q) into a branch. It would have been perfectly valid
to do the branch first, and then the stack. (Try it.) As a rule, though, it is usually more
efficient to do as many stacks as one can first, and then do the branches, since this will
minimize the number of branches to be drawn.
Truth trees are a kind of partially ordered set or poset. These terms will be explained
later in this book.
My use of the term path here is analogous to the way it is used in DOS and UNIX,
which possess hierarchical file structures. In the last tree above, the two paths could be
notated in a form similar to the way we note paths in UNIX. That is, the two paths in
the above tree could be written more abstractly as ((P & Q) & (P ∨ −Q)) \ (P & Q) \ P
and ((P & Q) & (P ∨ −Q)) \ (P & Q) \ Q. However, it will not be especially useful to do
it this way, since the main purpose of truth trees is to make the implications of a stack of
propositions graphically explicit.
Notice that the path ending in −Q contains contradictory formulas, namely −Q itself
and Q. Any path that contains a contradiction is marked off with an × at the end, and
declared to be closed . Any path that is not closed is open.
If all paths on a tree are closed, then the tree itself is said to be closed. If just one
path remains open after decomposition has gone as far as it can go, the tree itself is open.
Although we will usually have to break formulas all the way down to the literal level,
we can stop decomposing formulas along a path as soon as a contradiction is found. For
example, in the above tree, replace Q with (R → S). (Try it.) You will obviously get a
closed path without having to break down (R → S) into its literal components, and you
can stop working on that path at that point. In general, once a path is closed, we do not
need to go any farther along it.
What we are doing, in effect, is treating all the formulas along a given path as if they
were true, and seeing whether or not this will lead to a contradiction. Testing the branches
that sprout from a disjunction is similar to vE in that we go exploring down each branch
of the disjunction, one at a time, to see where it will lead. We can start from any number
of initial formulas and break them down as far as we can. If all paths of the resulting
tree are closed, the set of formulas is inconsistent.
These rules follow from equivalence formulas (such as ` (P → Q) ↔ (−P ∨ Q)) which
can be easily established by natural deduction, and checked by truth tables.
6.4 Checking Semantic Properties of Formulas with Trees
To check whether a wff is tautologous, run a tree on its negation: the wff is a tautology iff
that tree closes. For example, here is the tree for the negation of the theorem (P → (Q → P )):
−(P → (Q → P )) X
P
−(Q → P ) X
Q
−P
×
There is only one path, and since it is closed we see that the negation of the formula is
inconsistent, meaning that the formula itself is tautologous.
If we ran a tree on the formula (P → (Q → P )) itself, we would get this:
(P → (Q → P )) X
     /       \
   −P      (Q → P ) X
             /    \
           −Q      P
All paths on this tree are open. However, this does not show that the formula is a
tautology, only that it is consistent; and we already know that if all we know about a
formula is that it is consistent then it could be either a tautology or a contingency.
A set of formulas {P1 , . . . , Pn } is inconsistent if and only if (P1 & . . . & Pn ) is always
false, which is the same thing as saying that −(P1 & . . . & Pn ) is a tautology. To check
whether a set of formulas is inconsistent, stack them up and run a tree. The set is
inconsistent iff the tree closes.
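The same test can be phrased semantically, and is easy to automate: a set is inconsistent iff no row of the truth table makes every member true, which is exactly the situation in which every path of its tree closes. A sketch, with names of my own choosing:

```python
from itertools import product

def inconsistent(formulas, n):
    # Inconsistent iff no assignment of the n sentence letters
    # makes every formula in the set true.
    return not any(all(f(*row) for f in formulas)
                   for row in product([True, False], repeat=n))

implies = lambda p, q: (not p) or q

# The stack {P -> Q, P, -Q}: its tree closes on every path.
closed = inconsistent([lambda p, q: implies(p, q),
                       lambda p, q: p,
                       lambda p, q: not q], 2)
print(closed)  # -> True
```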
To check whether a formula P is contingent, we have to run trees for both P and −P .
The formula is contingent if and only if neither tree closes. For instance, the following trees
show that (P → Q) is contingent:
−(P → Q) X
P
−Q

P → Q X
   /   \
 −P     Q
In summary: a set of formulas is inconsistent iff its stack has a closed tree, and is
consistent iff its stack has an open tree. A formula is contingent iff the trees for both
the formula and its negation are open.
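The contingency test is equally easy to state semantically: a formula is contingent iff it is true on some row and false on some other. A Python sketch (my own helper names again):

```python
from itertools import product

def contingent(f, n):
    # Contingent iff some row makes f true (the tree for f stays open)
    # and some row makes f false (the tree for -f stays open).
    rows = list(product([True, False], repeat=n))
    return any(f(*r) for r in rows) and any(not f(*r) for r in rows)

implies = lambda p, q: (not p) or q
c1 = contingent(lambda p, q: implies(p, q), 2)              # P -> Q
c2 = contingent(lambda p, q: implies(p, implies(q, p)), 2)  # P -> (Q -> P)
print(c1, c2)  # -> True False
```

As expected, (P → Q) comes out contingent while the tautology (P → (Q → P)) does not.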
6.4.2 Equivalence
To check whether or not two formulas P and Q are equivalent, we use the fact that P and
Q are equivalent iff (P ↔ Q) is a tautology, and this is the same thing as saying that the
tree for −(P ↔ Q) must close. For instance, to demonstrate that −(P & Q) ↔ (−P ∨ −Q)
(one of de Morgan’s Laws), we run the following tree:
Since all paths close, the formula is a tautology. This tree looks relatively complex,
but it follows straightforwardly by the application of the branching and stacking rules given
above.
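The de Morgan equivalence can also be confirmed row by row, which is a useful cross-check on the tree. A one-screen sketch:

```python
from itertools import product

# Row-by-row check of -(P & Q) <-> (-P v -Q), one of de Morgan's Laws.
de_morgan = all((not (p and q)) == ((not p) or (not q))
                for p, q in product([True, False], repeat=2))
print(de_morgan)  # -> True
```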
Of course, we could also have demonstrated the same result by doing two trees, one for
−(P & Q) ⊢ (−P ∨ −Q) and one for (−P ∨ −Q) ⊢ −(P & Q). (Try it.)
Checking a set of more than two formulas for equivalence can be a little more complicated,
and is in fact not something that one often has to do. But suppose we have to check
whether all formulas in the set {P1, P2, P3} are equivalent. We could check each possible
pair for equivalence. However, we should keep in mind that equivalence is transitive; that
is, P1 ↔ P2, P2 ↔ P3 ⊢ P1 ↔ P3. (This sequent can be proven by an easy application of
HS; make sure you try it!) Therefore, to check three formulas for equivalence, we only need
to check two formulas for inconsistency: −(P1 ↔ P2) and −(P2 ↔ P3).
In many cases the formulas will be simple enough in structure that it will be obvious
whether some pair of them is not equivalent. If that is the case, you are done, since to show
that the formulas in a set are not all equivalent it is sufficient to show that some two
members of the set are inequivalent.
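The transitivity shortcut can be sketched in code: to check a list of formulas for equivalence, compare adjacent pairs only. (The helper names are mine.)

```python
from itertools import product

def equivalent(f, g, n):
    # Two formulas are equivalent iff they agree on every row.
    return all(f(*r) == g(*r) for r in product([True, False], repeat=n))

def all_equivalent(formulas, n):
    # By the transitivity of <->, checking adjacent pairs suffices.
    return all(equivalent(f, g, n) for f, g in zip(formulas, formulas[1:]))

f1 = lambda p, q: (not p) or q        # P -> Q
f2 = lambda p, q: not (p and not q)   # -(P & -Q)
f3 = lambda p, q: q or (not p)        # -Q -> -P
chain_ok = all_equivalent([f1, f2, f3], 2)
print(chain_ok)  # -> True
```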
MP: P → Q, P ⊢ Q

P → Q X
P
−Q
   /   \
 −P     Q
  ×     ×

HS: P → Q, Q → R ⊢ P → R

P → Q X
Q → R X
−(P → R) X
P
−R
      /      \
    −Q        R
   /    \     ×
 −P      Q
  ×      ×
These sequents are shown to be valid by the fact that all paths close.
Don’t forget that the order in which we choose to break down the formulas in our initial
stack does not make any difference to the validity of the result. However, one often arrives
at a simpler tree structure by doing as much stacking as possible first. Also, don’t forget
that as shown in the tree for HS, once a path is closed, no further breakdown is needed.
Here, for comparison, are the trees for some invalid sequents:
P → Q X
Q
−P
   /   \
 −P     Q

P ∨ Q X
Q ∨ R X
−R
      /       \
     P         Q
   /   \     /   \
  Q     R   Q     R
        ×         ×
In the first example (which is just Affirming the Consequent), neither path closes. In
the second example, two paths close, but two remain open, showing that the sequent is
invalid. Also, in the second example, you will see that if you decompose Q ∨ R first you
will get a slightly simpler tree structure.
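An open path is, in effect, a counterexample: a row of the truth table on which every premiss is true and the conclusion false. A small Python sketch (my own function names) that hunts for such rows directly:

```python
from itertools import product

def counterexamples(premisses, conclusion, n):
    # Rows on which every premiss is true but the conclusion is false --
    # these correspond to the open paths of the finished tree.
    return [row for row in product([True, False], repeat=n)
            if all(p(*row) for p in premisses) and not conclusion(*row)]

implies = lambda p, q: (not p) or q

# Affirming the Consequent: P -> Q, Q |- P is invalid.
bad_rows = counterexamples([lambda p, q: implies(p, q), lambda p, q: q],
                           lambda p, q: p, 2)
print(bad_rows)  # -> [(False, True)]: P false, Q true refutes the sequent
```

A valid sequent such as MP returns the empty list: no row refutes it.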
To check whether or not a formula is a theorem is (by the soundness and completeness
of propositional logic) the same thing as checking whether it is a tautology. Thus, for
instance, the tree above that checks whether (P → (Q → P )) is a tautology shows that
⊢ (P → (Q → P)).
P → Q X
Q → R X
−(R → P) X
R
−P
      /        \
    −Q          R
   /    \      /   \
 −P      Q   −P     Q
         ×
Since the tree is open, the sequent is invalid. Strictly speaking, one could stop drawing
the tree as soon as even one open path was revealed, since that would be sufficient to
demonstrate invalidity.
P → Q     Q → R     R → P
F  T  –   –  T  T   T  F  F
While the full table for this sequent has eight lines, only one partial line is needed to
demonstrate invalidity: assume the conclusion is false, and work backwards. We can see
that it is possible to make both the premisses T when the conclusion is F, regardless of
the value of Q. This is clearly less work than the tree. The lesson is, therefore, that both
trees and tables should be in the logician’s toolkit, and with experience the logician will get
to know which is likely to be easier to try. If there are no more than (say) three or four
premisses, and if the tree will involve a lot of branching, it may be easier to use a table.
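The work-backwards trick for this sequent can be checked in a couple of lines of Python: fix P = F and R = T, leave Q open, and confirm that both premisses come out true and the conclusion false either way.

```python
# The single partial row of the table: fix P = F and R = T, leave Q open.
implies = lambda p, q: (not p) or q
P, R = False, True
rows = [(implies(P, Q), implies(Q, R), implies(R, P)) for Q in (True, False)]
print(rows)  # both premisses true, conclusion false, whatever Q is
```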
6.7 Exercises
B
1. Use truth trees to check the following sequents for validity:
Intuitively, this argument seems valid. (Whether or not it is sound is another question.
I have, in my roughly twenty-year career, known two or three philosophers who were not
absent-minded, so I think that in fairness to the profession I would have to say that the first
premiss is false.) However, we have no way of representing the validity of this argument
using the tools of propositional logic we have learned so far, although it does seem as if it
might have something to do with MP.
Now consider a slight generalization of this argument:
This again seems obviously valid, but, again, we have no tools with which to express its
validity in any precise way. Arguments like these seem to have a structure suggesting some
sort of generalization of propositional sequents with which we are already familiar. (For
instance, the above argument seems to have something in common with MP.) However,
all that propositional logic as such could do for us would be to translate the sentences in
these arguments into logically independent symbols, and so both arguments would have the
generally invalid form P, Q ⊢ R.
To represent the logical structure of these arguments we therefore need the following
tools:
1. A way to symbolize properties of individual persons or things (so that we can say
something like “Kent is a philosopher” or “Mars is a planet”);
from the end of the alphabet, such as x, y, z, t, . . . . As with constants and names, they can
be subscripted if it is convenient to do so. For instance, we could have a set of variables of
the form x1, x2, . . . . A variable is not a term, but rather a place-holder where a term can
be inserted.
P x := “x is a planet”.
The terminology for properties that apply to single terms is diverse: they can be called
predicates, monadic predicates, qualities, properties, attributes, adjectives, unary relations,
or 1-place relations.
7.2.4 Relations
Predicates that apply to two or more terms, also called relations, polyadic relations, n-ary
relations, or n-place relations, can easily be accommodated in the notation of predicate
calculus.
Here are some examples:
T xy := “x is taller than y”
Bxyz := “y is between x and z”
Alternate Notation
Some books write two-place relations Rxy as xRy, which conforms more closely to standard
usage in English. (Unless we are Yoda of Star Wars, we would be more likely to say “x is
taller than y” than “Taller is x than y.”) However, we will stick with the notation in which
the variables are to the right of the relation letter (call this Yoda-notation, if you like), since
it is more easily generalized.
7.2.6 Quantifiers
Quantifiers are powerful symbolic tools that allow us to refer to collections where we do
not or cannot know all the members in advance. These could be large finite collections
where it would be impractical to name all members even if we knew them, or (especially
in mathematics) infinite collections where it would be impossible to name all the members
even if we knew what they were.
The introduction of quantifiers (principally by Gottlob Frege) toward the end of the
19th century was one of the major innovations in the history of logic.
for all of the approximately 55 students in the class. A universal statement, which expresses
the concept that everything in the domain of discourse has a certain property, is therefore
equivalent to a conjunction over all members of the domain of discourse.
For a large class it is quite inconvenient to write a universal claim this way, and impossible
if we do not know everyone's name, if the membership of the class is unknown or
undetermined, or if it is infinite. We can get around these difficulties by introducing the
universal quantifier ∀x . . ., and write
∀xSx. (7.2)
This can be read in several ways, the most common of which are the following:
where Lx := “x is in Logic 2003A”. The latter would be read “Everyone in Logic 2003A
studies logic,” or “If anyone is in Logic 2003A, then that person studies logic.”
Suppose now we wanted to express the claim that someone in Logic 2003A is wearing blue
jeans. Let Bx := “x wears blue jeans”. If we know in advance that U is the membership of
that particular class, then we could write
As before, if the class is large it would be quite inconvenient to write this all out, and
impossible if we do not know everyone's name, or if the membership of the class is
unknown, undetermined, or infinite. We get around these difficulties by introducing the
existential quantifier ∃x . . ., and write simply
∃xBx. (7.5)
Note carefully: In logic, the word “some” means nothing more than “at least one” or
(in the case of continuous quantities such as liquids) “an amount greater than zero.” The
statement “Some mammals are warm-blooded” is therefore consistent with the possibilities
either that there is only one warm-blooded mammal or that all mammals are warm-blooded
(and in fact the latter statement happens to be true, although as logicians we do not concern
ourselves with the mere facts of biology).
If U were a larger set than the membership of Logic 2003A, then we would have to
write
∃x(Lx & Bx), (7.6)
which can be read as “Someone in Logic 2003A wears blue jeans.”
Alternate Notation
Quantifiers are often written in enclosing brackets, such as (∀x)F x. Many older logic books
write the universal quantifier without the ∀ symbol, so that (x)F x would be read as “for
all x, F x”.
A few books (such as the widely-read Metalogic by G. Hunter [11]) use so-called Stanford
notation, wherein universals are represented by ⋀ and existentials by ⋁. This is inspired
by the fact that universals and existentials are generalized “ands” and “ors” respectively.
Scope of Quantifiers
It is possible to give a formal recursive definition of a well-formed formula of predicate logic
(see Lemmon [17], for instance), but we will not bother to do so here since it is generally
obvious which formulas are properly formed and which are not.
Here are the main things to keep in mind: quantifiers, like negation signs, apply to the
shortest possible string that follows them. Thus, for instance, in −(P a & Qb), the whole
expression (P a & Qb) is negated, while in −P a & Qb, only the sentence P a is negated.
Similarly, in ∃xF x & Gx, the quantifier binds only the x in F x; this expression, therefore,
is an open sentence. However, in ∃x(F x & Gx), the quantifier applies to all of (F x & Gx).
Also, a quantifier only binds variables within its scope, and variables can be used over
again if there is no ambiguity about which quantifier they link with. Thus, ∃xF x & ∃xGx
means exactly the same thing as ∃xF x & ∃yGy.
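Over a finite domain, Python's `any` and `all` mirror the existential and universal quantifiers, which makes these scope distinctions easy to see. The predicates below are hypothetical, chosen purely for illustration:

```python
# Over a finite domain, any/all mirror the existential and universal quantifiers.
domain = [1, 2, 3, 4]
F = lambda x: x % 2 == 0   # "x is even" (hypothetical predicate)
G = lambda x: x == 3       # "x is three" (hypothetical predicate)

# ExFx & ExGx: two independent quantifiers, so reusing the variable is harmless.
both_exist = any(F(x) for x in domain) and any(G(x) for x in domain)
# Ex(Fx & Gx): here a single x must satisfy both conjuncts.
joint = any(F(x) and G(x) for x in domain)
print(both_exist, joint)  # -> True False
```

The two expressions come apart precisely because in the second the quantifier's scope covers the whole conjunction.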
7.2.7 Order
The predicate logic we study in this book is said to be first-order. This means that the
variables in the quantifiers range only over elements of the universe of discourse U. In
second-order logic we also quantify over predicates and relations. This introduces further
complications that are well beyond the scope of this course.
Note carefully that a universal statement does not involve an existence claim. That
is, ∀x(Ax → Bx) does not claim that there are any A which are B. It says merely that
if there are any As, then they are B. (Thus, we can truly say “All orcs are nefarious”
without, fortunately, there being any orcs in the real world.) By the truth table for the
material conditional →, the conditional in the quantifier is true even if there are no As.
Such universals are said to be vacuously true.
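Vacuous truth has a neat computational echo: Python's `all` over an empty collection returns `True`, matching the truth table for the material conditional. A sketch over a hypothetical orc-free domain:

```python
# "All orcs are nefarious" over an orc-free domain: all() of an empty
# generator is True, matching the truth table for the material conditional.
domain = ["Frodo", "Sam", "Gandalf"]     # hypothetical universe
is_orc = lambda x: False                 # there are no orcs here
is_nefarious = lambda x: False
vacuous = all(is_nefarious(x) for x in domain if is_orc(x))
print(vacuous)  # -> True, vacuously
```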
Existential claims such as “Some animals are mammals” do assert existence. The latter
statement can also be read “There are (or exist) some animals which are mammals.”
If we specifically wanted to express the true statement that “Some but not all animals
are mammals,” then we would have to write
or equivalently
2. Some philosophers at the University of Lethbridge are brilliant but not witty.
∃x(P x & Lx & Bx & − W x).
Literally, this says that if anything is a dog but not a chihuahua, then it likes the cold.
Unfortunately, this does not tell us what is the case if a dog is a chihuahua. Lemmon must
have been aware of this, of course, and so I assume that his translation reflected a preference
for logical minimalism — that is, translating into symbols in a way that requires as few
commitments as possible. This may not be a bad practice in general. However, most logic
texts now translate exceptives in such a way as to make the intention as explicit as possible.
For instance, Lemmon’s sentence above would now be translated as
∀x((Dx & − Cx) → Lx) & ∀x((Dx & Cx) → −Lx). (7.18)
In words, to say that all dogs except chihuahuas like the cold is the same thing as saying
that if anything is a dog, then it is a chihuahua if and only if it does not like the cold.
Negative exceptives can be translated in a similar way. “No animal except the mongoose
likes to eat cobras” could be translated as
Examples such as exceptives illustrate the fact that the aim of symbolization is not
to translate the natural language expression literally into symbols, but to analyze and
express the implicit logical intention of the natural language expression — if one can be
found (since in natural language we are often not very logically precise at all). In order to
translate a sentence correctly, we have to figure out its logic, not merely transliterate words
into symbols without thinking about their meanings.
Note that order of quantifiers is crucial to the meaning. For instance, let U be the set of
all people, and P xy := “x is the parent of y”. Then ∀x∃yP xy says that everyone is the
parent of someone, while ∃x∀yP xy says that at least one person is the parent of everyone
— obviously a very different claim.
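The difference in quantifier order shows up clearly in a small finite model. The individuals and the parent relation below are hypothetical, invented so that the two readings come apart:

```python
# A small model in which AxEyPxy and ExAyPxy come apart.
people = ["alice", "bob", "carol"]                       # hypothetical individuals
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "alice")}
P = lambda x, y: (x, y) in parent                        # "x is the parent of y"

everyone_parents_someone = all(any(P(x, y) for y in people) for x in people)
someone_parents_everyone = any(all(P(x, y) for y in people) for x in people)
print(everyone_parents_someone, someone_parents_everyone)  # -> True False
```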
• To symbolize the phrase “an F ,” just remove the ∃ from the proposition ∃xF x (“There
is an F ”), to get xF x.
• To define a term letter that applies to any object in U that is an F , but only the
objects in U that are F s, pick a term letter that has not been used in any undischarged
assumption, say a, and write a := xF x. This can be read as “Let a be an F ”, or “Let
a stand for any item in U that is F .”
– To make this point clear, note the difference between f := xHx and Hf . The
first could read “let ’Frodo’ be a hobbit,” and the second could read “Frodo is a
hobbit.” The first formula is merely a stipulation that a letter will be used in a
certain way, while the second is a substantive (false) claim.
This notational device will prove to be very useful in certain parts of predicate natural
deduction, to be introduced in the next chapter. It is also possible to symbolize the definite
article, but in order to do that we need the concept of identity, which will appear in Chapter
9.
7.3 Exercises
In these exercises, use the following symbolizations:
Ax := “x is an aardvark”
V x := “x is from the Transvaal”
F x := “x is an ant”
Sx := “x likes to sleep in the sun”
Cx := “x is a cobra”
Dx := “x has a strange diet”
T x := “x is a termite”
Lx := “x is an animal”
M x := “x is a mongoose”
Exy := “x likes to eat y”
T xy := “x wins the Lou Marsh trophy in y”
m := “Mike Weir”
n := “2003”
Natural deduction for first-order predicate logic consists of ordinary propositional natural
deduction plus four special rules that allow us to take off quantifiers and put them back on
again.
∀xF x ↔ −∃x − F x
∃xF x ↔ −∀x − F x
∀x − F x ↔ −∃xF x
∃x − F x ↔ −∀xF x
When used to justify a move in a deduction, an instance of any one of these rules
can be referred to simply as Duality. Examples will be given below, once we review the
introduction and elimination rules for quantifiers.
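Although the Duality rules are proved by natural deduction, they can also be spot-checked semantically. The sketch below verifies the first rule, ∀xF x ↔ −∃x − F x, in every model over a three-element domain (that is, for all eight possible extensions of F):

```python
from itertools import product

# Check AxFx <-> -Ex-Fx for all eight extensions of F over a 3-element domain.
domain = range(3)
ok = all(
    all(ext[x] for x in domain) == (not any(not ext[x] for x in domain))
    for ext in product([True, False], repeat=len(domain))
)
print(ok)  # -> True
```

A finite check like this is not a proof for arbitrary domains, of course, but it is a quick sanity test, and the same pattern works for the other three Duality rules.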
1 (1) ∀x(P x → W x) A
2 (2) Pp A
1 (3) Pp → Wp 1 UE
1,2 (4) Wp 2,3 MP
Example: All horses are mammals, all mammals are warm-blooded; therefore,
all horses are warm-blooded.
Using obvious definitions of the predicate symbols, we can symbolize this as follows:
1 (1) ∀x(Hx → M x) A
2 (2) ∀x(M x → W x) A
1 (3) Ha → M a 1 UE
2 (4) Ma → Wa 2 UE
1,2 (5) Ha → W a 3,4 HS
1,2 (6) ∀x(Hx → W x) 5 UI
The arbitrary constant a introduced in line 3 does not appear in the premisses. The
use of UI in moving from line 5 to 6 is therefore valid.
Those who have studied categorical logic will recognize this sequent as the familiar
syllogism in Barbara: all A’s are B’s, all B’s are C’s; therefore, all A’s are C’s. (See Copi
[6].)
Restriction on UI
We cannot use UI on a term that was introduced in an undischarged assumption or a
definition.
1 (1) Bp A
1 (2) ∀yBy 1 UI (?!)
1 (1) Pp&Wp A
1 (2) ∃x(P x & W x) 1 EI
Restriction on EI
The only restriction on EI is that we can only introduce an existential quantifier on one
distinct term letter at a time. Thus for instance, it is valid to go from F a & Ga to ∃x(F x & Gx),
but invalid to try to go from F a & Gb to ∃x(F x & Gx).
1 (1) ∀x(P x → W x) A
2 (2) ∃xP x A
(3) a := xP x Df.
2 (4) Pa 2,3 EE
1 (5) Pa → Wa 1 UE
1,2 (6) Wa 4,5 MP
1,2 (7) ∃xW x 6 EI
The object is to prove the desired conclusion from the existential in line 2, in combination
with the assumption at line 1. In 3 we introduce an arbitrary constant a. It is crucial
(as we shall see) that the arbitrary constant introduced here be a new constant that has
not appeared in any earlier lines (because that would, in effect, remove its arbitrariness).
We are saying that we will suppose that a is the name of the entity referred to in line 2.
Note very carefully that line 3 is not an assumption; in fact, it is not a proposition at
all. Rather, it is an instruction to use a certain symbol in a certain way. Hence it does
not have to be discharged.1 However, we do cite the definition on line 3 in the justification
column for line 4, because we use it to derive line 4.
In line 5 we can instantiate on line 1 with UE in terms of a because UE can be applied
to any term whatsoever. We then do a little elementary propositional logic, and then at
line 7 we get the desired conclusion by using EI to go from line 6.
What we are doing in EE is actually a very simple pattern of reasoning that we often
use in day-to-day discussions. Here is the same argument cast into informal language:
Suppose we accept that all philosophers are wise and that there are some philoso-
phers. Suppose also that a is a typical philosopher. Then a must be wise;
therefore, someone is wise.
Restrictions on EE
There are three restrictions on the use of EE.
2. The arbitrary constant cannot be one that appears in any previous assumptions unless
they were discharged.
These rules can be summarized by saying that when introducing a term by means of a
definition, always use a new letter. Also, don’t forget that we are not allowed to do Universal
Introduction on any term introduced by a definition.
1
Many logic texts (such as [17]) use a much more complicated version of EE in which the proposition
P a is introduced as the leading assumption of a subproof. Once the desired conclusion ∃xW x is reached, it
is necessary to back out of the subproof, discharging the initial assumption along the way. The use of the
article operator, as in line 3 above, eliminates the need for all of this complexity. Some elementary logic
texts, on the other hand, would move directly from lines 1 and 2 above to line 4, simply calling the move
‘Existential Elimination’ or ‘Existential Instantiation.’ This move is of questionable validity, since one does
not really know what a is. If a term is selected at random from U, that does not guarantee that it will
belong to the subset of U containing all but only those items whose existence is claimed by the existential
premiss we are working from.
1 (1) ∃xF x A
(2) a := xF x Df.
1 (3) Fa 1,2 EE
1 (4) ∀xF x 2 UI (??)
From the assumption that something is F we seem again to have arrived at the conclusion
that everything is F ! In fact, the use of EE at line 3 is correct. The fallacy here is in line 4,
where we universalized on the name introduced in line 2. Never universalize on a constant
defined in terms of an article phrase.
8.4 Examples
In this section I develop a number of typical quantificational arguments to show the
application of the Introduction and Elimination Rules.
1 (1) ∀x(Hx → M x) A
2 (2) ∀x(M x → −Cx) A
1 (3) Ha → M a 1 UE
2 (4) M a → −Ca 2 UE
1,2 (5) Ha → −Ca 3,4 HS
1,2 (6) ∀x(Hx → −Cx) 5 UI
An instance in words:
1 (1) ∀x(Rx → V x) A
2 (2) ∃xRx A
(3) a := xRx Df.
2 (4) Ra 2,3 EE
1 (5) Ra → V a 1 UE
1,2 (6) Va 4,5 MP
1,2 (7) ∃xV x 6 EI
Instance in words:
A simple example shows that this is invalid: it is true that all natural numbers are either
even or odd, but it is false that either all natural numbers are even or all natural numbers
are odd.
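The even/odd counterexample can be seen concretely on a small initial segment of the natural numbers:

```python
# All natural numbers (here 0..9) are even or odd, yet it is neither the case
# that all are even nor that all are odd.
domain = range(10)
even = lambda n: n % 2 == 0
odd = lambda n: n % 2 == 1

premiss = all(even(n) or odd(n) for n in domain)                  # Ax(Ex v Ox)
conclusion = all(even(n) for n in domain) or all(odd(n) for n in domain)
print(premiss, conclusion)  # -> True False
```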
The converse, however, is valid:
Note carefully that some of these results are interderivabilities, while some go only one way.
Quantifiers can in some cases be “distributed” over → but the results are more complicated
and are best dealt with on an individual basis. (See the Review Exercises.)
58 ∀xF x ⊣⊢ −∃x − F x
It is pretty clear what is going on in this proof: Assuming the opposite of the desired
conclusion leads to a contradiction with the starting assumption.
In line 2, a is an arbitrary name which could be applied to any member of the domain
of discourse. What this proof says is that given the starting premiss −∃x − F x, we get a
contradiction if we assume −F a for any a in the domain.
59 ∃xF x ⊣⊢ −∀x − F x
60 ∀x − F x ⊣⊢ −∃xF x
61 ∃x − F x ⊣⊢ −∀xF x
64 ∃x(P ∨ F x) ⊣⊢ P ∨ ∃xF x
65 ∀x(P ∨ F x) ⊣⊢ P ∨ ∀xF x
66 ∃x(P → F x) ⊣⊢ P → ∃xF x
67 ∀x(P → F x) ⊣⊢ P → ∀xF x
68 ∃x(F x → P ) ⊣⊢ ∀xF x → P
69 ∀x(F x → P ) ⊣⊢ ∃xF x → P
Note the switch between universals and existentials in the last two formulas. I will give
a proof of the last formula here; the rest of the Rules of Passage are left as Review Exercises.
Any of the sequents 62–69 can be cited simply as “Passage” in the justification column
of a proof.
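Like the Duality rules, the Rules of Passage can be spot-checked in finite models. The sketch below verifies sequent 69, ∀x(F x → P ) ⊣⊢ ∃xF x → P , in every model over a three-element domain: all extensions of F crossed with both values of the quantifier-free sentence P.

```python
from itertools import product

# Check sequent 69, Ax(Fx -> P) -||- ExFx -> P, in all 16 models
# over a three-element domain.
domain = range(3)
implies = lambda a, b: (not a) or b
ok = all(
    all(implies(ext[x], P) for x in domain)
    == implies(any(ext[x] for x in domain), P)
    for ext in product([True, False], repeat=len(domain))
    for P in (True, False)
)
print(ok)  # -> True
```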
8.10 A Shortcut
It is essential to master the use of the four introduction and elimination rules, because
they allow for general procedures that can be applied to any first-order quantificational
argument. Under some circumstances to be specified below, however, we can skip the use of
these rules and do Boolean operations directly on propositional functions within the scope
of quantifiers. This can greatly simplify many derivations.
Here are a few simple examples:
This is essentially a consequence of the fact that a universal claim is hypothetical and does
not assert existence, while an existential claim does.
By comparison, the following pattern of inference is valid:
However, line (3) does not assert the existence of hobbits, whether living in the Shire or
not. It merely says that there is something which if it is a hobbit, lives in the Shire. This
is like saying that if I were a hobbit, I would live in the Shire; and this in no way asserts
the existence of hobbits!
p := Plato U x := x is a bear
s := Socrates Hx := x is a person
h := Heidegger N x := x loves honey
r := Rufus Ax := x loves asparagus
k := The Yukon Rx := x loves broccoli
c := the Republic M x := x is a mammal
t := Thumper W x := x is warm-blooded
b := Being and Time Lxy := x loves y
Lx := x is a rabbit Cxy := x came from y
Bx := x is brown T xy := x takes y seriously
Dx := x is a dialogue Axy := x is the author of y
A
1. Translate the following arguments into symbols, and prove them valid:
B
1. Translate the following arguments into symbols and prove them valid:
C
1. Prove the Duality relations that were not proven in the Chapter; that is, Sequents
59, 60, and 61.
Chapter 9

Predicate Logic with Identity
In this chapter we introduce the use of two simple rules concerning the relation of identity.
These rules permit a powerful extension of predicate logic which enables us to express the
logical structure of a much larger class of quantificational propositions with a very minimal
increase in formal machinery.
By “identity” we simply mean a two-term relation which says that two proper names
or constants point to the same object in the universe of discourse. We could write this as
Ixy := x is identical to y
but it is easier, and more in keeping with standard mathematical usage, to express identity
as follows:
(x = y) iff x and y point to the same member of U.
We will also allow the use of the following obvious notational convenience:
(a ≠ b) := −(a = b)
To analyze the meaning of identity is, in fact, a tricky philosophical problem. The
definition I have given here is perilously close to being circular, since we probably would
have to use the identity relation in order to express the fact that two symbols single out the
same individual. For our purposes, however, we will define identity simply to be a relation
that obeys the following introduction and elimination rules:
Identity Introduction (=I):
For any term a the formula a = a is a theorem, and may be introduced at any
point in a proof. In symbols:
⊢ a = a.
9.1 Examples
Here are a few elementary examples which illustrate the use of these rules, and demonstrate
some basic properties of the identity relation:
70 a = b ⊢ b = a
1 (1) a=b A
(2) a=a =I
1 (3) b=a 1,2 =E
In line (3) we cite two lines: one is the statement of the identity we are using, while the
other is the line into which we make a substitution. Here, using the identity in line (1), we
substitute b for the first occurrence of a in line (2).
This proof demonstrates that identity is a symmetric relation.
71 a = b & b = c ⊢ a = c
1 (1) a = b&b = c A
1 (2) a=b 1 &E
1 (3) b=c 1 &E
1 (4) a=c 2,3 =E
This proof demonstrates that identity is a transitive relation.
The next proof demonstrates a triviality: to say that something is Paul, and that thing
loves curry, is equivalent to saying that Paul loves curry. The main interest in this proof is
that it demonstrates the use of =E.
72 F p ⊣⊢ ∃x(x = p & F x)
I’ll prove one part of it; the converse is left as an exercise.
The next proof shows that the way we use identity here corresponds to our basic
notion of identity, which is that things that are presumed identical should have all the
same properties.
73 a = b ⊢ F a ↔ F b
1 (1) a=b A
2 (2) Fa A (CP)
1,2 (3) Fb 1,2 =E
1 (4) Fa → Fb 2–3 CP
5 (5) Fb A (CP)
1,5 (6) Fa 1,5 =E
1 (7) Fb → Fa 5–6 CP
1 (8) (F a → F b) & (F b → F a) 4,7 &I
1 (9) Fa ↔ Fb 8 Def↔
a = b ⊢ F a ↔ F b. (9.1)
This simply expresses what we mean by identity. But do not attempt the following move:
F a ↔ F b ⊢ a = b. (????) (9.2)
This is invalid because the two individuals a and b do not necessarily have to be the same
in order for F a to be true iff F b is true. Suppose F x is defined as “x is a philosopher.” The
statement “F (Socrates) ↔ F (Plato)” is true, and yet Socrates was a distinct individual
from Plato.
Remember, to say a = b is to say that the symbols a and b point to the same individual
in the universe of discourse. To say F a ↔ F b is to say that the propositions F a and F b
have the same truth value.
Here it is in words that are half-way between logic and ordinary English: For
all x and y, if x and y know the secret, then x and y are the same person.
In half-logic: Someone knows the secret, and anyone else who knows the secret
is that person.
In half-logic: there are two distinct people who know the secret, and anyone else
who knows the secret is identical to one or the other of those two people.
9.4 Superlatives
It is possible to express simple superlatives using identity. Here’s an example of such a
translation, using hopefully obvious notation:
Smith is the tallest person in Logic class.
Ls & ∀y((Ly & (y ≠ s)) → T sy)
In half-logic: Smith is in the Logic class and if there is anyone else in the Logic
class who is not Smith then Smith is taller than that person.
9.5 Uniqueness
The identity relation can be used to express the concept of uniqueness. Suppose we want
to say that there is a unique individual who is Prime Minister of Canada. Let P x := x is a
Prime Minister of Canada. To express the concept that there is only one person who is the
Prime Minister of Canada, we can write
∃x(P x & ∀y(P y → (x = y))).
In words: there exists at least one Prime Minister of Canada, and any such individuals are
identical. This could also be read as saying that there is one and only one Prime Minister
of Canada, or that there is a unique Prime Minister of Canada.
It is sometimes convenient to have a special symbol to express the concept of unique
existence, and this is commonly done as follows:
∃!xP x := ∃x(P x & ∀y(P y → (x = y)))
Bertrand Russell is said to have read this as “E-shriek! x. . . ”, and it can be read as “there
exists one and only one P ” or “there exists a unique P .”
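The Russellian analysis of unique existence can be checked against a simple count over a finite domain. The domain and the extension of P below are hypothetical, chosen only for illustration:

```python
# Russell-style unique existence over a hypothetical three-person domain.
domain = ["harper", "layton", "dion"]
pm = lambda x: x == "harper"       # Px: "x is Prime Minister of Canada"

# Ex(Px & Ay(Py -> x = y)), written out with any/all:
russellian = any(pm(x) and all((not pm(y)) or x == y for y in domain)
                 for x in domain)
# The same condition by direct counting:
by_count = sum(1 for x in domain if pm(x)) == 1
print(russellian, by_count)  # -> True True
```

The two tests agree: the Russellian formula holds exactly when the predicate is true of exactly one individual.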
The first sentence is false. The second sentence seems very suspicious, but it is not imme-
diately obvious how to assign it a truth value. Some philosophers have gone so far as to
declare sentences such as this, in which a definite description points to a nonexistent object,
to be meaningless. But that doesn’t seem quite right either, since we make deductions using
definite descriptions all the time, whether or not the objects mentioned are known to exist.
Consider, for instance, the following elementary argument:
Intuitively this seems to be valid, so we wouldn’t want to say that definite descriptions
have no logic or are even meaningless. And yet we still have to say how to represent its
validity formally. As before, we motivate the need for new types of symbols by showing
that there are arguments that are clearly valid, but whose validity we do not yet know how
to demonstrate.
What makes this argument work is clearly the fact that Rudolph is supposed to be the
lead reindeer, not just any reindeer. In 1905, Bertrand Russell suggested that we should
analyse propositions using definite descriptions as combinations of existence claims with
uniqueness claims. Thus, for instance, we could express “Harper is the Prime Minister” as
In words, “Harper is the Prime Minister and if anyone is the Prime Minister, that person is
Harper”. This statement needs to be distinguished carefully from the claim that “Harper is
a Prime Minister,” which would be symbolized simply as P h. The latter statement makes
sense, since there are, after all, several countries that have Prime Ministers, and several
former Prime Ministers of Canada.
To say “The Prime Minister is a former Finance Minister,” we can write
In other words, this simply says, “There exists a unique bald King of France,” and, at the
time of this writing, this statement is false.
We can also express the identity of two descriptions. “The author of Hamlet is the
author of Lear ” can be expressed as
As these examples suggest, a definite description such as “the Prime Minister of Canada”
functions grammatically very much like a proper name such as “Mr. Stephen Harper.”
What about negations of statements using definite descriptions? For instance, how
would we translate “Layton is not the Prime Minister?” Intuitively, it seems that to deny
this statement should amount to saying that either Layton is not a Prime Minister, or that
someone other than Layton is a Prime Minister; either or both of those possibilities would
be sufficient to negate the claim that Layton is the Prime Minister.
A quick bit of predicate natural deduction shows that this analysis is correct:
−(P l & ∀y(P y → (y = l)))
↔ −P l ∨ −∀y(P y → (y = l)) deM
↔ −P l ∨ ∃y − (P y → (y = l)) Duality
↔ −P l ∨ ∃y(P y & (y ≠ l)) Neg→
Russell’s method of analysing the logic of definite descriptions has generated a lot of
controversy in the literature on philosophy of language. However, it provides a workable
logical analysis of definite descriptive claims and, as we shall show next, allows us to demon-
strate the validity of arguments involving definite descriptions which should, indeed, turn
out to be valid. Russell’s method also illustrates a point that we noted earlier, which is that
the grammatical structure of a sentence in a particular natural language such as English is
not always a good indication of its underlying logical form.
In his later years Russell defended his approach against criticism by P. F. Strawson:
My theory of descriptions was never intended as an analysis of the state of mind
of those who utter sentences containing descriptions. Mr Strawson gives the
name ‘S’ to the sentence ‘The King of France is wise’, and he says of me ‘The
way in which he [Russell] arrived at the analysis was clearly by asking himself
what would be the circumstances in which we would say that anyone who uttered
the sentence S had made a true assertion’. This does not seem to me a correct
account of what I was doing. . . . I was concerned to find a more accurate and
analysed thought to replace the somewhat confused thoughts which most people
at most times have in their heads. [21, p. 243]
Russell, and indeed many of those who contributed to modern symbolic logic, believed that
the job of the logician is not only to describe the logic of ordinary reasoning in a precise
way, but to devise techniques that can help us to reason more effectively.
9.7.1 Example
Consider the following argument:
Harper is the Prime Minister.
Layton is not Harper.
∴ Layton is not the Prime Minister.
Using obvious definitions for the term and predicate symbols, we can prove this argu-
ment valid as follows:
1 (1) P h & ∀x(P x → (x = h)) A
2 (2) l ≠ h A
1 (3) Ph 1 &E
1 (4) ∀x(P x → (x = h)) 1 &E
5 (5) P l → (l = h) 4 UE
1,2 (6) −P l 2,5 MT
1,2 (7) −P l ∨ ∃y(P y & (y ≠ l)) 6 vI
Note, again, that line (6) only says that Layton is not a Prime Minister. We need to “or” on
the extra clause that states that someone else might be Prime Minister as well as Layton.
A Fine Point
Many logic texts write statements such as “Harper is the Prime Minister” in the following
more complicated form:
∃x(P x & ∀y(P y → (y = x)) & (x = h)) (9.3)
This is equivalent to the simpler form we use here, which can be used whenever we know
the proper name of “the such-and-such”. This can be proven from the fact that
∃x(P x & (x = h)) ⊣⊢ P h. (9.4)
This is proven in the Exercises, below.
Consider next a very simple piece of reasoning: “Shakespeare is the author of Hamlet;
Shakespeare is the author of Macbeth; therefore, the author of Hamlet is the author of
Macbeth.” Such an argument should have a very simple proof.
First, we need to define some symbols:
Axy := x is an author of y
h := Hamlet
m := Macbeth
s := Shakespeare.
We begin the proof by defining proper names for the author of Hamlet and the author
of Macbeth; we then write our two premisses in terms of those proper names, and make a
simple substitution:
This is an enormous simplification over the methods for proving such arguments that will be
found in most recent logic texts. What makes it work is the ability to define the “temporary”
proper names a and b; with these in hand, very simple facts about the nature of identity
can be applied in ways that are as obvious as the argument itself. Behind the notation
is the logical (or in part grammatical) fact that because definite descriptions such as “the
author of Hamlet” pick out a unique object, they function exactly the same way as do
proper names (which are labels for unique objects) ; we can therefore represent their logic
quite faithfully if we simply assign to the definite description a proper name (if we do not
already know a name of the object being described). The only restriction is, as usual, that
we have to use a term letter that has not been used before.
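As an illustration of the technique (the premisses and the identity-rule names below are our own, chosen to mirror the discussion, and are not taken from the text): suppose the premisses are “Shakespeare is the author of Hamlet” and “Shakespeare is the author of Macbeth”, and the conclusion is “The author of Hamlet is the author of Macbeth”. Defining the temporary names a and b, the proof is just a substitution on identities:

```text
      (1) a := the author of Hamlet     Df.
      (2) b := the author of Macbeth    Df.
3     (3) a = s                         A
4     (4) b = s                         A
4     (5) s = b                         4 Sym=
3,4   (6) a = b                         3,5 Trans=
```

Here “Sym=” and “Trans=” stand for whatever symmetry and transitivity (or substitution) rules for identity the system provides.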
9.9 Review Exercises
a := Aristotle
s := The Stagirite
p := Plato
e := The Ethics
r := The Republic
t := The Timaeus
Kx := x knew (knows) The Good
Axy := x was (is) an author of y
A
1. Translate into ordinary English:
(a) p ≠ s
(b) r ≠ t
(c) Apr ∨ −Apr
B
1. Translate into symbols:
4. Translate the following argument into symbols, and prove the resulting sequent:
C
1. Find a general recursive expression for the numerically definite quantifier ∃n xF x for
any n.
Chapter 10

Solutions to Selected Exercises
Chapter 2
Exercises 2.2.8
A
(c) T → −C
(e) −B → A
(g) B → C
(i) −C → −T
(k) A → −B
(m) T ∨ −C or −C ∨ T
(g) Either Alice does not go to the store or Tad does not stay home.
120 CHAPTER 10. SOLUTIONS TO SELECTED EXERCISES
Exercises 2.3.1
A
1. (a) F
(c) T
(e) T
2. (a) T
(c) F
(e) T
Exercises 2.6.1
A
1. (a) (B & T ) → (A → C)
(c) C ↔ T
(e) −T → (A ∨ B)
2. (a) If Alice and Bob go to the store and Cheryl goes to work, then Tad doesn’t stay home.
(c) If Alice goes to the store then if Bob goes to the store Cheryl goes to work.
or:
If Alice goes to the store then Bob goes to the store only if Cheryl goes to work.
(e) If Alice goes to the store only if Bob goes too, then Cheryl goes to work only if Tad
stays home.
Exercises 2.7.1
A
1. (b) −C ∨ −T
(d) −(C → T )
2. (b) Bob going to the store does not imply either Cheryl not going to work or Tad staying
home.
(d) Alice goes to the store if and only if Bob does not.
1. (a) P, P → −Q ⊢ −Q
1 (1) P A
2 (2) P → −Q A
1,2 (3) −Q 1,2 MP
(c) P & Q, P → R ⊢ R
1 (1) P &Q A
2 (2) P →R A
1 (3) P 1 &E
1,2 (4) R 2,3 MP
(e) − − P ⊢ P ∨ R
1 (1) −−P A
1 (2) P 1 DN
1 (3) P ∨R 2 vI
(c) P ∨ Q, Q → R ⊢ P ∨ R
1 (1) P ∨Q A
2 (2) Q→R A
3 (3) P A (vE)
3 (4) P ∨R 3 vI
5 (5) Q A (vE)
2,5 (6) R 2,5 MP
2,5 (7) P ∨R 6 vI
1,2 (8) P ∨R 1,3–4,5–7 vE
(e) P → Q, Q → R, −R ⊢ −P
1 (1) P →Q A
2 (2) Q→R A
3 (3) −R A
4 (4) P A (RAA)
1,4 (5) Q 1,4 MP
1,2,4 (6) R 2,5 MP
1,2,3,4 (7) R& −R 3,6 &I
1,2,3 (8) −P 4–7 RAA
Exercises 3.7
2. (a) P, Q ⊢ P → Q
1 (1) P A
2 (2) Q A
2 (3) P →Q 1–2 CP
(g) P → (Q → R), (Q → R) → T ⊢ P → T
1 (1) P → (Q → R) A
2 (2) (Q → R) → T A
1,2 (3) P →T 1,2 HS
1. (a) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)
1 (1) P →Q A
2 (2) R→S A
3 (3) −Q ∨ −S A
4 (4) −Q A (vE)
1,4 (5) −P 1,4 MT
1,4 (6) −P ∨ −R 5 vI
7 (7) −S A (vE)
2,7 (8) −R 2,7 MT
2,7 (9) −P ∨ −R 8 vI
1,2,3 (10) −P ∨ −R 3,4–6,7–9 vE
2. (a) P, Q ⊢ P ↔ Q
1 (1) P A
2 (2) Q A
2 (3) P →Q 1–2 CP
1 (4) Q→P 1–2 CP
1,2 (5) (P → Q) & (Q → P ) 3,4 &I
1,2 (6) P ↔Q 5 Def↔
(e) R, R ↔ P, −Q → −P ⊢ Q
1 (1) R A
2 (2) R↔P A
3 (3) −Q → −P A
3 (4) P →Q 3 Trans
2 (5) (R → P ) & (P → R) 2 Def↔
2 (6) R→P 5 &E
2,3 (7) R→Q 4,6 HS
1,2,3 (8) Q 1,7 MP
(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ⊢ S
1 (1) P ↔S A
2 (2) P ∨Q A
3 (3) P ∨R A
4 (4) −Q ∨ −R A
5 (5) −Q A (vE)
2,5 (6) P 2,5 DS
7 (7) −R A (vE)
3,7 (8) P 3,7 DS
2,3,4 (9) P 4,5–6,7–8 (vE)
1 (10) (P → S) & (S → P ) 1 Def↔
1 (11) P →S 10 &E
1,2,3,4 (12) S 9,11 MP
(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ⊢ X ∨ Y
1 (1) P → (Q ∨ R) A
2 (2) (P → Q) → X A
3 (3) (P → R) → Y A
1 (4) (P → Q) ∨ (P → R) 1 Dist↔
5 (5) P →Q A (vE)
2,5 (6) X 2,5 MP
2,5 (7) X ∨Y 6 (vI)
8 (8) P →R A (vE)
3,8 (9) Y 3,8 MP
3,8 (10) X ∨Y 9 vI
1,2,3 (11) X ∨Y 1,5–7,8–10 vE
(a) MP
(b) MT
(c) vI
(d) HS
(e) Imp
(a) n Imp(×2)
(b) n Imp, Dist
(c) n Imp, DN
(d) n Imp, Dist
(e) n, n+1 MT, DN
(f) n Assoc, Comm.
1. (a) ⊢ (P & − P ) → Q
1 (1) P & −P A (CP)
1 (2) Q 1 ExF
(3) (P & − P ) → Q 1–2 CP
(b) ⊢ P → (−P → Q)
1 (1) P A (CP)
2 (2) −P A (CP)
1,2 (3) P & −P 1,2 &I
1,2 (4) Q 3 ExF
1 (5) −P → Q 2–4 CP
(6) P → (−P → Q) 1–5 CP
Alternate proof:
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ Q 1 vI
(3) −P ∨ (P ∨ Q) 2 Assoc, Comm
(4) P → (− − P ∨ Q) 3 Imp, DN
(5) P → (−P → Q) 4 Imp
(c) ⊢ P ∨ (P → Q)
(1) −P → (P → Q) TI (Ques. 1(a))
(2) − − P ∨ (P → Q) 1 Imp
(3) P ∨ (P → Q) 2 DN
Alternate proof:
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ Q 1 vI
(3) P ∨ (−P ∨ Q) 2 Assoc
(4) P ∨ (P → Q) 3 Imp
(d) ⊢ (P → Q) ∨ (Q → P )
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ (Q ∨ −Q) 1 vI
(3) (−P ∨ Q) ∨ (−Q ∨ P ) 2 Assoc (×2), Comm (×3)
(4) (P → Q) ∨ (Q → P ) 3 Imp (×2)
(e) ⊢ (P → Q) ∨ (Q → R)
(1) Q ∨ −Q ExM
(2) −P ∨ (Q ∨ −Q) 1 vI
(3) (−P ∨ (Q ∨ −Q)) ∨ R 2 vI
(4) (−P ∨ Q) ∨ (−Q ∨ R) 3 Assoc (×2)
(5) (P → Q) ∨ (Q → R) 4 Imp (×2)
(f) ⊢ (−P → P ) → P
1 (1) −P → P A (CP)
2 (2) −P A (RAA)
1,2 (3) P 1,2 MP
1,2 (4) P & −P 2,3 &I
1 (5) P 2–4 RAA, DN
(6) (−P → P ) → P 1–5 CP
Alternate proof:
1 (1) −P → P A (CP)
1 (2) −−P ∨ P 1 Imp
1 (3) P ∨P 2 DN
1 (4) P 3 Idem
(5) (−P → P ) → P 1–4 CP
1. (a) ⊢ P ∨ −P
1 (1) −(P ∨ −P ) A (RAA)
2 (2) P A (RAA)
2 (3) P ∨ −P 2 vI
1,2 (4) (P ∨ −P ) & − (P ∨ −P ) 1,3 &I
1 (5) −P 2–4 RAA
1 (6) P ∨ −P 5 vI
1 (7) (P ∨ −P ) & − (P ∨ −P ) 1,6 &I
(8) P ∨ −P 1–7 RAA, DN
1. Write the complete truth table for each of the following wffs:
(a)
− (P ∨ Q) & − (P & Q)
F T T T F F T T T
F T T F F T T F F
F F T T F T F F T
T F F F T T F F F
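Tables like these can also be generated by machine. A short Python sketch (the helper `truth_table` and the variable conventions are our own, not part of the text) reproduces the final column of wff (a):

```python
from itertools import product

def truth_table(formula, variables):
    """Return one (valuation, value) pair per row, with rows
    ordered T-first as in the tables above."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((values, formula(env)))
    return rows

# Wff (a): −(P ∨ Q) & −(P & Q)
wff_a = lambda e: (not (e["P"] or e["Q"])) and (not (e["P"] and e["Q"]))

for values, result in truth_table(wff_a, ["P", "Q"]):
    print(*("T" if v else "F" for v in values), "|", "T" if result else "F")
# The main column comes out F, F, F, T, matching the table above.
```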
(b)
− (− (P → Q) & − (Q → P ))
T F T T T F F T T T
T T T F F F F F T T
T F F T T F T T F F
T F F T F F F F T F
(c)
(d)
P → (Q → R)
T T T T T
T F T F F
T T F T T
T T F T F
F T T T T
F T T F F
F T F T T
F T F T F
(a)
− (P → Q)
T T F F
(b)
− (P ↔ Q)
T T F F
(c)
(− P ↔ Q)
F T T F
131
(d)
(− P & − Q) ∨ (P & Q)
F T F F T T T T T
F T F T F F T F F
T F F F T F F F T
T F T T F T F F F
(a)
P → P
T T T Tautology
F T F
(b)
− P → P
F T T T
T F F F
Contingent
(c)
P → (− P → P )
T T F T T T Tautology
F T T F F F
(d)
(P ∨ −P ) → (P & − P )
T F F
Contradiction
4. Check the following sequents for validity. Use short-cuts where they are suitable.
(a)
P ∨ Q − Q P
F F F T F F
This is valid, since making the conclusion F forces a premiss
to be F.
(b)
P → Q − Q − P
T F F T F F T
Valid
(c)
P → Q P → (P & Q)
T F F T F T F F
Valid
(d)
P → Q − P − Q
F T T T F F T
Invalid, since there is a line in the truth table in which the
premisses are T and the conclusion F.
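The same brute-force idea mechanizes these validity checks: a sequent is invalid exactly when some row makes every premiss true and the conclusion false. A Python sketch (the function and helper names are our own):

```python
from itertools import product

implies = lambda a, b: (not a) or b

def is_valid(premisses, conclusion, variables):
    """Truth-table test for a propositional sequent: valid iff no
    valuation makes all premisses true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premisses) and not conclusion(env):
            return False  # counterexample row found
    return True

# (b) P → Q, −Q ⊢ −P (modus tollens): valid
print(is_valid([lambda e: implies(e["P"], e["Q"]), lambda e: not e["Q"]],
               lambda e: not e["P"], ["P", "Q"]))   # True

# (d) P → Q, −P ⊢ −Q (denying the antecedent): invalid
print(is_valid([lambda e: implies(e["P"], e["Q"]), lambda e: not e["P"]],
               lambda e: not e["Q"], ["P", "Q"]))   # False
```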
1. Use truth tables to classify the following wffs as either tautologous, inconsistent, or
contingent.
(a)
P ∨ (Q & R)
F F F F F
T T T T T
Contingent; to demonstrate this, all we have to do is show
that there is at least one line that comes out T, and one that
comes out F.
(b)
P → (R & S)
T F F F
T T T T T
Contingent
(c)
P → (Q → P )
T T T T
Tautology
(d)
(P → R) → (P → (P & R))
T F F T T F T F F
Tautology; since the only way to make the consequent F
makes the antecedent F as well.
(e)
(P → (Q → P )) → R
T T T
T F F
Contingent
(f)
(P ↔ Q) ↔ (Q ↔ − P)
T T T F T F F T
T F F F F T F T
F F T F T T T F
F T F F F F T F
Contradiction
(g)
P → (R & − R)
T F F
F T F
Contingent
(h)
(P → R) → − P
T T T F F T
F T T T F
Contingent
(i)
− (P → Q) & − (Q → P)
F T T T F F T T T
T T F F F F F T T
F F T T F T T F F
F F T F F F F T F
Contradiction
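Classification, too, can be automated: a wff is a tautology when every row comes out T, a contradiction when every row comes out F, and contingent otherwise. A Python sketch (names are our own):

```python
from itertools import product

implies = lambda a, b: (not a) or b

def classify(formula, variables):
    """Classify a wff by evaluating it on every valuation."""
    results = [
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    ]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingent"

# (c) P → (Q → P)
print(classify(lambda e: implies(e["P"], implies(e["Q"], e["P"])), ["P", "Q"]))
# tautology

# (i) −(P → Q) & −(Q → P)
print(classify(lambda e: (not implies(e["P"], e["Q"]))
                         and (not implies(e["Q"], e["P"])), ["P", "Q"]))
# contradiction
```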
(a)
P → Q Q → R P → R
T F F F T F T F F
Valid
(b)
P ↔ Q Q ↔ R P ↔ R
T F F F T F T F F
F F T T T T F F T
Valid
(c)
P → Q R → Q P → R
T T T F T T T F F
Invalid
(d)
P → Q − Q → P
F T F T F F F
Invalid
(e)
P → Q R → S − Q ∨ − S − P ∨ − R
T T T T T T F T F F T F T F F T
Valid
(f)
P ∨ Q P → R Q → R R
F F F F T F F T F F
Valid
(g)
P → Q P → (P ∨ Q)
F T T T T T
Valid, since the conclusion is a the-
orem.
(h)
P → Q P → (P & Q)
T F F T F T F F
Valid
(i)
P ∨ R P → (R ∨ S) − S − P
T T T T T T T F T F F T
Invalid
(j)
P → (M & N ) M → R N → S − (R ∨ S) − P ∨ M
T F F F F F T F F T F T F F F F T F F
Valid
Exercise 6.7
1(b)  P ∨ Q ✓
      −P          The tree closes; therefore the sequent is valid.
      −Q
       /   \
      P     Q
      ×     ×
1(c)  P ∨ Q ✓
      P → R ✓       The tree closes; therefore the sequent is valid.
      Q → S ✓
      −(R ∨ S) ✓
      −R
      −S
        /      \
      −Q        S
      /  \      ×
    −P    R
    /  \  ×
   P    Q
   ×    ×
1(e)  P → Q ✓
      −(P → (P & Q)) ✓     The tree closes; therefore the sequent is valid.
      P
      −(P & Q) ✓
        /      \
      −P        −Q
       ×       /    \
             −P      Q
              ×      ×
1(g)  P
      −(P → Q) ✓     The tree is open; therefore the sequent is invalid.
      P
      −Q
1(j)  P → (Q & R) ✓
      −Q              The tree is closed; therefore the sequent is valid.
      P
        /      \
      −P        Q & R ✓
       ×        Q
                R
                ×
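A closed tree establishes that the premisses together with the negated conclusion cannot all be true at once. For propositional sequents this semantic fact can be double-checked by brute force, as in the following Python sketch (names are our own):

```python
from itertools import product

def tree_closes(wffs, variables):
    """True iff no valuation satisfies all the wffs jointly,
    which is the fact that a closed truth tree establishes."""
    return not any(
        all(w(dict(zip(variables, values))) for w in wffs)
        for values in product([True, False], repeat=len(variables))
    )

# 1(b): premisses P ∨ Q and −P, stacked with the negated conclusion −Q.
print(tree_closes([lambda e: e["P"] or e["Q"],
                   lambda e: not e["P"],
                   lambda e: not e["Q"]], ["P", "Q"]))   # True: closes, valid

# 1(g): premiss P with negated conclusion −(P → Q) is satisfiable.
print(tree_closes([lambda e: e["P"],
                   lambda e: not ((not e["P"]) or e["Q"])], ["P", "Q"]))  # False: open
```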
OR
(e) ∀x(M x → −V x)
(f) T mn
(g) xM x
B
1. (a) 1 (1) ∀x(Ahx → −∃yT yx) A
2 (2) Ahb A
1 (3) Ahb → −∃yT yb 1 UE
1,2 (4) −∃yT yb 2,3 MP
1,2 (5) ∀y − T yb 4 Duality
1,2 (6) −T sb 5 UE
(b) 1 (1) ∀x(U x → N x) A
2 (2) U r & Br A
2 (3) Ur 2 &E
2 (4) Br 2 &E
1 (5) Ur → Nr 1 UE
1,2 (6) Nr 3,5 MP
1,2 (7) Br & N r 4,6 &I
1,2 (8) ∃x(Bx & N x) 7 EI
(c) 1 (1) Apc A
2 (2) Dc A
1,2 (3) Apc & Dc 1,2 &I
1,2 (4) ∃x(Apx & Dx) 3 EI
(d) 1 (1) ∀x(U x → N x) A
2 (2) ∃x(Bx & U x) A
3 (3) ∀x(U x → M x) A
(4) a := x(Bx & U x) Df.
2 (5) Ba & U a 2,4 EE
2 (6) Ba 5 &E
2 (7) Ua 5 &E
3 (8) Ua → Ma 3 UE
2,3 (9) Ma 7,8 MP
1 (10) Ua → Na 1 UE
1,2 (11) Na 7,10 MP
1,2,3 (12) M a & Ba & N a 6,9,11 &I
1,2,3 (13) ∃x(M x & Bx & N x) 12 EI
(e) 1 (1) ∀x(U x → M x) & ∀x(Lx → M x) A
2 (2) ∀x(U x → −Ax) A
3 (3) ∃x(U x & Cxk) A
(4) a := x(U x & Cxk) Df.
3 (5) U a & Cak 3,4 EE
3 (6) Ua 5 &E
2 (7) U a → −Aa 2 UE
2. (a) Sequent 62
i. ∃x(P & F x) ⊢ P & ∃xF x
1 (1) ∃x(P & F x) A
(2) a := x(P & F x) Df.
1 (3) P &Fa 1,2 EE
1 (4) P 3 &E
1 (5) Fa 3 &E
1 (6) ∃xF x 5 EI
1 (7) P & ∃xF x 4,6 &I
ii. This could be a good exam question!
(b) Sequent 63
i. 1 (1) ∀x(P & F x) A
1 (2) P &Fa 1 UE
1 (3) P 2 &E
1 (4) Fa 2 &E
1 (5) ∀xF x 4 UI
1 (6) P & ∀xF x 3,5 &I
ii. This could be a good exam question!
(c) Sequent 64
i. 1 (1) ∃x(P ∨ F x) A
(2) a := x(P ∨ F x) Df.
1 (3) P ∨ Fa 1,2 EE
4 (4) P A (vE)
4 (5) P ∨ ∃xF x 4 vI
6 (6) Fa A (vE)
6 (7) ∃xF x 6 EI
6 (8) P ∨ ∃xF x 7 vI
1 (9) P ∨ ∃xF x 3,4–5,6–8 vE
ii. This could be a really good question for the exam! Hint: it is basically the
reverse of the above, with EE’s inside vE subproofs.
(d) Sequent 65
i. 1 (1) ∀x(P ∨ F x) A
1 (2) P ∨ Fa 1 UE
3 (3) P A (vE)
3 (4) P ∨ ∀xF x 3 vI
5 (5) Fa A (vE)
5 (6) ∀xF x 5 UI
5 (7) P ∨ ∀xF x 6 vI
1 (8) P ∨ ∀xF x 2,3–4,5–7 vE
This proof illustrates a subtle point. It might seem that the use of UI in
line 6 is illegitimate, since it is done on the term letter a that appears in the
assumption on line 5. The catch is that a was actually introduced on line
2, where it was treated as an entirely arbitrary constant to which line 1 is
applied. In other words, a was not actually introduced in an assumption! It
is the idea that a is F that is assumed in line 5.
ii. Another possible exam question!
(e) Sequent 66
i. 1 (1) ∃x(P → F x) A
2 (2) P A
(3) a := x(P → F x) Df.
1 (4) P → Fa 1,3 EE
1,2 (5) Fa 2,4 MP
1,2 (6) ∃xF x 5 EI
1 (7) P → ∃xF x 2–6 CP
ii. 1 (1) P → ∃xF x A
2 (2) −∃x(P → F x) A (RAA)
2 (3) ∀x − (P → F x) 2 Duality
2 (4) −(P → F a) 3 UE
2 (5) P & − Fa 4 Neg→
2 (6) P 5 &E
2 (7) −F a 5 &E
2 (8) ∀x − F x 7 UI
2 (9) −∃xF x 8 Duality
1,2 (10) ∃xF x 1,6 MP
1,2 (11) ∃xF x & − ∃xF x 9,10 &I
1 (12) ∃x(P → F x) 2–11 RAA
(f) Sequent 67
i. 1 (1) ∀x(P → F x) A
2 (2) P A (CP)
1 (3) P → Fa 1 UE
1,2 (4) Fa 2,3 MP
1,2 (5) ∀xF x 4 UI
1 (6) P → ∀xF x 2–5 CP
ii. The converse is left as an exercise.
(g) Sequent 68
i. 1 (1) ∃x(F x → P ) A
2 (2) ∀xF x A (CP)
(3) a := x(F x → P ) Df.
1 (4) F a → P 1,3 EE
2 (5) Fa 2 UE
1,2 (6) P 4,5 MP
1 (7) ∀xF x → P 2–6 CP
C
1. Prove ∃xF x ⊣⊢ −∀x − F x
2. Prove ∀x − F x ⊣⊢ −∃xF x
(a) 1 (1) ∀x − F x A
2 (2) ∃xF x A
(3) a := xF x Df.
2 (4) Fa 2,3 EE
1 (5) −F a 1 UE
1,2 (6) Fa& − Fa 4,5 &I
2 (7) −∀x − F x 1–6 RAA
1,2 (8) ∀x − F x & − ∀x − F x 1,7 &I
1 (9) −∃xF x 2–8 RAA
Note the double RAA in this elegant proof, whose essential structure is due to
Lemmon.
(b) 1 (1) −∃xF x A
2 (2) Fa A
2 (3) ∃xF x 2 EI
1,2 (4) ∃xF x & − ∃xF x 1,3 &I
1 (5) −F a 2–4 RAA
1 (6) ∀x − F x 5 UI
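Over any finite domain, the duality just proved can be confirmed exhaustively: however the predicate F is interpreted, ∀x −Fx and −∃x Fx receive the same truth value. A Python sketch (the three-element domain is an arbitrary choice of ours):

```python
from itertools import product

domain = [0, 1, 2]

# Each interpretation of a one-place predicate F over the domain is a
# choice of truth value for every element.
for bits in product([True, False], repeat=len(domain)):
    F = dict(zip(domain, bits))
    all_not_F = all(not F[x] for x in domain)    # ∀x −Fx
    not_some_F = not any(F[x] for x in domain)   # −∃x Fx
    assert all_not_F == not_some_F
print("duality confirmed for every interpretation of F")
```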
3. Prove ∃x − F x ⊣⊢ −∀xF x
(a) 1 (1) ∃x − F x A
2 (2) ∀xF x A (RAA)
(3) a := x(−F x) Df.
1 (4) −F a 1,3 EE
2 (5) Fa 2 UE
1,2 (6) Fa& − Fa 4,5 &I
1 (7) −∀xF x 2–6 RAA
(b) 1 (1) −∀xF x A
2 (2) −∃x − F x A
3 (3) −F a A
3 (4) ∃x − F x 3 EI
2,3 (5) ∃x − F x & − ∃x − F x 2,4 &I
2 (6) Fa 3–5 RAA
2 (7) ∀xF x 6 UI
1,2 (8) ∀xF x & − ∀xF x 1,7 &I
1 (9) ∃x − F x 2–8 RAA
B
1. (a) Apr & ∀y(Ayr → (y = p))
(b) −(Aar & ∀y(Ayr → (y = a)))
This is equivalent to
−Aar ∨ −∀y(Ayr → (y = a))
and this is equivalent to
−Aar ∨ ∃y(Ayr & (y ≠ a)).
2. (a) The author of the Republic was the author of the Timaeus.
(b) Plato was the author of the Republic and the Timaeus.
3. 1 (1) P l A
(2) l=l =I
1 (3) P l & (l = l) 1,2 &I
1 (4) ∃x(P x & (x = l)) 3 EI
4. 1 (1) Kp A
2 (2) a ≠ p A
3 (3) ∀x∀y((Kx & Ky) → (x = y)) A
3 (4) (Kp & Ka) → (p = a) 3 UE (× 2)
2,3 (5) −(Kp & Ka) 2,4 MT
2,3 (6) −Kp ∨ −Ka 5 deM
1,2,3 (7) −Ka 1,6 DS
Bibliography
[1] Benacerraf, Paul, and Putnam, Hilary (eds.) Philosophy of Mathematics: Selected
Readings. Englewood Cliffs, NJ: Prentice-Hall, 1964.
[2] Bergmann, M., Moor, J., and Nelson, J. The Logic Book. Second Edition. New
York: McGraw Hill, 1990.
[3] Birch, David. Illustrations by Devis Grebu. The King’s Chessboard . New York:
Penguin, 1988.
[4] Boolos, George, and Jeffrey, Richard. Computability and Logic. Second Edition.
Cambridge: Cambridge University Press, 1980.
[5] Brown, George S. The Laws of Form. London: George Allen and Unwin, 1969.
[6] Copi, I. M., and Cohen, C. Introduction to Logic. Tenth Edition. Upper Saddle
River, NJ: Prentice-Hall, 1998.
[7] DeVidi, David, and Solomon, Graham, “On Confusions About Bivalence and Ex-
cluded Middle”, Dialogue 38(4), 1999, 785–799.
[9] Halmos, P. R. Naive Set Theory. New York: Van Nostrand Reinhold, 1960.
[10] Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. New York:
Basic Books, 1979.
[14] Jeffrey, R. Formal Logic, Its Scope and Limits. Third Edition. New York: McGraw-
Hill, 1991.
[17] Lemmon, E. J. Beginning Logic. Indianapolis, IN: Hackett, 1978. First published
1965, Van Nostrand Reinhold.
[18] Mates, Benson. Elementary Logic. Second Edition. New York: Oxford University
Press, 1972.
[19] Rucker, Rudy. Infinity and the Mind: The Science and Philosophy of the Infinite.
New York: Bantam Books, 1983.
[20] Russell, Bertrand. Introduction to Mathematical Philosophy. London: Allen & Un-
win, 1918.
[21] Russell, Bertrand. My Philosophical Development. London: Allen & Unwin, 1959.
[24] Smullyan, Raymond. What is the Name of This Book? Englewood Cliffs, NJ:
Prentice-Hall, 1978.