An Introduction To Reasoning - Cathal Woods, 2010


AN INTRODUCTION TO REASONING

CATHAL WOODS

2011, 2010 This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

TABLE OF CONTENTS

Preface

Chapter 1 Critical Reasoning
 1.1 The Critical Reasoner
 1.2 Critical Reasoning
 1.3 Overview: Part 1 - Analysis & Basic Evaluation
 1.4 Overview: Part 2 - Induction & Scientific Reasoning
 1.5 Overview: Part 3 - Deduction
 1.6 Related Fields - Decision-Making & Problem-Solving

PART 1 - ANALYSIS & BASIC EVALUATION

Chapter 2 Recognizing & Classifying Reasoning
 2.1 Reasoning, Arguing & Explaining
 2.2 Identifying Reasoning
 2.3 The Relationship Between Arguing & Explaining
 2.4 Sentences & Propositions

Chapter 3 Standard Form & Diagrams
 3.1 Standard Form
 3.2 Diagrams
 3.3 Reasoning With A Conjunction In The Target Proposition
 3.4 Compound Reasoning
 3.5 Analysis - Summary (So Far)
 3.6 Objections & Rebuttals
 3.7 Analyzing Long Passages
 3.8 Analyzing Very Long Passages

Chapter 4 Evaluation - Introduction
 4.1 Two Criteria
 4.2 Getting Clear On The Meaning
 4.3 Sources
 4.4 Reason Substitutes
 4.5 Evaluating The Various Reasoning Structures
 4.6 Adding Warrants
 4.7 Sincerity & Charity

Chapter 5 Basic Evaluation Of Arguments
 5.1 Soundness
 5.2 Validity
 5.3 Cogency
 5.4 Validity & Cogency Contrasted
 5.5 Adding Warrants To Arguments

PART 2 - INDUCTION & SCIENTIFIC REASONING

Chapter 6 Induction
 6.1 Introduction
 6.2 Inductive Generalization (IG)
 6.3 Instantiation Syllogism (IS)
 6.4 Induction To A Particular (IP)
 6.5 A Summary Of Argument Forms - IG, IS, IP

Chapter 7 Evaluating Explanations
 7.1 Truth Of The Reason(s)
 7.2 Correlation
 7.3 Different Strengths Of Correlation
 7.4 The Present-Present Fallacy
 7.5 Correlation & Causation
 7.6 A Summary Of Forms - Explanation

Chapter 8 More About Discovering Correlations
 8.1 Introduction
 8.2 Sufficient Condition, Necessary Condition
 8.3 Contributing Factors As INUS Conditions
 8.4 Randomized Experimental Studies
 8.5 Controlled Experiments
 8.6 Inference To The Best Explanation (IBE)
 8.7 A Summary Of Terminology & Forms

Chapter 9 Arguments Using Correlations
 9.1 Introduction
 9.2 Inference To An Explainee (IE)
 9.3 Inference To The Most Likely Explainer (ML)
 9.4 Argument By Analogy (AAn)
 9.5 A Summary Of Argument Forms - IE, ML, AAn

PART 3 - DEDUCTION

Chapter 10 The Venn Diagram Method
 10.1 Categorical Generalizations
 10.2 Some Extras On Categorical Generalizations
 10.3 Venn Diagrams & Categorical Generalizations
 10.4 Existential Commitment
 10.5 Immediate Inferences, Categorical Syllogisms & The Venn Diagram Method

Chapter 11 The Big 8 Method
 11.1 Introduction
 11.2 Logically Structured English Propositions
 11.3 Simple Propositions, Compound Propositions, Ambiguous Propositions
 11.4 Some Extras On Negations, Disjunctions, & Conjunctions
 11.5 Some Extras On Conditionals
 11.6 The Big 8 Method; Asserting The Antecedent
 11.7 Asserting The Consequent
 11.8 Contradicting The Consequent
 11.9 Contradicting The Antecedent
 11.10 Hypothetical Syllogism, Constructive Dilemma, Destructive Dilemma & Disjunctive Syllogism
 11.11 Recap Of The Big 8 Method
 11.12 A Summary Of Argument Forms - The Big 8

Chapter 12 The Method Of Derivation
 12.1 Advantages Of The Method Of Derivation
 12.2 Logically Structured Symbolic Propositions
 12.3 Rules Of Derivation
 12.4 The Method Of Derivation
 12.5 Three Additional Rules Of Derivation
 12.6 Rules Of Equivalence
 12.7 Conditional Derivations & Indirect Derivations
 12.8 A Summary Of Rules - Method Of Derivation

Chapter 13 The Truth Table Method & The Truth Tree Method
 13.1 Advantages & Disadvantages Of The Truth Table Method
 13.2 Truth Values & Truth Tables For The Logical Operators
 13.3 Setting Up Truth Tables
 13.4 The Truth Table Method
 13.5 Logical Equivalence & Inequivalence, & Logical Contradiction
 13.6 Targeted Truth Tables
 13.7 The Truth Tree Method
 13.8 A Summary Of Truth Conditions - Truth Tables & Truth Trees

APPENDICES

Appendix To Chapter 6 Problems In Inductive Logic
 6A.1 The Lottery Paradox
 6A.2 The Problem Of Induction
 6A.3 The New Riddle Of Induction

Appendix To Chapter 7 Mill's Methods
 7A.1 Introduction
 7A.2 The Method Of Agreement
 7A.3 The Method Of Double Agreement
 7A.4 The Method Of Difference
 7A.5 The Method Of Concomitant Variation
 7A.6 The Methods & Cogency
 7A.7 Summary Of Forms - Mill's Methods

Notes For Teachers

Bibliography

Summary Of Forms, Terminology, Etc.

Preface

1. The main implicit aims of this text are two-fold. First, it focuses heavily on the reasoning that is just beyond the natural ability of many human beings (and not much further). For some people, this book will simply encode how they already think, but for most people, the reasoning described here goes beyond the kind of reasoning they do in everyday life. Deductive logic, covered here in part 3, provides an example that will be familiar to anyone who works in logic or the psychology of reasoning: while both disjunctive syllogism (or argument by elimination) and one form of reasoning with conditionals, asserting the antecedent (or modus ponens), are easy, the form of conditional reasoning called contradicting the consequent (or modus tollens) is, in many contexts and in the abstract, not. Similarly, differentiating between the logical import of "if" and "only if" does not come naturally to most people. One idea that this text emphasizes (which is not usually found in critical reasoning and logic courses, though it sometimes appears in introductory courses in the guise of necessary and sufficient conditions) is the difference between relation and correlation. Human beings tend to mistake relation for correlation and to make generalizations about relations on the basis of scant or selective experience. Part 2 thus rehearses this idea in a variety of guises (a double use of induction, the present-present fallacy, necessary and sufficient conditions) in an attempt to make students familiar with the idea, or at least instinctively cautious when a claim of correlation (or cause) is made. The second aim of the text stems from a frank acknowledgment that reasoning is as much art as it is science. Any reader (or student or teacher) who wishes to stick to deductive logic and believes that natural-language passages can be cleanly regimented into abstract form is fooling herself.
Rather, a guiding thought behind the text is that the reader will be a better reasoner if she has some idea of how difficult good reasoning can be, in terms of what we as reasoners are trying to do. Paradoxically, I think that a reader can be made to feel more in command of the reasoning enterprise and be willing to assume responsibility for it by being exposed to the straining seams and the cracks. The text thus goes some way toward alerting readers to various difficulties, on topics such as the separation of analysis and evaluation, breaking apart propositions, analyzing long passages such as editorials, whether a bad reason or argument is a reason or argument at all, how sample selection can go wrong, the amount of work one needs to do to confidently accept a conclusion of an argument involving a generalization, the problem

of induction (treated at length in an appendix), correlation and causation, scientific methods, and others.

2. Beyond an introductory chapter, An Introduction To Reasoning is divided into three parts, of four chapters each. The first covers analysis and basic evaluation. The ultimate goal is to provide a method that will allow readers to tackle lengthy arguments and explanations such as are found in editorials and popular scientific writing. To this end, a thorough treatment of analyzing passages by diagramming is provided. Chapter 4 distinguishes evaluating the reasons from evaluating the reasoning, and chapter 5 draws the distinction between validity and cogency. The second part introduces evaluation and focuses largely on induction and scientific reasoning as the way in which much of the knowledge used in evaluation is generated. These general principles or background statements are called warrants. Chapters 6, 7 and 8 are a prolonged discussion of how our experience is processed using induction in order to arrive at statements of the relationships and correlations between different types of thing. Chapter 9 discusses arguments involving explanations and theories: Inference to an Explainee, Inference to the Most Likely Explainer and Argument By Analogy. The third part covers specific methods for evaluating arguments (or inference in general) for validity, or what is commonly called deductive logic. It begins (in chapter 10) with a rudimentary treatment of categorical logic before spending three chapters on propositional logic. Chapter 11 provides two intermediate steps on the route to propositional logic: first, in "logically structured English" the propositions are symbolized but the logical words are left in English; second, the "Big 8 Method" focuses on the most used inference forms as stand-alone patterns of inference. Confident readers might skip to chapter 12 after reading 11.5. For a longer overview, see chapter 1.
Different readers will find the book incomplete in different ways. There is no treatment of probability or Bayes' Rule in part 2; there is no treatment of predicate logic in part 3. For classroom use, however, it is more than sufficient for a complete 15-week term in critical reasoning or introductory logic. Notes For Teachers are included as an appendix for readers looking for background and further references for various points.


3. The book is made available on-line (in pdf) for free, under a Creative Commons License. (See the title page for details.) Being on-line hopefully means that the book can be easily updated with both new material in the text and additional questions for the exercise sets sent in by readers like you. Please send your exercise suggestions to cathalwoods at gmail dot com. Criticisms and suggestions for the text can also be sent to this address.

Thanks

1. Intellectual debts too great to articulate are owed to scholars too many to enumerate. At different points in the work, learned readers might well detect the influence of those who have been particularly stimulating, including Toulmin (especially 1958), Fisher & Scriven (1997), Walton (especially 1996), Epstein (2002), Johnson-Laird (especially 2006), Scriven (1962), and Giere (1997).

2. Thanks are due to Virginia Wesleyan College for providing me with 'Summer Faculty Development' funding in 2008, and Gaby Alexander (2008), Ksera Dyette (2009), Mark Jones (2008), Lauren Perry (2009), and Alan Robertson (2010) with undergraduate research funds.

3. Particular thanks are due to my (once) Ohio State colleague Bill Roche. The book began as a collection of lecture notes, combining work by myself and Bill.

Dedication

1. This work is dedicated to all of our friends from our time at Ohio State, including the "cast" of the book: Jack (Arnold), (Nick) Jones, Gill (McIntosh), Henry (Pratt), (Josh) Smith, and Jim (the Great Dane).

Cathal Woods
Norfolk, Virginia, USA


Chapter 1 Critical Reasoning

1.1 The Critical Reasoner

1. This book will show you how to reason well. But knowing how to reason well is not even half the battle. And so, in the interest of "truth in advertising", it must be said clearly and without delay that this book will not improve your reasoning unless you want it to. This book will tell you a lot about reasoning and how to reason well, but its impact will be limited unless you are dissatisfied with your current standard of reasoning and are determined to improve it. In the style of a "12 Step" program, the first step is to say:

Hello. My name is ________________ and I have a problem with reasoning.

If you think you reason just fine, you can put away this book now. If you are continuing on, let it be said again that while knowing about good reasoning is commendable, being the kind of person who actually employs reasoning skills under any kind of pressure is a whole 'nother ball of wax. Many intelligent people, who know all about good reasoning, nonetheless fail to employ those standards when they lift their heads from the book and leave the library. We might distinguish, therefore, the IQ (intelligence quotient) of an individual from what might be called the RQ or rationality quotient of an individual. Or, we can distinguish critical reasoning (in the sense of the theory and description of good reasoning) from the critical reasoner (the person who is able to reason, in the wild, in accordance with these standards).

2. Some people admit that they have a problem with reasoning, but don't think that learning about and practicing reasoning is worth the effort. They think that thinking about things only makes one confused and that one's "gut instinct" is sufficient for most situations. Plus, thinking things through is hard and the effort isn't worth the pay-off. It is true that confusion sometimes results from thinking. And it is also true that thinking things through is difficult.
But even though thinking is sometimes confusing and difficult, and although we often escape unharmed when we form beliefs without any critical thought, life goes better, generally speaking, when we think carefully about things.

Stop to think for a moment about the last dumb thing you did or said. A hasty judgment? Telling less than the whole truth? Dodging responsibility? Unneeded purchases? The list goes on and on. Why did you do it? Did you think about it at all? If you did, were the reasons and the reasoning any good? Unfortunately, humans make many judgments and decisions and form explanations without reasoning at all. And even when they do pause to think, they sometimes reason rather poorly. A lot of the time we get away with it, but the slightest complication will quickly have us feeling foolish. It is perhaps for reasons like these that you are interested in improving your reasoning.

3. Unfortunately, you cannot simply improve your rationality quotient or become a critical reasoner by deciding to always be rational, because how you react to real-world situations is a matter of habit, and habits can only be modified with practice. Making rationality habitual requires the same attention and training that developing any mental habit requires. As you are no doubt aware, lots of people make resolutions only to break them quickly. The difference between knowing about reasoning and using that knowledge is rather like the difference between knowing what foods you should eat in order to be healthy and actually eating those foods. Just as hunger has a strange power to short-circuit our knowledge of healthy food, so something gets in the way of employing reasoning skills. What is it that gets in the way? In a phrase: short-term benefit. Human reasoning is typically governed not by the methods of reasoning and the goal of truth but by processes which are biased in favor of short-term benefit. Appetites and emotions often crowd out our reasoning with their demand for swift satisfaction. These desires provide strong and quick-acting processes for generating beliefs and decisions. These are often important to satisfy, but in some situations they can go wrong.
For example, we desire to get goods for ourselves, but we sometimes do so foolishly or unjustly. We eat anything within eyesight; we procrastinate because procrastination always pays off in the short term, while long-term success is at best probable; we are persuaded to buy what others are selling by appeal to our short-term desires. We also desire to strengthen our immediate ties to family, friends, and other communities, but we sometimes do so irrationally. In our desire to fit in, we follow the example of others, or tradition, or popular opinion, or the opinion of authorities, even to

our detriment. We are unduly influenced by threats of exclusion, as when we are accused of being unpatriotic, or unholy, for not adopting some belief or practice. We claim to know more than we do, or with greater certainty than is really warranted, driven by a desire to be informative and so useful to others. These types of desire are sometimes at odds with our better judgment: it would be better, in the long term, not to have another slice of cake, not to put off taking care of business, not to gain status by lying. Admitting that one is wrong, or even that one does not know, is extremely difficult for humans, since being a knower is a source of value to ourselves and to the community. We have an arsenal of tricks available to avoid admitting error. When we have already adopted a belief, we are poor at updating it in the light of new evidence. Even contradictory evidence can be ignored or re-interpreted. Our critical capacity is particularly blunted when the belief in question is self-serving or one that we wish to be true. It is natural to be defensive about changing our beliefs; they're our beliefs, after all! When one of our beliefs dies, we feel that we have died. It is only with difficulty that we say "I was wrong," or even "I could be wrong." We find it unsettling to lack an explanation, to suspend judgment or not make a decision. The default activity of our brains seems to be to constantly generate beliefs and theories. But even basic processes such as eyesight can go wrong, as when we are fooled by an optical illusion, for example, or when we see what we have been primed to see or hear what we expect to hear. We detect patterns where there are none because we are seeking patterns and attempting to formulate causal connections, leading people to believe that there are lucky charms or pieces of clothing, or that they have psychic powers or have just been given a message from a ghost or spirit.
When these mental short-cuts or biases combine, as they often do, we can begin to understand why people form cargo cults or believe in alternative medicine, or ESP, or UFOs, or The Secret, or in general, how we end up believing things that "just ain't so". Fueled by our desire to believe, we let down our critical guard at the moment when we most need our critical faculties to do their job.

4. A critical reasoner, by contrast, demands reasons and arrives at judgments or decisions using the methods of reasoning. That is, instead of "Can I believe that?", you must constantly ask "Why should I believe that?" and further "Does the evidence compel me to believe?". Reasoning well requires the autonomy to require evidence, the courage

to follow the evidence where it leads even though it might lead to the abandonment of cherished beliefs, and the patience and perseverance to wrestle with confusing and difficult issues, and with others when they are less than rational. It also requires the humility to recognize that others are often a valuable source of knowledge and that one's own perspective is often incomplete and even WEIRD. Most of all, at the start and at the end and at every point in between, we need modesty and self-awareness. That is, we need to be critical of our reasoning. If we are unwilling to shine a spotlight on how we form beliefs and theories and make decisions, we have no hope of improvement.

1.2 Critical Reasoning

1. Critical reasoning is evaluative reasoning, and, in the context of this book, reasoning which evaluates the reasons and reasoning found in arguments and explanations, as opposed to the evaluation of horses or bourbon or paintings. Arguments and explanations are constructed of propositions. A proposition affirms or denies a predicate of a subject, whether in the present, the past, or the future, and whether real or imagined. A proposition is capable of being either true or false, unlike, say, a question or a command or a wish. If you in fact believe the proposition being entertained, you can also call it a belief, though you can also reason hypothetically, that is, you can make an assumption and see what follows from it. Reasons and reasoning are the heart of both arguments and explanations. In an argument, the truth of an initial proposition(s) is supposed to justify believing that the later proposition is also true. The belief "Jack pushed Gill into the water at the sea-side." is offered in order to justify believing "Jack was mean to Gill.". Similarly, the propositions "Jack is 5' 6"." and "Gill is 5' 4"." provide good reasons for believing "Jack is taller than Gill.". Similarly, consider the following hypothetical argument, or inference: the suppositions "Cats bark."
and "Gill has a cat." would justify the (hypothetical) conclusion that Gill's cat would bark. (Here, there is no commitment to the truth of "Cats bark." and so no commitment to "Gill's cat barks.". This is an inference and a piece of reasoning, but not an argument.) In an explanation, on the other hand, the reasons are supposed to explain, that is, make clear why or how (or, less often, what) something is. For example, the presence of beetles of a certain type might explain a poor harvest, or the sun shining into a player's

eyes might explain a dropped ball. (An argument, by contrast, argues that a certain proposition is true, rather than why it is true.)

2. Analyzing arguments and explanations and evaluating their reasons and reasoning (and, in the same ways, being able to formulate good arguments and explanations) are difficult tasks. The aim of this book is to make you better and faster at these tasks. You might hope that, having learned about how to reason well (and poorly), you will no longer need to evaluate your own reasoning; you will simply be able to reason well without critically reflecting on what you are doing. In fact, however, it's likely that you will always need to reflect critically on your reasoning, but as you get better at it, you will move on to criticizing your reasoning in new and more sophisticated ways. When you evaluate reasoning you are reasoning about the quality of your reasoning. This reasoning (the reasoning about your reasoning) is also subject to evaluation. The human being is a reflective animal and a life without examination is, according to Socrates, not a life fit for human beings.

1.3 Overview: Part 1 - Analysis & Basic Evaluation

1. Before you evaluate whether the reasons offered as a justification for accepting a conclusion as true or as an explanation for some state of affairs really do their job, you have to be sure that you have understood the structure of the reasoning at the heart of the argument or explanation. Analyzing a passage means breaking it down into its parts: the propositions which are the reasons and the proposition which is the conclusion (in an argument) or the explainee (in an explanation). You might think this is easy, and some of the material in chapter 2 will be familiar to you from your everyday acquaintance with arguments and explanations, but as you will see in chapter 3, passages can be complex, and so it is very helpful to have a general analytic procedure.
In particular, chapter 3 presents a method for diagramming passages that involve objections and rebuttals, such as are often found in lengthy texts, from editorials to academic articles and books.

2. Chapter 4 splits the task of evaluation into two: evaluating the truth of the reasons and evaluating the reasoning, that is, the supposed justificatory or explanatory connection between the reasons and the conclusion or explainee. The discussion of the first focuses on problems with determining the meaning of the propositions involved and the trustworthiness of various sources, while the discussion of the second describes

how the basic question of the connection between the reasons and the conclusion or explainee can be applied to the reasoning structures presented in chapter 3.

3. As applied to arguments (in chapter 5), the basic question of the strength of the connection becomes "Do the premises justify belief in the conclusion?". Consider the following argument: "Democracy requires the consent of the governed. Therefore, it is a just form of government.". To critically evaluate the reasoning in this argument, we ask "Does (or would, supposing it to be true) the fact that democracy requires the consent of the governed justify belief that democracy is a just form of government?". The key idea of the chapter is that the strength of the justification can vary; that is, the support given by the premises to the conclusion can be weaker or stronger. Only when it meets some threshold do we accept that the argument is well-reasoned. Below this, we say that the conclusion is not strongly supported and not worthy of acceptance.

1.4 Overview: Part 2 - Induction & Scientific Reasoning

1. The key idea coming from chapters 4 and 5 is that of a warrant. A warrant is a background or connecting proposition which states a condition that the audience must accept as true if the reasoning is to be counted as strong. In a few domains, such as the tax code and the legal code, the warrant is a matter of convention. But in the vast majority of cases, the warrants used in arguments and explanations are generalizations based on experience of the world.

2. Chapter 6 thus begins a discussion of how we arrive at general propositions. A proposition such as "Most Irish people are English speakers." is a generalization because it concerns Irish people generally and English speakers generally, rather than any one or more particular Irish people or English speakers. It is also quantified in that it states the proportion of Irish people who are English speakers. The quantity here is expressed by "most".
We arrive at a quantified general conclusion based on observation of instances. From repeated cases (e.g. "Seán is an Irish person and is an English speaker.", "Enda is an Irish person and is an English speaker." and so on) we infer a quantified general conclusion. This process of ascending from data describing particular instances to a general proposition expressing the extent to which two types of thing are correlated is induction. A simple use of induction is to establish the frequency with which one type of thing (F) is related to another (G). Such generalizations can then be used in arguments to support acceptance of a conclusion.
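This inductive step, tallying repeated cases and ascending to a quantified generalization, can be sketched in a few lines of code. The sample data and the mapping from proportions to quantifying words below are invented for illustration; they are not part of the book's method.

```python
# A minimal sketch of simple induction: tally observed instances to
# estimate how often one type of thing (F: being Irish) goes along
# with another (G: speaking English). All data here is invented.

observations = [
    {"name": "Sean",   "irish": True,  "english_speaker": True},
    {"name": "Enda",   "irish": True,  "english_speaker": True},
    {"name": "Maire",  "irish": True,  "english_speaker": True},
    {"name": "Pierre", "irish": False, "english_speaker": False},
]

irish = [o for o in observations if o["irish"]]
speakers = [o for o in irish if o["english_speaker"]]
proportion = len(speakers) / len(irish)

# Map the observed frequency onto a rough quantifying word.
if proportion == 1.0:
    quantifier = "All"
elif proportion > 0.5:
    quantifier = "Most"
elif proportion > 0.0:
    quantifier = "Some"
else:
    quantifier = "No"

print(f"{quantifier} Irish people (in this sample) are English speakers.")
```

With this toy sample the inference yields "All ...", which illustrates exactly the worry chapter 6 raises: a scant or selectively chosen sample can license an overly strong generalization.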

3. Explanations (in chapter 7) require generalizations which not only say that all or most Fs are Gs, but also that G is less likely when F is absent. F and G, that is, must be correlated. A few other conditions are also required, to strengthen the correlation into a causal claim.

4. Chapter 8 provides another opportunity to work on the crucial idea of correlation, this time in the guise of the distinction between 'necessary condition' and 'sufficient condition'. Some things are sufficient and not necessary for others (such as "Selling 1 million copies of a recording in the U.S. is sufficient for being awarded a platinum disc."), some are necessary and not sufficient (e.g. being bitten by a mosquito is necessary, but not sufficient, for contracting malaria), some are both necessary and sufficient (e.g. receiving the most votes is necessary and sufficient for winning an election), and some are neither necessary nor sufficient (or "accidental", e.g. being red is neither necessary nor sufficient for being a chair). The distinction between necessary and sufficient conditions is used to discuss the fact that many of our explanatory principles, although they express a correlation, are contributing factors (here understood as INUS conditions) which are causes only in the company of other factors. For example, we cannot yet describe any set of states which are together sufficient for a smoker to develop lung cancer. Rather, we can only say, with a certain level of confidence, that smoking raises the probability of lung cancer by a certain percentage, as compared with not smoking. The second half of chapter 8 discusses randomized experimental studies and controlled experiments as ways in which humans continue to refine our scientific knowledge, so that we can arrive at the best explanation.

5. Chapter 9 considers three types of circumstance in which we have some but not all of the pieces of an explanation.
When we have a connecting proposition and the antecedent conditions, we can make an inference to an explainee. Sherlock Holmes and Greg House make use of inference to the most likely explainer in cases where there are multiple alternative explanations but no specific explainer. Finally, this chapter also includes argument by analogy, in which a complex explanation or theory is transferred from one domain to another. Such reasoning is used in word puzzles such as "Foot is to toe as hand is to ______." but is also used to generate candidate explanations when an investigation is at a loss.
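The necessary/sufficient distinction just discussed (from chapter 8) can also be stated as a mechanical test over a table of cases: F is sufficient for G when F never occurs without G, and necessary for G when G never occurs without F. The following sketch uses the mosquito/malaria example with invented data; the helper names `sufficient` and `necessary` are this sketch's own, not functions from the book.

```python
# Test necessary and sufficient conditions against observed cases.
# F is sufficient for G: no case has F present but G absent.
# F is necessary for G: no case has G present but F absent.

def sufficient(cases, f, g):
    """True if F never occurs without G in the recorded cases."""
    return all(c[g] for c in cases if c[f])

def necessary(cases, f, g):
    """True if G never occurs without F in the recorded cases."""
    return all(c[f] for c in cases if c[g])

# Invented data: mosquito bites and malaria.
cases = [
    {"bitten": True,  "malaria": True},
    {"bitten": True,  "malaria": False},  # bitten but healthy
    {"bitten": False, "malaria": False},
]

print(sufficient(cases, "bitten", "malaria"))  # a bite alone is not enough
print(necessary(cases, "bitten", "malaria"))   # no malaria without a bite
```

Of course, such a test only reports what holds in the cases observed so far; as part 2 stresses, moving from that record to a general claim of correlation is itself an inductive step.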

1.5 Overview: Part 3 - Deduction

1. Part 3 concerns deductions, that is, arguments employing background connecting propositions from which any vagueness has been (often artificially) removed and which are taken to be true without reservation; such propositions can thus be used not only to strongly support a conclusion but to guarantee it.

2. Generalizations about relations using the quantifying words "All" and "Some" permit premises which (in some combinations) firmly support a conclusion, as in "All Great Danes are dogs. All dogs are animals. So, all Great Danes are animals.". Arguments involving only propositions which quantify over types (or: categories) of thing are considered in chapter 10, under the traditional heading of categorical logic. The method for evaluating such arguments is the Venn Diagram method.

3. Chapters 11, 12 and 13 concern the logic of arguments involving at least one compound proposition as a principle, that is, at least one conjunction, disjunction or material conditional. These are all compound propositions since they have two or more propositions as parts. The methods we use to evaluate such arguments are methods in propositional logic. We can make propositions about more than one state of affairs at a time by conjoining propositions describing each state with the word "and", as in "Jack is hungry and Gill is hungry.", "Jack is tired and hungry.", "Great Danes are tall and four-legged.", "Jack went up a hill and Gill went up a hill.", "Cats are animals and the Sun is very far from Earth.". The basic form of a conjunction is "<Proposition 1> and <proposition 2>.".
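The categorical example above ("All Great Danes are dogs. All dogs are animals. So, all Great Danes are animals.") can be previewed in code by reading "All F are G" as the set claim that F is a subset of G, which is essentially the idea behind the Venn Diagram method of chapter 10. The membership lists here are invented for illustration.

```python
# A toy model of a categorical syllogism: read "All F are G" as
# the claim that set F is a subset of set G. On any model where
# both premises hold, subset transitivity guarantees the conclusion.

great_danes = {"Jim", "Scooby"}
dogs = {"Jim", "Scooby", "Lassie"}
animals = {"Jim", "Scooby", "Lassie", "Felix"}

premise_1 = great_danes <= dogs      # All Great Danes are dogs.
premise_2 = dogs <= animals          # All dogs are animals.
conclusion = great_danes <= animals  # All Great Danes are animals.

print(premise_1, premise_2, conclusion)
```

Note what the sketch shows: it is not that these particular sets make the conclusion true, but that no way of filling in the sets can make both premises true and the conclusion false, which is what validity amounts to in part 3.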
We can assert multiple propositions as alternatives by using the word "or" (or "or else", if the speaker is taking care to indicate that only one of the alternatives is possible), as in "Jack is hungry or Gill is hungry.", "The dog was killed by someone who knew him, or it was killed in an accident.", "Jack is either hungry or (else) tired.", "Great Danes are either dogs or (else) birds.", and "Jack is asleep or (else) on the telephone.". (A conjunction asserts that both propositions are true; a disjunction asserts that at least one is true.) The basic form of a disjunction is "<Proposition 1> or <proposition 2>.". We can also assert that believing one proposition to be true allows us to believe another, in what is called a material implication or a conditional. Material implications link states together without implying that one is the cause of the other. They merely propose that belief in the two states of affairs can be linked. Consider the proposition, "If the bear scat is warm, then there is a bear nearby.". This proposition is not proposing that

the warm bear scat caused the bear to be nearby (if anything, the reverse is the case); it merely asserts that if we take the first to be true, we can take the second to be true. Material implications have the basic form "If <proposition 1>, then <proposition 2>.". Compound propositions (conjunctions, disjunctions, material conditionals) are complex propositions, that is, they are all propositions which have a proposition as a part. Indeed, conjunctions, disjunctions and material conditionals have propositions for two (or more) of their parts; hence the name "compound". One important complex proposition, though not a compound proposition, is a negation. Negations are often used along with compound propositions. Propositions involving entities, properties and classes can be negated, as in "Jack and Gill are not in the same place.", "The cat did not move.", "The tower is not blue.", "Jack is not a dog.", "Cats are not clay.". Negation can be applied to any proposition. The basic form of a negation is "It is not the case that <proposition>.". Arguments can be constructed by using at least one conjunction, disjunction or material implication and repeating one or more of the constituent propositions in another proposition. For example, if we know "Jack is asleep or on the telephone." and we further learn "Jack is not on the telephone.", we can infer that he is asleep. In this argument, both "Jack is on the telephone." and "Jack is asleep." can be found in different propositions: "Jack is on the telephone." is in the first premise, where it is part of the disjunction "Jack is asleep or on the telephone.", while in the second premise it is found in negated form, "Jack is not on the telephone.". "Jack is asleep." appears in both the first premise and the conclusion. Or again, if we accept "Jim is a dog." and "Jim is at the park." we would also accept "Jim is a dog and Jim is at the park.".
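The connectives described above can be mirrored as Boolean operations. The following is a hedged sketch only, with truth values invented for illustration; it models the "Jack is asleep or on the telephone" inference from the text.

```python
# Hedged sketch: the connectives "and", "or", "not" and the material
# conditional as Boolean operations. Truth values below are assumed
# for the sake of the example.
asleep = True           # "Jack is asleep."
on_telephone = False    # "Jack is on the telephone."
scat_warm = True        # "The bear scat is warm."
bear_nearby = True      # "There is a bear nearby."

conjunction = asleep and on_telephone   # "Jack is asleep and on the telephone."
disjunction = asleep or on_telephone    # "Jack is asleep or on the telephone."
negation = not on_telephone             # "Jack is not on the telephone."

# The material conditional "If P, then Q" is false only when P is true
# and Q is false; it is equivalent to (not P) or Q.
conditional = (not scat_warm) or bear_nearby

# The inference from the text: given the disjunction and the negation of
# one disjunct, the remaining disjunct must be true.
if disjunction and negation:
    print("Jack is asleep:", asleep)   # prints: Jack is asleep: True
```

With the assumed values, the disjunction and the negation are both true, so the sketch prints the remaining disjunct, just as the argument in the text infers that Jack is asleep.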
In this argument the conclusion is a complex proposition (in this case, a conjunction) and the simple propositions "Jim is a dog." and "Jim is at the park." are both found more than once. A variety of methods for evaluating arguments involving compound propositions is considered in chapters 11 (the Big 8 method), 12 (the method of derivation) and 13 (the truth table and truth tree methods). 1.6 Related Fields: Decision-Making & Problem-Solving 1. In addition to "reasoning" and "argument" and "explanation", the three terms "judgment", "decision-making" and "problem-solving" are often used in the titles of books and courses on, or involving, critical reasoning. This book considers judgment

insofar as evaluative reasoning is judgment, but does not consider decision-making or problem-solving. 2. To judge is to reach a conclusion (whether on the basis of propositions or otherwise) and judgment is involved in decision-making and problem-solving. "Judge" and "decide" are difficult to keep separate and are often used interchangeably. "Judge" is more typically used with respect to what to believe, while "decide" is more typically used with respect to what to do. Judgments are about the truth and falsity of propositions describing states of affairs while decisions are about what is good and bad (or best, better and worse). This distinction is not firm, however, since a decision to (for example) go fishing can be construed as a judgment that going fishing is the best thing to do, and in this book you will see plenty of examples of arguments (attempts to get an audience to form a judgment) about what to do. 3. Problem-solving involves decision-making and judging, and an additional activity besides. We are often faced with situations that are entirely open-ended, meaning that we do not have any guide as to what propositions to consider. The questions "What will the weather be this afternoon?", "When will I eat lunch today?", and "What are the causes of the reduction in violent crime of recent years?" could each be answered in a variety of ways: there is no definite suggestion or suggestions on offer as to how the world will be, or what to do, or how to explain. Problem-solving thus involves generating alternatives. In response to the problem, we might generate one solution, or more than one. 4. This book does not cover decision-making as a separate subject. Nor does it cover problem-solving's additional step of generating alternatives. Decision-making, in the sense of having to choose one of the available options, has received a lot of attention, including on-line, as has game theory, which concerns decisions which depend on the actions of others.
For decision-making, try Decision Theory: A Brief Introduction. A quick internet search will reveal many more. For game theory, a variety of texts can be found at The Economics Network. Problem-solving texts are typically specific to a particular discipline (such as business or medicine) and are proprietary.


PART 1

ANALYSIS & BASIC EVALUATION

Chapter 2 Recognizing & Classifying Reasoning 2.1 Reasoning, Arguing & Explaining 1. People argue by giving arguments. And people explain by giving explanations. Arguments and explanations present instances of reasoning. To reason is to take one or more proposition(s) as a reason(s) for another proposition. 2. To argue is to present a piece of reasoning in which there is (thought to be) a justificatory relationship between one or more propositions and another. The initial propositions (are thought to) justify acceptance that the state of affairs in the further proposition is the case. When arguing, the initial propositions are called the premise(s); the proposition which the speaker is trying to convince the audience to believe is called the conclusion. At the beginning of the argument, the audience either has no opinion about the conclusion or might be doubtful about its truth. The goal of the argument is to convince the audience to add the conclusion as a new belief or in place of an existing belief. For example, imagine that a detective has announced that Henry stole a computer which was unexpectedly missing but has now been recovered. Bill is a friend of Henry's and is doubtful. He asks for evidence, that is, for reasons why he should believe that Henry stole the computer. The detective says "Henry's finger-prints were found on the computer. So, Henry stole the computer.". That is, the detective presents the evidence together with the conclusion, in order to get the audience (in this case, Bill) to believe the conclusion. When people argue in this sense, they are not engaged in a heated exchange of opinions. The everyday understanding of "argue" and "argument" is different: we imagine two people shouting at each other with a certain level of insistence and perhaps anger. Such exchanges, however, rarely involve propositions justifying belief in a conclusion. Rather, speakers contradict one another without providing justifications for their positions. 3.
To explain is to present a piece of reasoning in which there is (thought to be) an explanatory relationship between one or more propositions and another. The initial propositions (are thought to) explain how or why the state of affairs described in the further proposition is the case. There are various types of explanation. For example, you can explain how to do something (e.g. "To tie your laces, start by crossing one over the

other "), how to use a word (e.g. "Jack explained that, in Irish, "romhaire" means "computer"."), how a goal will be achieved (e.g. "First we'll knock you out, then make an incision ") or what something is (e.g. "The teacher is explaining that water is comprised of hydrogen and oxygen atoms in a two-to-one ratio."). This book will focus on explanations where speakers (try to) explain how or why some state of affairs came to be, or is coming to be, or will come to be, or comes to be, generally speaking, such as "The teacher is explaining how the continents came to be in their current positions.". An explanation presents one or more propositions, which we will call the explainer(s), in to attempt to explain a state of affairs described in a further proposition which is called the explainee. The explainee expresses a phenomenon or state of affairs or event, such as "All of your roses have died." or "A lunar eclipse will be visible later today." or "Every October the leaves fall off the tress.". Phenomena can be specific or general. For example, "This grass is brown." concerns some specific patch of grass, while "Grass turns brown when deprived of sunlight." is general. At the beginning of the explanation, the audience already believes the explainee but does not understand what the explainee is, or why or how come it came (or is coming, will come, generally comes) to be. The goal of an explanation is an understanding of the explainee. For example, Smith sees that a normally full reservoir is low. He accepts the evidence of his senses and so believes the proposition "The reservoir is low.". But he does not know why the reservoir is low. He asks Jones, "Why is the reservoir low?". Jones explains, "The reservoir is low because a lot of power has been needed to power fans and air-conditioners in this heat wave.". 4. 
Humans argue for and explain all kinds of propositions, about any subject matter at all: who committed the crime, where a group of friends should go out to eat, what a person should do when confronted by a moral problem, or whether or not a scientific claim is true. Here are some examples in detail:

(i) If we're going to the movies, we should go to see Snakes on a Plane. It's an action-adventure movie, which I'm definitely in the mood for. It stars Samuel L. Jackson, who is a great actor. Plus, my brother went to see it yesterday, and he says it's a blast.

(ii) If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do so. Famine is something bad, and it can be prevented without sacrificing anything of comparable moral importance. So, we ought to prevent famine. (Peter Singer)

(iii) Henry took the computer because he mistakenly believed that it was scheduled for repair.

Arguments and explanations can be important: If you accept an argument, then you adopt the conclusion. This means adopting a belief (and perhaps discarding an existing belief) and often means modifying your behavior in various ways, some of them substantial. If, for example, you do little to alleviate famine, but the argument in (ii) just above is convincing, you ought to be doing a lot more to prevent famines from occurring. If you accept the explanation in (iii), your expanded understanding of the world (in this case, of Henry's motivations for acting in the way he did) might greatly influence your treatment of Henry. Arguments and explanations can be complex: it can be hard to tell what the structure of the reasoning is. And even when it is clear what the structure is, it can still be hard to tell whether the premises or reasons in fact support the conclusion strongly or explain the explainee well. Much real-life reasoning is both complex and important: the reasoning in court cases, for example, can be both. For this reason, it is worth practicing the analysis and evaluation of reasoning. Such practice is the purpose of the text you have before you. 2.2 Identifying Reasoning 1. Let us review the terminology used in connection with arguing and explaining and see it at work in various passages. When arguing, the propositions doing the justifying are called the premises (also the justification, the grounds, the evidence, or the reasons) and the proposition the speaker is trying to get others to believe is called the conclusion. When explaining, the propositions doing the explaining are called the explainers (also the explanans or the reasons) and the proposition being explained is called the explainee (also the explanandum, the (target) phenomenon, or the (target) state of affairs).
The propositions doing the justifying or explaining, that is, the premises and the explainers, can generically be called the reasons, while the proposition being justified or explained we can call generically the target proposition. When arguing, the premises can be said to (be thought to) justify, support, make likely, imply, establish, demonstrate, or prove the conclusion, and the conclusion is said to (be

thought to) be supported by, be made likely by, be justified by, be implied by, follow from, be derivable from, or be established by the reasons. (There are differences between these terms, some of which you'll see as you progress through the book, but they all involve premises supporting a conclusion.) Arguers can be described as concluding or drawing the conclusion that ..., or arguing for the conclusion. When explaining, the explainers can be said to explain or give an account of an explainee, while the explainee is explained by the explainers. Here is a table summarizing the main terminology we'll use:
                     Reason(s)       relation to the Target Proposition    Target Proposition
  In an explanation  Explainer(s)    explain                               the explainee
  In an argument     Premise(s)      justify (belief of)                   the conclusion

(While we're on the subject of terminology, you should note that this book will present examples for your consideration in what are called passages, some of which will be dialogues. These passages will have a speaker, even though you are reading, and an audience. Most passages will be preceded by a context, in italics.) 2. When faced with a passage or dialogue, you must first determine whether or not it contains reasoning, and in particular whether it describes someone arguing or explaining. The first thing to note is that, although every instance of reasoning can be presented in a set of propositions, not every set of propositions will involve reasoning. Here are some examples:

(i) At Widget-World Corporate Headquarters: We believe that our company must deliver a quality product to our customers. Our customers also expect first-class customer service. At the same time, we must make a profit.

(ii) One student speaks to another: The instructor passed out the syllabus at 9:30. Then he went over some basic points about reasoning, arguments and explanations. Then he said we should call it a day.

The sentences in passage (i) are propositions. (Propositions will be discussed in the next section, 2.4.) It is not the case, though, that the speaker is using any of the propositions to justify or explain any one of the others. (ii) is a narration: this happened, then this happened, and then this happened. Everything that happened is expressed in


a proposition, but the truth of none of the propositions (for example, that he said we should call it a day) is justified by or explained by the others. 3. Passages or dialogues involving reasoning will often but not always include certain words or phrases to introduce one or more of the reasons or the target. We call these flag or indicator words or phrases. They indicate that the passage involves reasoning and some also indicate whether the speaker is attempting to justify belief of some conclusion, or to explain some phenomenon. First, be on the lookout for any of the terms we have introduced as the special vocabulary associated with reasoning, arguing and explaining, such as "reason(s)" (as in "This is the reason ..."), "conclusion" or "conclude" (as in "We can conclude ..."), "argue" (as in "I would argue that ..."), and so on. If you see any of these words being used in phrases to introduce or describe what is going on, you will know that reasoning is being attempted and (for many of these words or phrases) whether it is arguing or explaining. In addition, a number of other common words or phrases are used in the context of reasoning. For the reasons (the premises or explainers) these include: "since", "for", "given that", and "because". For the target proposition (the conclusion or explainee) these include: "therefore", "so", "hence", "thus", and "as a result". 4. Flag words and phrases are some help, but they are far from perfect. You should listen (or read) carefully to try to gather as much contextual information as possible so that you can answer the basic questions for arguments and explanations: Argument: Is the speaker of this passage trying to justify belief of a proposition which was previously not believed, or was doubted or disbelieved? If the answer is "Yes." the passage involves an argument. Consider the following scenario: Jack is at the breakfast table and shows no sign of hurrying. Gill says: You should leave now. It's almost nine a.m. and it takes three hours to get there.

In the context described by the words in italics, the set of propositions is best construed as an argument. Jack's inaction suggests that he does not accept the conclusion "Jack should leave now." and so Gill provides reasons that might convince him. Explanation: Is the speaker of this passage attempting to explain to an audience how or why some past or present or general state of affairs came or is coming or generally comes into being? If the answer is "Yes." the passage involves an explanation. Consider the following scenario: In a text book on the brain: Axons are distinguished from dendrites by several features, including shape (dendrites often taper while axons usually maintain a constant radius), length (dendrites are restricted to a small region around the cell body while axons can be much longer), and function (dendrites usually receive signals while axons usually transmit them). The propositions are best construed as an explanation of how axons and dendrites differ. (In the context of a text book, the target proposition (in this case "Axons and dendrites are different.") is often not previously known, but is immediately accepted by the reader on the authority of the text and immediately becomes a subject for explanation.) Without context or flag words, it is often impossible to tell whether a passage involves arguing or explaining. Consider the following "bare" passage: The game is cancelled since it is raining heavily. In some contexts, this passage could be an attempt to justify a new belief. The speaker presents the fact that it is raining heavily as a reason which he hopes will convince the audience of the truth of the conclusion, that the game is cancelled. Alternatively, this passage could, in other contexts, be an explanation. It would be an explanation if the speaker and the audience agree that the game is cancelled, and the speaker is presenting the fact that it is raining heavily as the reason for the cancellation. 
Here's another example which, depending on the context, could be an argument or an explanation: Some people have been able to give up cigarettes by using their will-power. Everyone can draw on their will-power. So, anyone who wants to give up cigarettes can do so. In this passage, are the two propositions "Some people have been able to give up cigarettes by using their will-power." and "Everyone can draw on their will-power." being used to

argue for or explain the final proposition, "Anyone who wants to give up cigarettes can do so."? If this was not previously believed or was disbelieved, the speaker would be arguing and the audience might come to believe it on the basis of the evidence and the supporting relationship between the evidence and the conclusion. Imagine that the context of the argument above is a debate about giving up smoking, where it has already been suggested that not everyone can give up smoking. In such a scenario, the person who presents this piece of reasoning will be understood as attempting to convince the audience to give up their old belief, or their skepticism, and accept the final proposition as true. Alternatively, if the conclusion is already agreed to, but there is ignorance as to why the conclusion is true, the passage above will function as an explanation. Imagine that the speaker and audience already believe that everyone can give up smoking and the speaker then explains why this is so. In giving an explanation, the speaker presents information which makes clear why the explainee is true. Compare the following passages:

Highway repairs begin downtown today. And a bridge lift is scheduled for the middle of rush hour. I predict that traffic is going to be terrible.

Highway repairs begin downtown today. And a bridge lift is scheduled for the middle of rush hour. That's why traffic is going to be terrible.

The words "I predict" in the first version suggest that the conclusion is a novel belief, while "That's why" in the second version suggests that the final proposition was already believed. Explanations are not intended to increase or correct our knowledge. This is because in an explanation the explainee (the target proposition) is already believed. To return to the reservoir example, imagine that Smith assumes the reservoir is at its normal level when Jones says "The reservoir is at a low level because of several releases to protect the down-stream ecology.".
Jones might intend this as an explanation, but since Smith does not share the belief that the reservoir's water level is low, he will first have to be given reasons for believing that it is low. The conversation might go as follows: Jones: The reservoir is at a low level because of several releases to protect the down-stream ecology. Smith: Wait. The reservoir is low? Jones: Yeah. I just walked by there this morning. You haven't been up there in a while?


Smith: I guess not. Jones: Yeah, it's because they've been releasing a lot of water to protect the ecology lately. 5. Let's take some first steps in marking up passages. When you think that a passage involves reasoning, you should (i) put any flag words in the passage that tell you that reasoning is going on, and what kind of reasoning is going on, in parentheses, and (ii) underline the conclusion or the explainee. You should also (iii) explain in writing any other clues which tell us that reasoning is taking place and what kind of reasoning. Note that flag words and phrases are not part of the propositions. When they occur alongside or inside the target proposition, we do not underline them. Here are four examples: (a) At a religious study group: Nature is so wonderful, it must have been created. And (so,) a creator God exists.

The word "so" in (a) suggests reasoning and tells us that "A creator God exists." is conclusion or an explainee. None of the words in the passage tell us which, and so we turn to the context, which is described in italics. At a religious study group, it is reasonable to assume that the target is already accepted, and so this passage describes an explanation. (Though it is not definitely so, and in many cases it is not clear which it is, as we discussed just above.) (b) On a cable sports show: Cal Ripken has provided years of valuable service to the Orioles. He has appeared in 19 All-Star games. He was a World Series champion in 1983. His number has been retired by the Orioles. (For these reasons,) he deserves a spot in the Hall of Fame.

In (b), the words "for these reasons" tell us that reasoning is going on. The nature of the following proposition ("Cal Ripken deserves a spot in the Hall of Fame.") suggests that it is a conclusion and the preceding propositions ("Cal Ripken has provided years of valuable service to the Orioles." and so on) are premises. The fact that he has accomplished these feats is supposed to be good evidence for the truth of the claim that he merits a place in the Hall of Fame. (c) During a downpour: It started raining (because) the atmospheric pressure dropped.

In (c), the word "because" indicates reasoning but is ambiguous between argument and explanation. The context tells us that the rain is currently falling, which would

presumably be obvious to everyone. This suggests that the passage contains an explanation. (d) Bill has just demonstrated a new appliance for making tea: See? Nothing to it. I just put the water in here, pushed the button, and a minute later, the tea started flowing.

In (d), the tense of the verb ("started") and the fact that Bill is present for the demonstration indicate that the passage contains an explanation, of which "The tea started flowing." is the explainee. 6. Finally, a word about obviously bad reasoning. Often you will judge right away that a conclusion does not, in fact, follow from the premises or that the explainee is not explained by the explainers. Such passages still have propositions being offered as premises or as explainers. It is thus possible that a set of propositions with no apparent relation between reasons and target should be understood as an argument or explanation, if flag words or the context demand it. For example, imagine someone says: Stocks are up this morning. And so, the Yankees will beat the Red Sox in this afternoon's game. The flag word "so" indicates that the speaker is reasoning (whether he is arguing or explaining) and thinks there is some connection between the first proposition ("Stocks are up this morning.") and the second ("The Yankees will beat the Red Sox in this afternoon's game."), though the mind struggles to understand how "Stocks are up this morning." in any way supports or explains the proposition "The Yankees will beat the Red Sox in this afternoon's game.". It is possible that the speaker does not understand how to use the word "so". It is also possible, on the other hand, that the speaker sees some connection between the two that the audience does not, and so you stand to learn something from the speaker. You might thus err on the side of caution and take the speaker as being sincere when he uses "so" and treat what he says as an argument or explanation. 2.3 The Relationship Between Arguing & Explaining 1. To argue is to (attempt to) justify acceptance of a conclusion as true. The premises on offer justify, or at least attempt to justify, believing that the target proposition is true or worthy of belief. Arguing attempts to expand or correct our


knowledge of the world. Explaining, on the other hand, does not attempt to convince the audience of the truth of the target proposition; rather the explainers on offer (attempt to) make clear what something is or why or how it comes to be. An explanation expands our understanding of the world. 2. However, one person's argument can be another person's explanation, and vice versa. Consider the following case. Is Bill arguing or explaining? Bill and Henry have just finished playing basketball. Bill: Man, I was terrible today. Henry: I thought you played fine. Bill: Nah. It's because I have a lot on my mind from work. What is happening here? What's happening is that Bill and Henry disagree about what is happening: arguing or explaining. Henry doubts Bill's initial statement, which should provoke Bill to argue. But instead, he ploughs ahead with his explanation. What Henry can do in this case, however, is take the reason that Bill offers as an explanation (that Bill is preoccupied by issues at work) and use it as a premise in an argument for the conclusion "Bill played terribly.". Perhaps Henry will argue (to himself) something like this: "It's true that Bill has a lot on his mind from work. And whenever a person is preoccupied, his performance is degraded. So, perhaps he did play poorly today (even though I didn't notice).". 3. Arguments and explanations often involve the same set of propositions. As we saw above, it can be difficult and sometimes impossible to tell them apart. Knowledge of an explanation for why or how something is the case can be used, on another occasion, to make an argument for the truth of a conclusion. For example, if extremely cold weather in Europe is explained by the movement of air from Siberia, on a future occasion the movement of air from Siberia can be used to argue that it is or will be extremely cold. On the other hand, however, not all arguments are based on an explanatory connection, and so not all arguments can be reconfigured as an explanation.
One such type is an argument from authority. Consider the following example: The IPCC, a panel of experts from various countries, has stated that human activity has an impact on climate. So, that's how it is. In this passage, a speaker provides a reason for believing that human activity has an impact on climate, namely, that an international panel believes so. That is, the speaker provides a premise which might justify adopting the conclusion as a belief. This


premise, however, does not explain why or how human activity impacts climate. It might thus be a justification, but it could not be used as an explanation. 4. Indeed, a justification based on an understanding of how the world works is more satisfying than one which appeals to the authority or expertise of others. Compare the following pair of arguments: Jack says traffic will be bad this afternoon. So, traffic will be bad this afternoon. and Oh no! Highway repairs begin downtown today. And a bridge lift is scheduled for the middle of rush hour. Traffic is going to be terrible! Even though the second is not intended as an explanation, the premises offered in the second passage justify the conclusion ("Traffic is going to be terrible.") with reasons that could be used in an explanation. Someone who accepts this argument will also have an explanation ready to offer if someone should later ask "Traffic was terrible today! I wonder why?". Although arguments based on explanatory premises are preferred, we must often rely on other people for our beliefs, because of constraints on our time and access to evidence. But they (or at least someone at the beginning of the chain of testimony) should hold the belief on the basis of an empirical understanding. (See 4.5 for more on the issue of sources.) 2.4 Sentences & Propositions 1. The procedure so far for dealing with passages requires deciding whether or not a passage involves reasoning, and more specifically, whether it involves arguing or explaining, and then marking up the passage by underlining the conclusion or explainee and putting any flag words in parentheses. The next steps are to number all of the propositions (both premises and conclusion) and to bracket the premises. 2. When the premises and conclusion require no modifications, we can simply number the propositions as they appear in the passage.
Consider the following passage: A rival politician on TV: It takes a despicable person to politicize the death of a young child. Smith has tried to tie young Molly's death to the President's policies. Smith is therefore despicable. In the context given, this is most likely an instance of arguing, since many people presumably are supporters of Smith. The speaker wants to convince the audience that

Smith is, in fact, despicable. In analyzing the argument into premises and conclusion, we see that the word "therefore" indicates the conclusion. You would thus put "therefore" in parentheses and underline "Smith is despicable.". Then you attempt to isolate the remaining propositions. This passage is simple, in that each sentence is a single, complete proposition. Number and bracket them as follows: A rival politician on TV: (1) [It takes a despicable person to politicize the death of a young child.] (2) [Smith has tried to tie young Molly's death to the President's policies.] (3) Smith is (therefore) despicable. As was said in the previous section, flag words or phrases are not part of the propositions. Thus, "therefore" in the last sentence is not underlined. 3. If a proposition is repeated in a passage it gets the same proposition number in both places. Conclusions, in particular, often appear more than once. After the first appearance of the conclusion, further appearances do not add any new propositions to the analysis. This is also true for premises: repeating a premise does not add any new information, and so we give repeated premises the same number. Consider the following example: A human resources director is arguing with the chief executive: (1) We should have an affirmative action policy. (Here's why.) (2) [Research has confirmed that employers do not review black job applications as thoroughly as applications from whites.] (3) [This leads black people to invest less in education and training, which only reinforces the prejudice of employers.] (4) [Affirmative action counteracts this vicious cycle by acting as an incentive for African-Americans to invest in education.] (So), (1) we should have an affirmative action policy. The (single) conclusion appears twice, at the opening of the argument and at the end.
When a conclusion appears in the middle of a set of premises, it is often because the speaker or writer then goes on to provide additional reasons in support of the conclusion.

4. For our purposes, the words "statement", "claim" and "assertion" are all equivalent in meaning to "proposition". However, we will not use "sentence" as an equivalent for "proposition". A first reason for this is that some sentences are not propositions at all. We can distinguish a proposition from a non-proposition as follows: whereas a proposition is either true or false, a non-proposition has no truth value (i.e., is neither true nor false). Consider, for example, the following sentences. All of them are sentences, but not all of them are propositions:

(i) Charles Darwin is dead.
(ii) Get me a beer, por favor.
(iii) Is Jack home from Baghdad yet?
(iv) Ouch!
(v) If only the Lakers would win on Saturday!

(i) is either true or false, in that Darwin is either dead or not dead. (i), thus, is a proposition. But with (ii), (iii), (iv), and (v) things are different: none of these sentences is either true or false. (ii) is a request (and a less polite version would be a command), (iii) is a question, (iv) is an exclamation and (v) is a wish.

However, propositions are sometimes disguised as non-propositions. In particular, rhetorical questions assume an answer to the question posed. This answer is a proposition, and the question can be understood as this proposition. Consider the following passage:

After death, there is no more perception. Pain is only painful because it is perceived. So, why fear death?

The final sentence is a question, but it is (what is called) a "rhetorical" question. That is, it is a question which is thought to have an obvious answer, and the speaker wants the audience to think of that answer, rather than the question itself. In this case, the assumed answer to the question is "There is no reason to fear death.". In analyzing this passage as an argument or explanation, add a note to your analysis, after you have numbered the original propositions and underlined the conclusion. For example:

(1) [After death, there is no more perception.] (2) [Pain is only painful because it is perceived.] (So), (3) why fear death?

(3) is a rhetorical question, equivalent to "There is no reason to fear death.".

The conclusion, particularly when the speaker is arguing, is sometimes expressed as a command. For example, the conclusion of this argument might have been "Don't fear death!", which you would note as being equivalent to something like "One should not fear death.".

5. Each proposition must be complete, that is, it must make sense on its own. Consider the following version of the will-power argument:

Some people have been able to give up cigarettes by using their will-power. Everyone can draw on their will-power.
That's why it's possible for anyone who wants to give up cigarettes to do so.


The words "do so" at the end of the conclusion abbreviate the thought that those who use will-power can give up cigarettes. But there is no need for the speaker to repeat this to the listener. When you analyze the passage into separate propositions for each of the premises and the conclusion you must provide each proposition in full. Our analysis looks like this:

(1) [Some people have been able to give up cigarettes by using their will-power.] (2) [Everyone can draw on their will-power.] (3) (That's why) it's possible for anyone who wants to give up cigarettes to do so.

(3) "to do so" = "to give up cigarettes."

There are many ways in which speakers will avoid repeating themselves. Most commonly, look out for pronouns (such as "I", "you", "they") and demonstrative adjectives (such as "this", "those"). Clarifying the meaning of the propositions involved is an important preliminary to ascertaining whether or not the premises are true. See 4.4 for more discussion of how unclear meaning can cause problems.

6. Another reason for insisting that "sentence" is not equivalent in meaning to "proposition" is that some sentences contain more than one proposition. This is important when analyzing an argument, since we want to list each proposition separately. Consider the following example:

I would argue that since some people have used will-power to quit cigarettes, everyone could quit cigarettes.

This is one sentence, but it contains two propositions. In this case, one is a premise and the other a conclusion. The flag word "since" indicates a premise, but the premise only goes as far as the comma. The words "Everyone could quit cigarettes." are the conclusion. A sentence containing multiple propositions does not pose a great problem for our analysis. You can simply insert numbers at the appropriate places in the sentence, as follows:

(I would argue that) (since) (1) [some people have used will-power to quit cigarettes,] (2) everyone could quit cigarettes.
Just as a conclusion can be included in a sentence which begins with a premise flag word, so too can an additional premise or premises be included after a flag word indicating a conclusion. Consider the following formulation of the argument about will-power:

Some people have been able to give up cigarettes by getting serious about their problems and using their will-power. So, since everyone could do this, there's no excuse for anyone who wants to give up cigarettes or lose weight but hasn't.

The last sentence begins with a "So", which indicates a conclusion, but a "since" immediately follows, indicating a premise. The premise is "Everyone could do this". The conclusion then resumes: "(So) there is no excuse for anyone who wants to give up cigarettes but hasn't." The analysis looks like this:

(1) [Some people have been able to give up cigarettes by getting serious about their problems and using their will-power.] (So), (since) (2) [everyone could do this,] (3) there's no excuse for anyone who wants to give up cigarettes or lose weight but hasn't.

(3) "do this" = give up cigarettes

7. Multiple propositions are often joined using a conjunction. A simple example is the sentence "Jack and Gill went up a hill.". This sentence is a conjunction of two propositions, and in addition the speaker avoids some repetition: it contains the two propositions "Jack went up a hill." and "Gill went up a hill.". Consider the following passage:

Boomers need to save more for retirement than those approaching retirement have done in the past. This is because they are living longer than the elderly ever have and medical care is more expensive than ever before.

The numbering of the propositions does not follow the numbering of the sentences, for the second sentence contains two reasons. We thus number as follows:

(1) Boomers need to save more for retirement than those approaching retirement have done in the past. This is (because) (2) [they are living longer than the elderly ever have] and (3) [medical care is more expensive than ever before.]

Be careful: a sentence's use of the word "and" is not a perfect indicator of the sentence's needing to be broken up. The "and" must conjoin propositions.
Consider the following example:

The only relevant difference between war plan B and war plan C is that B costs less than C. If a war plan costs less than another war plan, then we should go with the former over the latter. So, we should go with B over C.

As always, begin by finding the conclusion. The word "so" makes it clear that "We should go with B over C." is the conclusion. Next move to the other sentences to look for the premises. Given that the first sentence makes use of the word "and", it is tempting to
think that it contains more than one proposition and, thus, that it needs to be broken up. This, however, is not the case. The analysis is:

(1) [The only relevant difference between war plan B and war plan C is that B costs less than C.] (2) [If a war plan costs less than another war plan, then we should go with the former over the latter.] (So), (3) we should go with B over C.

After all, the word "and" in (1) is not conjoining two distinct propositions and, thus, is not conjoining two distinct premises. There is no way to break up the sentence into two propositions, as the word "between" requires that the subject is between one thing and another thing.

Other words and phrases can do the same job as "and" in conjoining more than one proposition into a single sentence. In English, the word "but" can often be used in place of "and". An example sentence using "but" is "Jack is tall but he is uncoordinated.". This sentence should be broken into the two simple propositions "Jack is tall." and "Jack is uncoordinated.". A comma can also be used. An example sentence using commas is "Jack went to the park with Jim, his leash, a tennis ball and some treats.". This sentence should be broken down into four simple propositions. Relative pronouns can also function as "and". Consider the sentence "Jack, who is home on leave from the war, is taking Jim for a walk.". This sentence contains two propositions and should be broken up into "Jack is home on leave from the war." and "Jack is taking Jim for a walk.".

Sometimes another grammatical aspect of the proposition will bring with it a second, presupposed, proposition. An example can be derived from the question (posed by a lawyer at a trial) "Sir, have you stopped beating your wife?". If the witness says "No." because he has never beaten his wife, he gives the appearance of continuing to beat his wife, while if he answers "Yes."
he gives the appearance of having beaten his wife in the past, even if he does not do so in the present. Neither answer is palatable, but witnesses are restricted to answering either "Yes." or "No.". The trick is made possible because the question presupposes that the person on the stand used to beat his wife. (It is commonly called a "loaded" question.) Similarly, you can see that the related proposition "Smith has stopped beating his wife." can be understood as containing two propositions: that Smith used to beat his wife and that Smith currently does not beat his
wife. These can be separated if doing so sheds light on the structure of the logical argument.

8. Two important notes related to breaking up conjunctions: While propositions joined with "and" must be separated, propositions joined with "or" or joined together in an "if . . . then . . ." construction must be treated as one proposition. If you were to separate such sentences into two propositions you would change the meaning of the proposition. For example, "The rabbit ran either to the left or the right." cannot be rendered as "The rabbit ran to the left." and "The rabbit ran to the right.", for the original proposition asserts only that the rabbit took one of the two paths, not that it ran both to the left and to the right. The same problem occurs if you attempt to split up "If . . . then . . ." propositions. "If . . . then . . ." propositions and ". . . or . . ." propositions are two types of compound propositions. (We will be looking a lot more closely at these kinds of propositions in Part 3.)

(Note also that a conjunction occurring within any part of a compound proposition should not be broken up. For example, "Either Gill will go first, or else Smith and Jones will go first.". The conjunction here is the second part of an "either . . . or else . . ." proposition and should be left alone.)

Second, all of our examples of conjunctions have been of conjunctions in the premises or explainers. In 3.3 we will see examples of conclusions or explainees which are conjunctions and learn how to handle them. For now, leave them alone.

9. Finally, a general warning about how disordered and confusing passages can be. Although many of the passages you'll see in this book will be quite "clean", in practice, many are not, one consequence of which is that there are often some propositions and clauses which need not be included in an analysis of an argument.
In general, then, your job is better described as extracting the premises and conclusion, or explainers and explainee, rather than dividing the propositions into premises and conclusion or reasons and explainee.

First, words, phrases or propositions which comment on the strength of the support that the premises give to the conclusion are not part of any proposition and should be ignored or re-written. Consider the following passage:

There's smoke coming from that chimney, and there would be smoke coming from that chimney if there were a fire in a fireplace in that house. Thus, in all probability, there is a fire in a fireplace in that house.


The phrase "in all probability" is the reasoner's comment on the strength of the support that the premises give the conclusion. Such comments are ignored in our analysis of the argument. Analyze as follows, placing the (3) that numbers the conclusion after this comment:

(1) [There's smoke coming from that chimney,] and (2) [there would be smoke coming from that chimney if there were a fire in a fireplace in that house.] (Thus) in all probability, (3) there is a fire in a fireplace in that house.

(We will revisit this point in 5.4.3.)

Another cause of confusion is that speakers might seem to simply wander off and insert a tangent or parenthetical remark. Consider the following argument:

Potatoes are vegetables. They're my favorite vegetable, in fact. And vegetables are good for you. So, potatoes are good for you.

The fact that potatoes are the speaker's favorite vegetable will be immediately thought to be irrelevant to the support for the conclusion given by the other premises. If you are confident in this judgment, you can analyze as follows:

(1) [Potatoes are vegetables.] They're my favorite vegetable, in fact. And (2) [vegetables are good for you.] (So,) (3) potatoes are good for you.

If you are not confident, analyze as follows:

(1) [Potatoes are vegetables.] (2) [They're my favorite vegetable], in fact. And (3) [vegetables are good for you.] (So,) (4) potatoes are good for you.

Here is another example, this time in a dialogue:

Al, a fireman, has been killed in a fire.
Henry: Although the body is badly burned, I am sure this is the body of my friend Al.
Bill: How do you know?
Henry: These are the boots of his father, which his father gave to him after he stopped working in the coal mines.
Bill: But anyone could have boots like that.
Henry: No. These have a quite distinctive pattern on the sides.

There is clearly an argument here: the conclusion is that the body is Al's body. But what are the premises?
The reason for thinking that the body is Al's is that the boots are so distinctive that they could only be worn by Al. The information that the boots previously belonged to Al's father, who worked as a coal miner, seems irrelevant. If you are confident in this judgment, the argument would simply be this: "Al wore boots with
a distinctive pattern on the sides. This body has boots with that distinctive pattern on the sides. So, this is the body of Al.". Notice that you can discard information only when you are confident that the information is not needed in order to support the conclusion or explain the explainee. When arguments and explanations are complicated, it can be difficult to tell how, or whether, a proposition is involved. In these cases, it is usually a good practice to include all of the propositions in the passage in our analysis, even though it might turn out that they are unneeded. Notice that we have (again, as in 2.2.6) strayed into the territory of evaluation. To take the dialogue just above as an example, the reason you might throw out the information that the boots belonged to Al's father is because you are already thinking about how boots might be used to identify a body, and have thus moved from argument analysis to argument evaluation. Argument evaluation is considered in chapters 4 to 13. 10. When a passage requires a lot of work to analyze into its premises and conclusion and present each proposition clearly, it can be a better strategy to make a written list of the propositions, rather than numbering them in the passage and adding notes. At the moment, the passages we are looking at are fairly simple, but as we go on, they will become more complicated and this option will be used.


Chapter 3 Standard Form & Diagrams

3.1 Standard Form

1. Once you have distinguished all of the propositions involved in an argument or explanation, and expressed them fully, you can represent the reasoning in standard form or in a diagram. Standard form is sufficient for simple arguments and explanations, where the passage consists of reasons working together to support the conclusion or explain the phenomenon; more complex structures are better handled with a diagram.

2. In standard form, the reasons and the conclusion or explainee are set out as follows:

(1) <proposition-1>
(2) <proposition-2>
. . .
(n) <proposition-n>
J/E ----------------------
(n+1) <target proposition>

The propositions which provide the reasons (the premises or explainers) are numbered and listed. "<proposition-1>" stands for the first proposition; it is numbered (1) and written on the first line; and so on. Then, separate the reasons from the target proposition (the conclusion or explainee) by drawing a line under the reasons. To the left of this line, write either "J" for "justifies" (or more fully, <the premises> "are intended as a justification for believing" <the conclusion>) or "E" for "explains" (or more fully, <the explainers> "are intended as an explanation of" <the explainee>). Finally, the conclusion or explainee is written below the line. Consider the following passage:

I know you don't like potatoes, Jack, but (1) [Potatoes are vegetables] and (2) [vegetables are good for you.] (Which means that), (3) potatoes are good for you.

First, notice that the sentence "I know you don't like potatoes, Jack." has been left out of the analysis. It does, however, provide some context and tells us, by being opposite to the conclusion, that this passage is an argument. In standard form, the rest of the passage looks like this:


(1) Potatoes are vegetables.
(2) Vegetables are good for you.
J -------------------------------------
(3) Potatoes are good for you.

Here is another example:

(1) [Research has confirmed that employers do not review black job applications as thoroughly as applications from whites, causing black people to invest less in education and training, which in turn reinforces the prejudice of employers.] (2) [Affirmative action counteracts this vicious cycle by acting as an incentive for African-Americans to invest in education.] (So), (3) we (Acme Inc.) should have an affirmative action policy.

In standard form, the analysis of this passage would be written as follows:

(1) Research has confirmed that employers do not review black job applications as thoroughly as applications from whites, causing black people to invest less in education and training, which in turn reinforces the prejudice of employers.
(2) Affirmative action counteracts the vicious cycle (described in (1)) by acting as an incentive for African-Americans to invest in education.
J -------------------------------------------------------------------------------------------------------
(3) Acme Inc. should have an affirmative action policy.

To repeat what was said in chapter 2, flag words and phrases, and words which indicate the degree of certainty that the reasoner has about the relationship between the reasons and the conclusion or explainee, are not included in the propositions and so are not included in the standard form.

3. Note that in standard form the target proposition always appears on the last line. This is not always the case in the original passage. Although in an argument the conclusion is supposed to "follow from" the premises, this does not mean that the conclusion will appear after ("following") the premises. In everyday English the conclusion can be found at any point of the argument, at the beginning, at the end, or anywhere in between.
A reasoner will often put the conclusion first, as this is the most important claim that he wants the audience to hear or read. The same is true for explanations. The explainee will often not appear at the end of the passage.

3.2 Diagrams

1. To make a diagram, write all of the propositions (both reasons and target) down and number them. Do not include a horizontal line. Then use an arrow to point from the reason(s) to the target.
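The bookkeeping behind standard form is mechanical enough to sketch in code. The following Python sketch is not from the text, and the names `Reasoning` and `standard_form` are my own invention; it simply numbers the reasons, draws the bar with its "J" or "E" label, and puts the target last.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reasoning:
    reasons: List[str]   # the premises or explainers
    relation: str        # "J" (justifies) or "E" (explains)
    target: str          # the conclusion or explainee

def standard_form(r: Reasoning) -> str:
    """Number the reasons, draw the J/E bar, and put the target on the last line."""
    lines = [f"({i}) {p}" for i, p in enumerate(r.reasons, start=1)]
    lines.append(f"{r.relation} " + "-" * 40)
    lines.append(f"({len(r.reasons) + 1}) {r.target}")
    return "\n".join(lines)

# The potatoes argument from above.
potato = Reasoning(
    reasons=["Potatoes are vegetables.", "Vegetables are good for you."],
    relation="J",
    target="Potatoes are good for you.",
)
print(standard_form(potato))
```

Run on the potatoes argument, this prints the two premises, the "J" bar, and then "(3) Potatoes are good for you." on the last line, mirroring the layout shown above.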

2. Sometimes a single passage contains multiple arguments or explanations for a single target. Consider the following passage:

(1) Downloading music from peer-to-peer services should be prosecuted vigorously. (2) [If it is not, young people will not appreciate the creative talent of musicians.] Alternately, (3) [record companies will go out of business.] When it comes right down to it, (4) [it's simply a form of theft.]

(2), (3) "If downloading is not prosecuted vigorously, . . ."
(4) "it" = downloading music from peer-to-peer services

(1) is the conclusion. The main clue as to the relation of (2), (3), and (4) to (1) is the word "Alternately" prior to (3) and then the words "When it comes right down to it" prior to (4). Each of these suggests not only that there are separate lines of support, but that the speaker thinks each of them is sufficient to justify the conclusion. There are, thus, three arguments, all for the same conclusion. In diagram form, these would be presented separately, each with (1) as the conclusion. In the first, (2) is the premise. In the second, (3) is the premise. In the third, (4) is the premise. Diagram in either of these ways:

[Diagram: three separate arrows, one from 2 to 1, one from 3 to 1, and one from 4 to 1, each labeled J; or, alternatively, a single split-tailed arrow from 2, 3, and 4 to 1, labeled J.]

Similarly, separate explanations and arguments for the same target proposition can appear together in a single passage. For example, in chapter 2 you saw a different version of the following:

Jones: (1) The reservoir is at a low level.
Smith: It is? How do you know?
Jones: (2) [I just walked by there this morning.]
Smith: Huh. I wonder why.
Jones: It's because of (3) [several releases to protect the down-stream ecology.]

(2) "I" = Jones; "there" = the reservoir
(3) = "There have been several releases to protect the down-stream ecology."

Again, one option is that you could simply diagram the argument and the explanation separately, as follows:

(1) The reservoir is at a low level.
(2) Jones walked by the reservoir this morning.

[Diagram: an arrow from 2 to 1, labeled J.]

and

(1) The reservoir is at a low level.
(2) There have been several releases to protect the down-stream ecology.

[Diagram: an arrow from 2 to 1, labeled E.]

But since they refer to the same target proposition you could also put them together, as follows:

(1) The reservoir is at a low level.
(2) Jones walked by the reservoir this morning.
(3) There have been several releases to protect the down-stream ecology.

[Diagram: an arrow from 2 to 1 labeled J, and an arrow from 3 to 1 labeled E.]

3. In the examples you have just seen, the speaker is careful to indicate that the different reasons put forward are each an independent argument or explanation for the target. Very often, however, speakers simply present a pile of reasons without making clear how many arguments or explanations there are. Consider the following passage, with the initial analysis already complete, and the propositions also written in full in a list:

The headmaster of a school is speaking to proud students and their parents: (1) McKinley is an excellent high-school. (It owes its excellence to) (2) [its dedicated teachers], (3) [good leadership], (4) [modern facilities], and (5) [the support of parents.]

(1) McKinley is an excellent high-school.
(2) McKinley has dedicated teachers.
(3) McKinley has good leadership.
(4) McKinley has modern facilities.
(5) McKinley has the support of parents.

Again, note that there is no horizontal line anywhere in the list of propositions, as there would be in standard form. In standard form, you would re-order the list to put (1) at the bottom, and place a line between this and the four propositions above it. This is not necessary when generating numbered propositions for a diagram. The diagram (here together with the propositions) is as follows:

(1) McKinley is an excellent high-school.
(2) McKinley has dedicated teachers.
(3) McKinley has good leadership.
(4) McKinley has modern facilities.
(5) McKinley has the support of parents.

[Diagram: a split-tailed arrow from 2, 3, 4, and 5 to 1, labeled E.]

Remember to write "J" (for "justifies") or (as in this case) "E" (for "explains") next to the arrow in the diagram. This diagram can be read as "2 and/or 3 and/or 4 and/or 5 is supposed to explain 1.", or for short, "All or some of 2, 3, 4 and 5 is supposed to explain 1.". The explanation in this example involves multiple reasons without any clues as to whether they form one explanation or many. That is, no connection or relationship between the reasons is indicated by the speaker. This is very typical, since many people do not give much thought to the structure of their arguments and explanations, but simply provide a variety of different reasons to justify or explain the target. Here is another example of such reasoning:

(It's obvious why) (1) doctors are among society's most respected members. (2) [Doctors are paid well.] (3) [They are also known for their hard work] and (4) [they help people in times of extreme need.]

This passage is best diagrammed as follows:

(1) Doctors are among society's most respected members.
(2) Doctors are paid well.
(3) Doctors are also known for their hard work.
(4) Doctors help people in times of extreme need.

[Diagram: a split-tailed arrow from 2, 3, and 4 to 1, labeled E.]

This diagram can be read as "2 and/or 3 and/or 4 is supposed to explain 1.", or for short, "All or some of 2, 3 and 4 is supposed to explain 1.". Again, the speaker seems to be throwing out a number of different considerations, without really knowing what relationship, if any, the explainers have to one another. Here is a final example of a speaker who is simply throwing out reasons:


(1) [Ireland has spectacular scenery] and (2) [mild weather throughout the summer.] What's more, (3) [the dollar is strong against the euro right now.] (So), (4) Americans should consider Ireland for their summer vacation.

Here the speaker presents three reasons to consider Ireland as a place to spend one's summer holidays. It is not clear whether the lines of reasoning in this argument are to be understood as together justifying the conclusion or whether we have three independent arguments. You would thus diagram using a split-tailed arrow, just as in the previous two examples.

4. A final possibility is that the speaker makes clear that the reasons in an argument are to be used together. Consider the following argument, with its preliminary analysis already completed:

(1) [The new iPhone 3G allows you to access the internet using your phone.] When you add the (2) [extensive coverage of AT&T,] (3) it's the obvious choice.

(2) = "The iPhone uses AT&T, which provides extensive coverage."
(3) "it" = the iPhone 3G

This passage, like those above, contains multiple reasons. But here, the words "when you add" indicate that the speaker is not giving two arguments but only one, and that the two reasons must be combined. When diagramming, use a plus-sign between the numbers standing for the reasons. This argument is diagrammed as follows:

(1) The new iPhone 3G allows you to access the internet using your phone.
(2) The iPhone uses AT&T, which provides extensive coverage.
(3) The new iPhone 3G is the obvious choice.

[Diagram: 1 + 2, with an arrow to 3, labeled J.]

This diagram can be read as "(1) together with (2) is supposed to justify (3).". Note that the arrow points from the group of premises (in this case, (1) and (2)) but points at a single number.
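The structural distinctions just described can be made concrete by recording a diagram as data. This Python sketch uses an encoding of my own devising, not the book's notation: each arrow is a (sources, label, target) triple, where a tuple of several sources plays the role of the plus-sign (combined premises), and separate arrows stand for independent lines of support.

```python
# Each arrow: (sources, label, target).
# A multi-number sources tuple = premises joined with a plus-sign.
# Several arrows at the same target = separate lines of support.

iphone = [((1, 2), "J", 3)]                    # 1 + 2 together justify 3
reservoir = [((2,), "J", 1), ((3,), "E", 1)]   # an argument and an explanation for 1
doctors = [((2,), "E", 1), ((3,), "E", 1), ((4,), "E", 1)]  # approximates the split-tailed arrow

def targets(diagram):
    """Collect the target propositions; each arrow points at exactly one number."""
    return {t for (_sources, _label, t) in diagram}

assert targets(iphone) == {3}
assert targets(reservoir) == {1}
```

One limitation of this flat encoding, worth noting, is that it does not distinguish the genuinely ambiguous split-tailed case ("all or some of these reasons") from several deliberately independent arguments; here both are written as one arrow per reason.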
Here is a version of the McKinley high school example, this time as an argument, and with words which indicate that the support from the premises must be combined:

A headmaster is speaking to prospective students and their parents: Many reasons combine to make (1) McKinley an excellent high-school. It has (2) [dedicated teachers,] (3) [good leadership,] (4) [modern facilities,] and (5) [the support of local parents.]

(1) McKinley is . . .
(2), (3), (4), (5) McKinley has . . .


There are four lines of support in this set of propositions, and the speaker uses the phrase "many reasons combine" to make clear that (he thinks) the conclusion follows because the school has many beneficial factors. This argument is diagrammed as follows:

(1) McKinley is an excellent high-school.
(2) McKinley has dedicated teachers.
(3) McKinley has good leadership.
(4) McKinley has modern facilities.
(5) McKinley has the support of local parents.

[Diagram: 2 + 3 + 4 + 5, with an arrow to 1, labeled J.]

5. The plus-sign is also used when the reasons together express a single justification or explanation. Some forms of reasoning are so intuitive that they can be easily identified as involving joined reasons. Further, because they are so obvious, speakers need not indicate specifically that the premises are to be joined. Consider the following argument:

I just heard that Jack got a dog, called Jim. (Since) (1) [Jim is a dog] and (2) [a dog has a tail,] I bet, (3) Jim has a tail.

Think about the structure of the argument and, in particular, the relationship between (1) and (2). (1) and (2) express a single line of support for the conclusion, based on Jim's being a dog. A clue that the two are part of a single line of support is that the idea of 'being a dog' acts as a connection between the two: its appearance in both premises indicates that they go together to express a single line of support. The complete diagram, with its list of propositions, is as follows:

(1) Jim is a dog.
(2) A dog has a tail.
(3) Jim has a tail.

[Diagram: 1 + 2, with an arrow to 3, labeled J.]

This diagram can be read as "(1) together with (2) provides a single line of support intended to justify belief of (3).".

6. We have now seen examples of unstructured reasoning (or, a pile of reasons) and of combined reasons. A single passage can involve both of these types. Consider the following argument, which involves two sets of combined premises, but no indication of whether the two sets are separate arguments, or should also be combined:


(Of course tomatoes are fruit!) (1) [A lot of fruit is sweet,] and (2) [tomatoes are sweet.] What's more, (3) [apples are fruit] and (4) [tomatoes are about the same size as apples.] (So), (5) tomatoes are fruit.

The words "What's more" suggest that the speaker is starting a new line of thought, but they do not make it obvious whether the two sets should be joined, or are two separate arguments for the same conclusion. When we diagram, use a plus-sign to make clear that there are two premises in each set, but then use the split arrow to indicate the uncertainty about whether the two sets are separate arguments or should be combined. With the following key, the argument would thus be diagrammed as follows:

(1) A lot of fruit is sweet.
(2) Tomatoes are sweet.
(3) Apples are fruit.
(4) Tomatoes are about the same size as apples.
(5) Tomatoes are fruit.

[Diagram: 1 + 2 and 3 + 4, with a split-tailed arrow from the two pairs to 5, labeled J.]

The first plus sign indicates that (1) and (2) form a connected line of support. The same is true of (3) and (4). Between the two pairs, however, we use the split arrow, to indicate that either one might be sufficient to convince us of the conclusion.

3.3 Reasoning With A Conjunction In The Target Proposition

1. The word "and" is often used to conjoin propositions. In 2.3 we advised that when an "and" appears in the premises (and is being used to conjoin two propositions) the two propositions should be split apart. What about when the target proposition (the conclusion or the explainee) is a conjunction? Consider the following passage:

At the pet store: Labs aren't known as good guard dogs, but they do make great pets for young children. The reason is because they are gentle and friendly dogs.

The conjunction in the second sentence does not pose a problem. We divide it into two premises, the first being that Labradors are gentle and the second being that they are friendly. The first sentence is also a conjunction, of the propositions "Labs do not make good guard dogs."
and "Labs make great pets for young children.". How should we put this argument into standard form and diagram it? If the conclusion were a premise, we would simply divide it into its component propositions and write each of them down. Following this practice with the conclusion would give us this:


(1) Labs are gentle dogs.
(2) Labs are friendly dogs.
E -------------------------------
(3) Labs do not make good guard dogs.
(4) Labs make great pets for young children.

But explanations and arguments only have one target. We cannot simply pick one, either (3) or (4), because the speaker is not trying to explain (3), or (4), but both (3) and (4). The speaker will explain (3) and (4) if he explains (3) and explains (4). We must therefore break the passage into two explanations, as follows (in standard form):

(1) Labs are gentle dogs.
(2) Labs are friendly dogs.
E -------------------------------
(3) Labs do not make good guard dogs.

and

(1) Labs are gentle dogs.
(2) Labs are friendly dogs.
E -------------------------------
(4) Labs make great pets for young children.

In a diagram (using the proposition numbers as given in the standard form above) we draw two arrows from the reasons, one to each of the conclusions, and each labeled as an explanation, as follows:

[Diagram: (1) + (2), with a bar underneath, and two arrows labeled E, one pointing at (3) and one pointing at (4).]

(There is a bar under (1) and (2) because the words "The reason is ..." in the original passage suggest that the two reasons work together.) Recall from 2.3 that conjunctions can be expressed in a variety of ways and so sometimes the two conclusions which follow from the same set of premises might not be presented using "and". It is also possible that they might be presented in different sentences entirely. An arguer might use some variant, for example by drawing one conclusion and then saying "It also follows from these considerations that ..." and going on to state another conclusion.


3.4 Compound Reasoning

1. Every argument or explanation has only one ultimate target. However, arguments or explanations can be made by arguing first for one target proposition (which we will call an interim target) and then using that proposition as a reason to argue for or explain another target. These are compound or extended arguments or explanations. An argument or explanation is compound when one or more propositions functions as the target in one part of the reasoning and as a reason in another.

2. Consider the following example:

Honey is produced by bees, which live naturally. As a result, honey is natural. Natural things are good for you. So, honey is good for you.

This is a compound argument. In this argument, the speaker initially argues for (3) "Honey is natural.", and then adds (4) "Natural things are good for you." in order to conclude (5) "Honey is good for you." In this argument, (3) is both a conclusion and a premise. It is the conclusion following from (1) and (2) together, and it is a premise which, along with (4), supports (5). In standard form, we extend as follows:

(1) Honey is produced by bees.
(2) Bees live naturally.
-------------------------------------
(3) Honey is natural.
(4) Natural things are good for you.
J --------------------------------------------
(5) Honey is good for you.

Keep the ultimate conclusion at the end and write the premises which most immediately justify it above it. The premises which justify line (3) are written above it. There are two sub-arguments: the first involves (1) and (2) justifying (3); the second involves (3) and (4) justifying (5). (3) is common to both, as the conclusion in the first and a premise in the second. Something very similar can be done with diagrams.

[Diagram: 1 + 2 points at (3); then 3 + 4 points, via an arrow labeled J, at (5).]



Notice that in both the standard form and the diagram there is only one "J", which is next to the arrow pointing at the ultimate conclusion. There is no need to mark any other arrows, because they will all be the same.

3. Consider the following argument:

Pre-natal genetic testing, even for fatal diseases, should be outlawed. This is because such testing will surely lead people to start testing for non-essential qualities, such as intelligence and height. The reason for this is that people cannot help but try to get an advantage over one another. This comes from our evolutionary background and the scarcity of mates.

Each sentence is a single proposition. Assigning numbers to the propositions, we get:

(1) [Pre-natal genetic testing, even for fatal diseases, should be outlawed.] (This is because) (2) [such testing will surely lead people to start testing for non-essential qualities, such as intelligence and height.] (The reason for this is that) (3) [people today cannot help but try to get an advantage over one another.] This comes from (4) [the competition for mates in our evolutionary past.]

(2) "such" = "pre-natal genetic"
(4) = "There has been ..."

In standard form, it is written as follows (notice that the word "surely" has been removed from proposition 2):

(4) There has been competition for mates in our evolutionary past.
------------------------------------------------------------------------------------
(3) People today cannot help but try to get an advantage over one another.
-----------------------------------------------------------------------------------------------
(2) Pre-natal genetic testing, even for fatal diseases, will lead people to start testing for non-essential qualities, such as intelligence and height.
J -------------------------------------------------------------------------------------------------------
(1) Pre-natal genetic testing, even for fatal diseases, should be outlawed.
Using the proposition numbers as given in the standard form, the diagram looks as follows:

[Diagram: (4) points at (3), (3) points at (2), and (2) points, via an arrow labeled J, at (1).]


(1) is the main conclusion, and is supported by (2). This is one of the sub-arguments. (2), in turn, is supported by (3). This is another sub-argument. Last, (3) is supported by (4). So in this kind of compound argument, the premise for the main conclusion is itself the conclusion of another sub-argument, the premise in that argument is the conclusion of a third sub-argument, and so on.

3.5 Analysis: Summary (So Far)

1. Let's pause, briefly, to summarize the ground we have covered so far. In 3.1 and 3.2, we analyzed reasoning structures involving only one conclusion. In 3.3 and 3.4, we have discussed how multiple propositions can be justified or explained by the same set of reasons, and how a series of target propositions can be linked in an extended piece of reasoning.

2. You might notice that the passages are getting longer. Consider the argument in the following passage:

(1) Smith would make an excellent choice as our candidate. (2) [He is a superior speaker,] as is clear from the fact that (3) [his performance at the debate last week was great] and (4) [he has performed well any time he has appeared on the Sunday political shows.] (5) [He is also a great fund-raiser:] (6) [he has raised over a million dollars in the week since the Pennsylvania primary.] (7) [He also uniquely appeals to a broad cross-section of the population.] (8) [He has polled well across all major demographics (except Hispanics and those in the upper fifth of income).]

(2)-(8) He/His = Smith/Smith's

It's long, but it shouldn't frighten you. The structure of the reasoning is fairly straightforward: The phrase "as is clear from the fact that" prior to (3) functions as a premise indicator, and the colon prior to (6) signals that evidence for (5) follows. There are no conclusion flag words. (1), the claim that Smith would make an excellent candidate, is the main conclusion.
(2), (5), and (7), that he is a superior speaker, an excellent fund-raiser, and appeals to a broad cross-section of society, are intended to convince us of (1). Each is a separate line of argument, and it is not clear whether all three are needed to convince us of the conclusion. (3) and (4) are supposed to justify (2), and might each do so independently. (6) is supposed to justify (5). (8) justifies (7). Diagram as follows:

[Diagram: (3) and (4) each point at (2) via a split arrow; (6) points at (5); (8) points at (7); then (2), (5), and (7) point at (1) via a split arrow labeled J.]


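For readers who like to see the conventions made fully explicit, a diagram of this kind can be modeled as a small data structure: each arrow records which propositions jointly support which target. The following Python sketch is purely illustrative (the function names are my own, not the text's); it encodes the honey argument of 3.4 and recovers the ultimate conclusion, the one proposition that is a target but never a reason.

```python
# A diagram is a list of arrows; each arrow says that a set of
# reasons jointly supports a single target proposition.
# Proposition numbers follow the honey argument of section 3.4:
#   (1) + (2) -> (3), then (3) + (4) -> (5).
arrows = [
    ({1, 2}, 3),   # (1) and (2) jointly justify (3)
    ({3, 4}, 5),   # (3) and (4) jointly justify (5)
]

def ultimate_conclusion(arrows):
    """The ultimate conclusion is a target that never serves as a reason."""
    reasons = set().union(*(r for r, _ in arrows))
    targets = {t for _, t in arrows}
    (final,) = targets - reasons   # exactly one, if the reasoning is well-formed
    return final

def interim_targets(arrows):
    """Interim targets are propositions used both as target and as reason."""
    reasons = set().union(*(r for r, _ in arrows))
    targets = {t for _, t in arrows}
    return targets & reasons

print(ultimate_conclusion(arrows))  # 5
print(interim_targets(arrows))      # {3}
```

This mirrors the rule stated above: every argument or explanation has exactly one ultimate target, and in compound reasoning some propositions play both roles.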

3.6 Objections & Rebuttals

1. Arguments and explanations often provoke objections. And the objections can then be supported with further evidence or can be countered by a rebuttal. You can represent objections in standard form and diagrams. But because any reasoning which involves objections or rebuttals is complex, analysis using a diagram will prove more successful than using standard form.

2. As you know from chapter 2, any piece of reasoning involves reason(s) and a target proposition, and the reason(s) are supposed to justify or explain the target. There are correspondingly two ways to challenge an argument or explanation directly: one can argue that (i) even if one allows that the reasons are true, they do not, in fact, justify or explain the target, or that (ii) one or more of the reasons is false. In addition, (iii) an objector can pay no attention to the argument that has been given and simply argue that the (original) argument is bad by giving reasons which (are intended to) show that the conclusion is false. (It is also possible (iv) to challenge an argument or explanation generally, e.g. by attacking the person who makes it, which does not specify whether the premises are false or the reasoning is poor. You will see an example of this in sub-section 7 of this section.)

3. Imagine that a speaker makes the following argument:

(1) [Bill Gates does not own lots of gold.] (2) [If Bill Gates owns lots of gold, Bill Gates is rich.] (So,) (3) Bill Gates is not rich.

Consider an objection to the reasoning. Assume that the audience accepts both premises, but points out that the premises do not give us much reason to accept the conclusion. The audience would say something like "But there are other ways he could be rich besides owning lots of gold, such as owning lots of gems.".


This objection can be represented with a dashed arrow, pointing upwards at the arrow representing the reasoning, as follows:

[Diagram: 1 + 2 points, via an arrow labeled J, at (3); (4) points, with an upward dashed arrow, at that arrow.]

(Note: you do not need to label arrows representing an objection with a "J" or an "E", because objections are all argumentative, as will be explained shortly.)

All objections are represented by dashed arrows. The arrow points at the initial arrow because the objection is an objection to the reasoning. And, it points upwards because the arrows representing objections go in the opposite direction from the arrow involved in the argument or explanation that is being objected to. You might be tempted to add a fifth proposition, (5) "The conclusion (3) does not follow from (1) and (2).". But it is not correct to include this, because what (5) says is already shown by the diagram. The dashed arrow is read as "challenges the justification (or explanation) pointed to". Here, (4) is a reason to doubt that (1) and (2) justify (3).

Here is another example, this time using an explanation. Imagine that Jack says: (1) [I was late to work this morning] because (2) [the traffic was terrible]. But Gill replies, "But (3) [you left so early]! Even with bad traffic, you would have made it.". Here, Gill does not disagree that the traffic was bad; she only questions whether the bad traffic is a good explanation for Jack's lateness. Diagram as follows:

[Diagram: (2) points, via an arrow labeled E, at (1); (3) points, with an upward dashed arrow, at that arrow.]

Again, note that the words "Even with bad traffic, you would have made it." are not given a number or included in the list of propositions or in the diagram. This is because these words merely make explicit the idea that Gill thinks Jack's explanation is insufficient, and the diagram shows this, by including (3) as an objection.


4. For an example of an objection to the truth of a reason, we turn to Monty Python's celebrated "Argument Clinic" sketch. At one point, Michael Palin's character makes an argument to John Cleese's character, who then responds, as follows:

Palin: (1) [If you are arguing, I paid.] (2) [You are arguing.] (So), (3) I paid.
Cleese: (4) [I could be arguing in my spare time.]

With this objection Cleese challenges the truth of the first premise. It could be false, he says, that arguing indicates that he was paid, since he could be arguing without having been paid. He is not challenging the connection between the premises and the conclusion; if premises (1) and (2) were true, the conclusion would also be true. To represent the objection in a diagram, draw a dashed arrow, again pointing in the opposite direction to the original arrow, but this time pointing at the premise whose truth is being denied:

[Diagram: 1 + 2 points, via an arrow labeled J, at (3); (4) points, with a dashed arrow, at premise (1).]

Cleese's character might have explicitly added a proposition (5) stating that (1) is false. But even if this were explicitly stated, you would not add it to the diagram, since this is shown by the arrow challenging the premise and it would be redundant to include it as a proposition.

5. Objections to the truth of a reason or to the reasoning suggest that the original target has not been established; the original argument or explanation is not successful. This does not mean that the target is false. It only means that the current argument or explanation has not successfully justified or explained the target, at least according to the objector. Objectors, however, sometimes ignore the original argument and simply give reasons for believing that the conclusion or explainer is false. In the argument that Bill Gates is not rich because he does not have a lot of gold, the audience might simply object "It's false that he is not rich. He is; he owns lots of Microsoft stock.". If (4) is "Bill Gates owns lots of Microsoft stock.", it can be added to the diagram as follows:


[Diagram: 1 + 2 points, via an arrow labeled J, at (3); (4) points, with a dashed arrow, directly at (3).]

In this diagram (4) points directly at (3), indicating that the speaker takes (3) to be false because of (4). Replies which directly argue that the conclusion is false imply that something is wrong with the original argument, but, because there is a definite argument that the conclusion is false, diagram by pointing the arrow at the conclusion. Note, yet again, that the words "It's false that he is not rich" are not included in the list of propositions or in the diagram. The diagram shows the objector's contention that the target is false.

6. In summary: there are three different types of objection.

Objection to one or more of the reasons, i.e. that a premise is false, or that an explainer is false.
Objection to the reasoning, i.e. that even if the reasons are true, the conclusion is not justified by them, or that the explainee might not occur.
Objection to the target, i.e. that the conclusion is false, or that the explainee did not occur.

It can be difficult to tell the difference between an objection which argues that the reasoning is bad and an objection which attempts to show that the target is false. Speakers sometimes think (and say) that the target is automatically false because the reasoning is poor. But this is not correct; the target might be true for other reason(s).

7. Just above it was noted that there is no need to label objections with a "J" or "E" because all objections are argumentative. As was said in chapter 2, explanatory contexts are quickly turned into argumentative ones. Here's a piece of dialogue from chapter 2, with some comments:

Jack: Did you hear that the game is cancelled?
Gill: I did. Do you know why it is cancelled? [A belief is shared; Gill asks for an explanation.]
Jack: It's because of the heavy rain we had yesterday.
Gill: I doubt that; the new drainage system should be able to handle the rain. [Gill doubts that the explanation offered is correct.]

Jack: Maybe it is malfunctioning ...

The diagram of this dialogue will begin with the explanation:

(1) The game is cancelled.
(2) There was heavy rain yesterday.

[Diagram: (2) points, via an arrow labeled E, at (1).]

then Gill's challenge to the explanation is added:

(1) The game is cancelled.
(2) There was heavy rain yesterday.
(3) The new drainage system should be able to handle the rain.

[Diagram: (2) points, via an arrow labeled E, at (1); (3) points, with an upward dashed arrow, at that arrow.]

All objections are arguments and so it would be redundant to mark them with a "J" in our diagrams. An interesting example which makes this point well is a form of objection called explaining away an argument or explanation. Despite its name, it is a form of objection and thus a form of argument. For example:

A politician speaks against government support for the struggling auto industry: My opponent argues that handouts to the auto industry must be continued because those companies provide jobs to many workers. But my opponent only says this because he receives a lot of contributions from Big Auto.

(1) Handouts to the auto industry must be continued.
(2) Auto companies provide jobs to many workers.
(3) My opponent receives a lot of contributions from Big Auto.

[Diagram: (2) points, via an arrow labeled J, at (1); (3) points, with a dashed split-headed arrow, at both (2) and the arrow from (2) to (1).]

The arrow from (3) against (2)'s support of (1) is an explanation being used as an argument. What the speaker does by giving the explanation is actually attack the argument (that auto companies deserve support because they provide many jobs). The speaker is in effect arguing as follows: "My opponent's argument is likely to be bad, because he is under the influence of the auto companies.". Note that because the speaker has not specified in what way the opponent's argument is bad (that is, he has not specified whether he thinks a reason is false, or the reasoning is poor) you use a split-headed arrow, pointing at both the number (2) standing for the reason and the arrow indicating its support of (1).
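These arrow conventions can also be captured as data, which makes the types of objection easy to tell apart mechanically. The Python sketch below is only an illustration (the arrow names and proposition numbering are hypothetical, not the text's): each arrow is given a name so that a dashed arrow can point either at a proposition or at another arrow, as in the Bill Gates examples above.

```python
# Each arrow is given a name so that other arrows can point at it.
# kind "J"   = regular arrow (justifies what it points at)
# kind "obj" = dashed arrow (challenges what it points at)
# A target may be a proposition number or the name of another arrow.
arrows = {
    "main": {"kind": "J", "sources": [1, 2], "target": 3},
    # "There are other ways he could be rich": an objection to the
    # reasoning, so its dashed arrow points at the arrow itself.
    "o1": {"kind": "obj", "sources": [4], "target": "main"},
    # "He owns lots of Microsoft stock": an objection to the target,
    # so its dashed arrow points directly at the conclusion (3).
    "o2": {"kind": "obj", "sources": [5], "target": 3},
}

def objection_type(name, arrows):
    """Classify an objection by what its dashed arrow points at."""
    arrow = arrows[name]
    assert arrow["kind"] == "obj"
    target = arrow["target"]
    if isinstance(target, str):
        return "objection to the reasoning"
    if any(target in a["sources"] for a in arrows.values() if a["kind"] == "J"):
        return "objection to a reason"
    return "objection to the target"

print(objection_type("o1", arrows))  # objection to the reasoning
print(objection_type("o2", arrows))  # objection to the target
```

A dashed arrow pointing at a premise (for example, Cleese's objection to premise (1)) would come out as the third kind, an objection to a reason, under the same rule.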


8. Once an objection is on the table, it can be given support, or a rebuttal made to it. Let us gradually build up an example, of Jack trying to persuade his friends, including Gill, to go to a certain movie. In diagram form, Jack's argument is:

(1) Snakes on a Plane is an action-adventure movie.
(2) I [Jack] am definitely in the mood for an action-adventure movie.
(3) Snakes on a Plane stars Samuel L. Jackson.
(4) Samuel L. Jackson is a great actor.
(5) We [Jack and the audience] should go to see Snakes on a Plane.

[Diagram: 1 + 2 and 3 + 4 each point at (5) via a split arrow labeled J.]

Gill then says, "But we can't go because (6) [the theatre Snakes On A Plane is showing at will take a long time to get to].". In a diagram:

[Diagram: 1 + 2 and 3 + 4 point at (5) via a split arrow labeled J; (6) points, with an upward dashed arrow, at that arrow.]

Now imagine that, in response to Gill's worry, Jack asks for justification of the claim that the theatre will take a long time to get to. Gill in response says, "The theatre is 20 miles from here.". Let this be proposition (7). In a diagram, you can present (7)'s support for (6) by having a regular arrow from (7) to (6), but going in the same direction as the dashed arrow from (6), as if (7) and (6) were both pushing in the same direction, against the original argument. Diagram as follows:

[Diagram: 1 + 2 and 3 + 4 point at (5) via a split arrow labeled J; (6) points, with an upward dashed arrow, at that arrow; (7) points, with an upward regular arrow, at (6).]


Note that while the upward dashed arrow from (6) challenges the original argument, the upward regular arrow from (7) to (6) supports (6). In sum:

A regular arrow means "justifies what it points at".
A dashed arrow means "challenges what it points at".

The direction of each arrow depends upon its role in the overall argument. Thus, an objection to the reasoning in an objection will have a dashed arrow pointing at another dashed arrow, but going in the opposite direction. Consider the following diagram:

[Diagram: 2 + 3 points, via an arrow labeled J, at (1); (5) and (6) point at (2) via a split arrow; (4) points, with a dashed arrow, at (2); (9) points, with a dashed arrow, at (3); 7 + 8 points, with a dashed arrow, at the arrow from (9); (10) points, with a dashed arrow, at (1); (11) points, with a dashed arrow, at the tail of the split arrow from (5) to (2).]

It is clear from this diagram, for example, that (9) objects to the truth of (3), while (7) and (8) are a joined reason objecting to the claim that (9) shows that (3) is false. (9) is a challenge to the main argument; (7) and (8) indirectly support the original conclusion (1). (Note that (11) points specifically at the tail of the split arrow from (5) to (2). If it pointed at the bottom part of the arrow, it would challenge the combined support of (5) and (6).)

3.7 Analyzing Long Passages

1. With the inclusion of objections and rebuttals, the passages are beginning to get longer and more complex. Let's pause to make two important points. These two points are so important that they get their own section. The first is something that has been said already, but which bears repeating. Speakers presenting complex arguments or explanations often include phrases which tell the audience what the impact of a new proposition is. For example, a speaker might say "that (reason) is false, because of ..." or "that (target) is not explained by (4) and (5),


because of (6).". There is no need to include these remarks about what the impact of the objection is as propositions in the diagram. The positioning of the arrows in the diagram will show what work the new proposition is doing. In the complex diagram above, you can see that (4) challenges the truth of (2) and that (7 + 8) rebuts (9)'s objection to the truth of (3).

2. Second, objections (and support for and rebuttals to them) often appear in passages delivered by a single speaker. In such cases, you should expect to see a (brief) summary of an initial argument or explanation, and then, in full propositions, the objection(s). The summary of the initial reasoning is often flagged with a phrase attributing the argument to some person(s) such as "the editorial in today's newspaper argues ..." or "my opponents argue ..." or "Dr. Cornmire explains that ...". The difficulty here is that the speaker will (often) not present the argument or explanation being criticized one proposition at a time. It is your job, in such cases, to extract the information you need, put it into propositional form, and reconstruct the original argument or explanation. One way of telling when the summary has ended is to watch for the moment when the speaker switches to her own objection(s). Common phrases introducing objections are "however" and "but", and comments on the original argument such as "they are wrong", "they have forgotten" and other forms of criticism, which also provide a segue to the speaker's own contribution. Such a phrase might also indicate whether the up-coming objection will challenge the truth of a premise or the strength of the reasoning. For example, "they have their facts wrong" indicates that the objection will be an objection to the truth of a reason.

3.8 Analyzing Very Long Passages

1. Long passages can be difficult to analyze because (typically) they involve many propositions, they involve multiple lines of reasoning, and they involve objections and rebuttals.
This section suggests that you add a step of looking for the large-scale structure of the passage before attempting to isolate the individual propositions.

2. Consider this editorial on getting Iraq to pay some of the cost of the U.S.'s operations. As always, you should begin by trying to isolate the conclusion of the piece. To do this, read the whole piece, paying particular attention to the headline (and the sub-headlines, if any) and the start and end of the article. Do not rely on the headline and sub-headlines. Often they merely serve to set the scene or the topic of the piece. The beginning and end of the piece are typically more reliable, but are inferior to reading the whole thing. Our specific example poses a bit of trouble, because of some variation. The conclusion of this argument seems to be

(1) Iraq should bear more of the cost of U.S. operations in Iraq.

but note that this is not exactly the conclusion suggested by the sub-headline, which references only security specifically, whereas the article mentions other areas. The last line of the article seems to do a better job of capturing the article's main thesis.

3. Having isolated a (working) conclusion, your next step would normally be to look for the premises. But it would be potentially confusing to approach an article of this length by numbering the propositions one by one from the beginning and assuming that the structure will reveal itself straightforwardly. There are too many propositions to keep in mind at once. Rather, you should attempt to summarize the main lines of reasoning in the article. (One way to do this is to pretend that you are giving a very brief summary of the article to another person.) Once you have identified the main lines of reasoning, it is often beneficial to insert a summary proposition for each one, as interim conclusions. Let's demonstrate these practices with our working article. Reading through the article, the main points are that (i) the U.S. is struggling economically, (ii) the war costs a lot, and (iii) Iraq has money to spare. (There are also propositions describing precisely what costs Iraq could cover. We'll leave these out. They are simply suggestions for how Iraq might contribute and logically come after the main conclusion, that Iraq should help. Their real purpose seems to be to introduce information about how expensive the war is.)
These three can be considered as interim conclusions, which then go together to justify the conclusion. We can sketch the macro-structure of the argument as follows:

(1) Iraq should bear more of the cost of U.S. operations in Iraq.
(2) The war costs a lot.
(3) The U.S. is struggling economically.
(4) Iraq has money to spare.

[Diagram: 2 + 3 + 4 points, via an arrow labeled J, at (1).]


4. Now we can list all of the specific propositions which are relevant to each interim conclusion. The propositions concerning cost to the U.S. are:

(5) The U.S. has spent more than $500B since 2003.
(6) The U.S. is paying $10B/month for fighting, reconstruction and training.
(7) The U.S. needs another $1B for rebuilding.
(8) The U.S. spends $90M/month to pay for other groups.
(9) The U.S. spends $153M/month to pay for fuel.

The propositions concerning Iraqi ability to pay are:

(10) Iraq has the world's fourth largest oil reserve.
(11) Iraq has made a $70B profit from oil.

(We might also include "Iraq subsidizes gasoline for its citizens." if the implication is that some of that subsidy could be transferred to the U.S. forces. But it's not clear that this is in fact what is implied. A proposition that a subsidy is deserved will, however, be added below.)

The propositions concerning the U.S.'s struggling economy are:

(12) The U.S. is running huge deficits.
(13) U.S. consumers are suffering at the pump.
(14) The U.S. is teetering on the brink of a recession.

We add (14) here because, although it is introduced later as a response to some objections, it does not seem to respond to them particularly. So, we include it as part of the early thread on the state of the U.S. economy. There are also two other considerations which seem to be unrelated to the three interim conclusions identified:

(15) There are bills pending in Congress to make Iraq pay more.
(16) U.S. forces deserve the same subsidy on gas as Iraqi citizens get.

(15) seems to be added as an appeal to authority and/or popularity. The main argument is presumably sufficient to justify the conclusion, and these are thrown onto the pile.

5. Now we must figure out how the propositions in each group are related to each other and to the interim conclusion. No specific inter-relation seems to be indicated in the argument, and so we can simply add the propositions together. Using the numbered propositions above, the diagram at this stage looks like this:


[Diagram: 5 + 6 + 7 + 8 + 9 point at (2); 10 + 11 point at (4); 12 + 13 + 14 point at (3); then (2), (3), (4), (15), and (16) point, via an arrow labeled J, at (1).]

Let us now consider the reasons against:

(17) Iraq's economy is shaky.
(18) Contributing more might risk Iraq's agreements with the IMF.
(19) Contributing more might risk Iraq's debt forgiveness efforts.
(20) The U.S. wants to retain efficiency.
(21) The U.S. wants to retain control.
(22) Iraq's oil revenue is a small fraction of what's needed.

(17) and (22) are related to the idea that Iraq has the resources to contribute more funds, but they do not challenge either (10) or (11) specifically. So they challenge the move from (10) and (11) to (4). (18) through (21) do not challenge the truth of any of the premises; they present new information entirely and so are construed as (individually) challenging the main argument.

[Diagram: as before, with (17) and (22) each pointing, with a dashed arrow, at the arrow from 10 + 11 to (4), and (18), (19), (20), and (21) each pointing, with a dashed arrow, at the main arrow to (1).]

We are now ready to evaluate the reasoning. As you can see from reading this analysis, long passages, and especially editorials, can be very messy and difficult. In a number of places above, it was hard to

say exactly how to incorporate some part of the article into the analysis. This is typical. Don't panic. Analyze the piece as best you can. It is likely that the article actually is unclear. It is a sign that you have done a good job if you have generated various questions in the course of your analysis. Critical reasoners very frequently end up with questions since they are spending more time and effort on what is said than the speaker did!


Chapter 4 Evaluation: Introduction

4.1 Two Criteria

1. In this chapter the two basic criteria of good explanations and arguments are introduced and applied to the reasoning structures described in chapter 3. The bulk of the chapter then focuses on the first of the criteria (that the reasons must be true). At the end of the chapter, a general strategy which helps with the second criterion (that the reasoning must be good) is introduced. This strategy applies to both arguments and explanations and will be discussed further in chapter 5, as it applies specifically to arguments, and in chapter 7, as it applies specifically to explanations.

2. In chapters 2 and 3 you have learned how to recognize and distinguish explanations and arguments, to analyze passages containing them into their constituent propositions (chapter 2), and to present the structure of the reasoning in a diagram or in standard form (chapter 3). Let us now turn to evaluation.

3. We want to have a firmer grasp of what we are looking for when we ask, of an argument, "Do these premises justify acceptance of this conclusion?", and of an explanation, "Does this explainer explain the explainee?". Arguments and explanations have to meet two general criteria. The first is that the reasons must be true, or at least, accepted as true by you (the audience, the person doing the evaluation). The second is that the reasoning must be good. Here is a slogan to keep in mind:

Check the reasons. Check the reasoning.

If either one of these requirements is not met, the argument or explanation is unsatisfactory. If an argument meets these criteria, it is sound (or sometimes (properly) convincing or a good or well-reasoned argument). If an explanation is successful, we say that it is (properly) explanatory (or properly satisfying or a good or well-reasoned explanation).
If, for example, you are given the following explanation "Democracy is just because it requires the consent of the governed.", you would first wonder if democracy really does require the consent of the governed. In doing so, you are checking the truth of the reason offered. You might also have questions about whether requiring the consent of the governed is enough to make a form of government just. Here, you are evaluating whether the explainer explains the explainee. Or to take another example: A friend says to Gill "It will rain this afternoon.". She is inclined to conclude that "It will rain this afternoon." but, since she is thinking

critically, she wonders whether she should believe that it will rain this afternoon based on this evidence. She thinks to herself, perhaps, "I'm quite certain I heard my friend say that it will rain this afternoon; it wasn't windy or noisy and she spoke clearly. Plus, people typically don't just lie. What's more, I know and trust this person, which increases my confidence. My friend wouldn't claim that it will rain this afternoon if she didn't have good reason to believe so.". In the first sentence, Gill is checking the truth of the premise, that her friend said it would rain. In the rest of the paragraph (following the "plus") she is evaluating the support the premises give to the conclusion. Similarly, the worry "I'm not confident that he said it would rain; I had a lot on my mind this morning and was pretty distracted." evaluates the truth of the premise and by itself would be enough to make Gill lose confidence in the conclusion. The criticism "My friend did say it would rain, but he sometimes says things just to get attention. So, now that I think about it, even if he did say it would rain, I probably shouldn't take his word for it." is a criticism of the connection between premises and conclusion, and by itself this is enough to feel that the conclusion has not been justified.

4. To repeat: Both the reasons and the reasoning of any argument or explanation need to be checked, and failure to satisfy either one of these two criteria is enough to reject the piece of reasoning: the conclusion has not been justified or the phenomenon has not been explained. Simply forcing yourself to evaluate the reasons and the connection between the reasons and the target is a tremendous step toward becoming a critical reasoner. These tasks are difficult for humans to do (at all) and difficult to do well. Humans are more interested in the conclusion or explainee than in how it is justified or explained.
Our laziness makes us inclined to accept any argument or explanation that is given, especially if it is complex and requires a lot of effort to follow. Perhaps the most difficult passages to evaluate critically are those arguments which have a conclusion which we already believe (on other grounds besides those in the argument) to be true. But a passage can contain a faulty argument even when we agree with the conclusion, and we should evaluate the strength of the connection carefully. Thinking that an argument is good or bad simply because one (already) accepts or rejects the conclusion is called mistaking the conclusion for the argument. There is a similar failure with respect to explanations, namely failing to explore other explanations for a given
phenomenon because one already has an explanation, but it doesn't have an official name.

4.2 Getting Clear On The Meaning

1. An argument or explanation can only be successful if the reasons used are true, or at least are accepted as true, and justify or explain the target. In order to check each of these criteria, both the reasons and the target must have a clear meaning. Often, however, the meaning of one or more of the propositions will be unclear.

2. One difficulty is that humans will often point at reasons rather than giving the reason explicitly. Consider the following passage:

(1) People cannot help but try to get an advantage over one another. (2) [This comes from our evolutionary background] and (3) [the competition for mates.]

(2) "This" = (1)
(3) "There has been ..."

One problem you might have with this argument or explanation is that (2) refers loosely to "our evolutionary background". But this could mean any number of things. It might simply be what is mentioned explicitly in (3) (competition for mates) or something else. It would be appropriate to wonder what, precisely, is meant.

Speakers might alternatively point to a source of a proposition as a reason for believing it, without being specific. For example:

(1) [Experts say that although the economy has been recovering, it will enter a second or "double-dip" recession.] (So), (2) that's how it will be.

(2) = "There will be a second recession."

It is not clear how we would go about verifying whether the first proposition is true or false, since it is not clear who the "experts" referred to are. We might find some experts who say that a double-dip recession is imminent, but these might not be the experts that the speaker has in mind. Since we cannot confirm that the proposition is true, the argument fails the first criterion.
It's possible and perhaps likely that speakers who use phrases such as "experts say" or "everybody knows" do not, in fact, have any specific source in mind, but rather feel so sure of their belief that they assume that other people must agree with them.


3. Imprecise Language. Pointing at reasons in these ways is an example of imprecise language. Imprecise language is language which is not specific enough, as given, to evaluate, and you must (try to) make it precise before evaluating it. If you cannot supply a meaning, the language can be called vacuous. Imprecise language is often used for the very purpose of disguising the fact that a clear meaning is lacking. Imprecise language often conveys some emotional meaning, which the speaker hopes will cause the audience to accept the argument or explanation. The example from the previous sub-section, "experts say", lends a scientific air to what is said, but cannot be evaluated for truth until the audience knows which experts are meant. It is unreasonable of the speaker to assume that the audience will know which ones.

A lot of ordinary language can be vacuous if it is not made explicit. For example, comparatives (such as "taller", "cheaper", and so on) are vacuous unless a comparison class is supplied. Consider the following:

NEW IMPUNITY CIGARS: smoother by far!

An audience would be entitled to ask "Smoother than what?" (and also for a more precise understanding of "by far"; see the sub-section on vagueness, just below). Advertisers will often say that the product is better than "a leading brand" or "many other brands", but these comparisons need to be made explicit if they are to be evaluated for truth.

4. Language Used As A Shield And Weapon. In general, words which emphasize an emotional component over a literal meaning are called euphemisms (if the emotion is positive) and dysphemisms (if negative). Such words can be used in propositions with the hope that the audience will respond to the emotional content. Euphemisms try to make something sound better than it really is. Imprecise scientific language is often used to give the impression of authority or sophistication.
Speakers hope that audiences will not only fail to understand the technical term, but be impressed by the fact that the speaker is using a scientific term that they do not understand. For example, food fashion changes every few years; recently, anti-oxidants and omega-3 have been all the rage. Consumers are encouraged to make purchases based on these features, though it's not likely many consumers could evaluate the truth of a claim such as "This product contains anti-oxidants." or know why this might be
healthful. The product, of course, tells the consumer that it contains anti-oxidants, and the buzz around the phrase suggests that it is important to one's health. Here are other examples of what, to most people, will be phrases which conjure up good feelings (which helps convince them to accept the conclusion or explanation) but which lack meaning and so make checking the first criterion impossible:

family values; moving forward; much-needed change; fulfill the promise of a generation; working Americans; green initiative; unbeatable prices; organic; naturally flavored; old fashioned; home-style

Consider also the following examples:

fuel-injection technician; commodity relocation; freedom fighter; vertically challenged; full-figured; passed on; between jobs; pre-owned; sales associate; executive assistant; down-sizing; enhanced interrogation; transfer tubes; creation science; climate change

A "fuel-injection technician" is in fact someone who pumps petrol/gasoline at a filling station, but the words "fuel-injection" and "technician" are both intended to make the audience think that the job is quite sophisticated and perhaps even glamorous.

The opposite of a euphemism is a dysphemism. These make things seem worse than they actually are. Consider the following examples:

tree-hugger; snail-mail; death tax; anti-life; grammar Nazi; death-trap

Like metaphor and simile (in the next sub-section), euphemism and dysphemism are perhaps not vacuous, if a meaning can be given to them, but they add an extra step to the evaluation process. When confronted by a passage with a proposition which includes a euphemism or dysphemism, the proposition must be re-written.

5. Metaphor and Simile. In an effort to be entertaining as well as informative, humans often use colorful language when speaking. Unfortunately, as far as evaluating the truth of propositions is concerned, these are a distraction.
Metaphors and similes perhaps stimulate the brain more than plain language, and do so in ways which make for convincing arguments and satisfying explanations, but they can make the propositions they appear in difficult to grasp and therefore to judge. Consider the following examples:

Jack refused to let Jim off the leash to chase squirrels because he has a heart of stone.
The new iPhone is flying off the shelves. Visit your local Apple store today!
Life is like a box of chocolates. There's no reason to give up because of one setback.

Jack cannot literally have a heart of stone and iPhones do not fly off shelves. So, if these propositions are taken literally, they are false. Similarly, there are many significant ways in which life is unlike a box of chocolates. However, each will be easily recognized as a metaphor or simile. But what exactly do they mean, in non-metaphorical terms? Perhaps "Jack is mean-spirited.", "The new iPhone is selling well.", and "A variety of positive and negative events happen in the course of life.". Metaphors and similes are perhaps not vacuous, since a meaning can often be given to them, but they add an extra step to the evaluation process. (Metaphor and simile are related to analogy, for which see 9.4.)

6. Vagueness & The Continuum Fallacy. Vagueness is a special kind of imprecise language, which depends on a concept which varies by degree. The word "bald", for example, is vague. It clearly applies to a person who has no hair, and it clearly does not apply to a person who has a full head of hair. But for some people, whether or not it applies is unclear. In short, there are borderline cases. In order to deal with a premise which contains a vague term, we need to make the term precise. In the case of "bald" (in propositions such as "Jones is bald."), we might say just how many hairs the person in question has, or, more realistically, give at least a more precise statement of the degree of baldness, such as "Jones is totally bald." or "Jones is bald on top, but has hair on the sides." or "Jones is about as bald as Winston Churchill was.".

7. The continuum fallacy (or sorites paradox) exploits vague terms. It is a form of argument which takes advantage of the fact that vague terms do not have clear dividing lines at any point between the extremes but vary only by degree, in order to argue, fallaciously, that two quite different states share some property.
Such arguments have the following form:

(1) There is a continuum c.
(2) Some thing on one end of c has property p.
(3) Moving one increment along c cannot result in a change from p to not-p.
---------------------------------------------------------------------------
(4) Some thing on the other end of c has p.

The mere fact that there is no sharp line between things having p and things not having p is supposed to give us good reason for thinking that there is no difference in terms of
p between things on one end of c and things on the other end, so that since the things on the left have p, so do the things on the right. Here is an example:

A person having exactly 1 penny is not significantly different in wealth from a person having exactly 2 pennies, and a person having exactly 2 pennies is not significantly different in wealth from a person having exactly 3 pennies, . . ., and a person having exactly 99,999,999,999 pennies is not significantly different in wealth from a person having exactly 100,000,000,000 pennies. Thus, since a person having exactly 1 penny is not rich, a person having exactly 100,000,000,000 pennies is not rich.

There is good reason for thinking the premises are true, but there is not good reason for thinking the conclusion is true. In fact, there is good reason for thinking the conclusion is false. The reasoning, thus, must be bad.

8. Weasel Words. A weasel word (or phrase) is used to qualify a more striking claim. These are often vague words. The hope on the part of the speaker is that the audience will not pay attention to the weasel words, because their meaning is not immediately obvious, and will focus only on the rest. For example,

NEW WEIGHT-AWAY HELPS YOU LOSE THE POUNDS!!

"Helps" is a weasel word. The speaker is hoping that the audience simply connects the product with "losing weight", perhaps because it is difficult to give a definite meaning to the word "helps". Here is another example:

LOSE UP TO 10 POUNDS A WEEK WITH NEW WEIGHT-AWAY!!

"Up to" is a weasel phrase. The advertiser is hoping that the audience thinks only of "lose" and "10 pounds a week". Again, "up to 10 pounds" could be anywhere between 0 and 10 pounds. Since no precise meaning is given, the brain might instead fix on "10", which is precise and optimistic. (A common form of weaseling in advertising is to present a striking claim prominently, and then qualify it in the "fine print" which is, literally, difficult to read.
For example: LOSE 10 POUNDS IN TWO WEEKS WITH NEW WEIGHT-AWAY!!*
(*In conjunction with a moderate diet and regular exercise.)

The large print giveth; the small print taketh away. This might be interpreted as exploiting an ambiguity in the word "with".)
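Returning to the continuum fallacy from sub-section 7, the penny argument can be sketched in a short program. This is a hypothetical illustration, not part of the original text: the function name and the use of a one-penny increment are my own choices, made only to mirror premise (3) of the form given above.

```python
# A sketch of the penny version of the continuum fallacy (sub-section 7).
# Premise (3) of the form says that one increment along the continuum
# (here, one penny) can never amount to a significant difference in wealth.

def significantly_different(pennies_a, pennies_b):
    # One penny of difference is treated as insignificant, per premise (3).
    return abs(pennies_a - pennies_b) > 1

start = 1                # a person with 1 penny: clearly not rich
end = 100_000_000_000    # a person with 10^11 pennies: clearly rich

# Each individual premise is true: adjacent amounts never differ significantly.
assert not significantly_different(1, 2)
assert not significantly_different(99_999_999_999, 100_000_000_000)

# Yet the endpoints differ enormously, which is why the conclusion
# ("a person with 10^11 pennies is not rich") does not follow.
assert significantly_different(start, end)
print("tiny steps, huge total difference:", end - start)
```

The sketch shows where the argument goes wrong: each single step preserves "not significantly different", but chaining billions of such steps does not, so the fact that no single step crosses a sharp line gives no reason to think the endpoints are alike.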


9. Ambiguity & The Fallacy of Equivocation. Ambiguous words are imprecise because they have multiple precise meanings. The word "pen" is ambiguous: it can be used to refer to a tool for writing, it can be used to refer to an enclosure for animals, and it can be used to refer to a penitentiary. The same goes for "has" in the sentence "Hannibal often has his friends for dinner.". It can be used to say that Hannibal often eats dinner with his friends, and it can be used to say that Hannibal often eats his friends for dinner. In general terms, a word, phrase, or sentence is ambiguous just in case it has multiple meanings. In order to evaluate it, we must resolve the ambiguity.

Ambiguities can arise syntactically. Some (often amusing) mistakes occur when dependent clauses appear to modify an inappropriate entity. A classic example is "Wanted: A piano by a local woman with wooden legs.". Or again: "Man's arm severed, 3 others critically injured in crash near Midway." (Chicago Sun Times, July 19, 2009); "The student described how the relationship escalated from Facebook flirtations to sexual intercourse during a courtroom appearance." (Huffington Post, July 16th, 2009).

10. When the same word is used with different meanings in different propositions, the fallacy of equivocation is being committed. Consider the following argument from the abortion debate:

(1) A fetus is a human being.
(2) A human being has a right to life.
--------------------------------------
(3) A fetus has a right to life.

This argument commits the fallacy of equivocation, since the phrase "human being" must be understood in different ways in premise (1) and premise (2) in order to make them true. "Human being" can be used to refer to a biological human being or to a person. Using either meaning consistently throughout the argument produces a valid argument but makes one of the premises false. Let's consider each meaning, one at a time.
If we take "human being" to mean "biologically human", we get the following argument:

(1) A fetus is a biological human.
(2) A biological human has a right to life.
-------------------------------------------
(3) A fetus has a right to life.

If we take "human being" to mean "person", we get the following argument:
(1) A fetus is a person.
(2) A person has a right to life.
---------------------------------
(3) A fetus has a right to life.

The second premise in the first argument is false (it is thought by many), and so is the first premise in the second argument (it is thought by many). On the other hand, if "human being" is used ambiguously in the premises, so that it is used to refer to being biologically human in the first premise and to being a person in the second, then, although the premises are true, they do not serve as good evidence for the conclusion:

(1) A fetus is biologically human.
(2) A person has a right to life.
---------------------------------
(3) A fetus has a right to life.

Both premises are true, but the support is weak. Each premise presents a different line of support, and neither one, nor both added together, is particularly convincing. In general terms, an argument is an instance of the fallacy of equivocation just in case (1) it makes use of an ambiguous word, phrase, or proposition, (2) the premises are true only if that word, phrase, or proposition is used ambiguously, and (3) the argument's reasoning is good only when the ambiguousness of that word, phrase, or sentence is disguised.

4.3 Sources

1. Getting clear on what a proposition means is the first step in evaluating it. Since the propositions involved as reasons in arguments and explanations can be about anything at all, knowing whether they are true or false requires knowledge of the relevant topics. For example, suppose as a reason for some target a speaker asserts "The Allied forces suffered more fatalities in World War II than the Axis forces.". In order to say whether this proposition is true, you need to be knowledgeable about World War II. If you lack this knowledge, the argument or explanation the proposition is a part of will not be satisfactory.
Whenever you cannot accept a reason as true, for whatever reason, you should say that the argument or explanation of which it is a part is defective, since it rests on reasons you do not believe to be true.


In this chapter and this book we are not able to discuss the truth or falsity of premises concerning specific topics; consult your perception, your memory, your intelligence, or the requisite sources. However, it's worth thinking, in a general way, about the sources of belief and the qualities which make them trustworthy.

2. Notice that when thinking about the truth of the premise "My friend said it would rain.", evidence is presented as to why the premise might be true (it wasn't windy or noisy and she spoke clearly) or might not be true ("I wasn't really paying attention."). In each case, an argument is being made to support or deny the truth of the claim. The argument in support goes like this:

It wasn't windy or noisy when my friend said "It will rain this afternoon." and she spoke clearly. So, she did in fact say "It will rain this afternoon.".

But now this argument can be evaluated. Was there really no wind or noise, and did she really speak clearly? And, even if it wasn't windy or noisy and she did speak clearly, is that enough to make it true that Gill correctly heard what she said? Well, perhaps evidence can be added which will convince her that it really wasn't windy. But whatever reasons are provided for that conclusion, those can then be questioned! Where does it end?

At some point, you stop asking for evidence. In general, you stop when you find a reason that you can be confident in. A typical stopping point is experience, either your own direct experience or the experience of others, which they testify to. But even your own experience can be challenged. If the claim is "It is raining." and your reason for believing this is that you feel drops of water from above, you could ask yourself "But am I really feeling drops of water from above? Perhaps my skin is prickling, as a reaction to heat or something I ate. Perhaps I am dreaming or hallucinating.". Such concerns are the topic of this section.

3. A source must be an expert.
Expertise is checked by reliability. The most common reliable source of beliefs is sense-experience (preferably one's own). Though it might sound odd to say it, most people are "experts" when it comes to perceiving the world. Some people, however, have senses that are defective in various ways, such as being short-sighted or hard of hearing, and some lack a sense altogether, such as being blind or deaf.

Immediate sense-experience is preferred. Even your own immediate experience can be suspect, however, if you are under the influence either of something that affects you physiologically (alcohol, drugs, etc.) or of a bias that affects you psychologically (such as being angry, prejudiced, or influenced by some strong
desire). In such cases, your experience might be faulty. Optical illusions can also thwart the senses. Immediate sense-experience is what you rely on as you move around and interact with the world. Beyond this, however, you must also rely heavily on memory, though memory can be unreliable, even for quite recent events. (It is also possible to invent false memories and deliberately create them in people.)

Unmediated sense-experience is also preferable. The devices and instruments that are used to communicate information can have their own problems and introduce doubt simply by being an extra step in the process. For example, some people were skeptical that the pictures of the astronauts from Apollo 11 on the moon were real. The doubt was possible because they were watching the pictures on television, rather than being on the moon within sight of the events.

Immediate and unmediated sense-experience, however, doesn't take you very far in your attempt to understand the world. You rely greatly on other people for information and theories, and on instruments and machines to provide the data you need in order to generate and test your theories about how the world works. Since this information or these data are then passed on to you via your senses, it is possible that you can simply misread or mishear what the other person wrote or said or what reading the instrument was showing. The light might be poor, there might be loud noise, and so on. And again, you might misremember what was written or said or shown. It is also possible that the person or the instrument is malfunctioning. In the case of a person, "malfunctioning" would be a failure on her part to properly perceive or remember her sense-experience.

A person who possesses a large set of beliefs about a particular subject, a set which includes not only many points of observational data but also beliefs relating different experiences, is an expert in the more typical sense of the word.
Practically everyone is competent to report on her sense-perception (what was seen, heard, smelled, etc.) and so is an expert on these things, but some people have expertise which goes beyond immediate and remembered sensation. Non-experts can nonetheless judge experts in terms of reliability: an expert produces goods or makes predictions which can be verified by the senses (whether directly or with instruments) and so builds a record of reliability (or not). An expert cook, for example, will be able to produce a certain taste in a dish.

4. A source must be unbiased. Even people or sources you can normally trust can be overcome by particular influences. For example, people will lie or distort what they say in order to protect their reputation or achieve something good for themselves. A particular form of bias arises when the source benefits from having you (the audience) believe what it says. When a source has a stake in getting you (the audience) to accept the belief, it is not neutral.

5. What a source says must be consistent with judgments given by unbiased experts in the subject being discussed. One way of telling that a source (whether your own experience, or another's experience, or the explanation of an expert) has gone awry is that it contradicts other beliefs. For example, if you seem to see pink elephants, you will likely reject the belief "There are pink elephants in front of me.", since it contradicts the strongly-held existing belief that there are no such things. Instead, you will suspect that you are dreaming, or hallucinating. More generally, even when a proposition comes from an expert, if the matter is controversial among experts, it is unwise to accept the claim.

6. In sum, whether a source is oneself, another person, or a scientist in a lecture or journal, the source must be an expert in the relevant field (that is, competent to judge and previously reliable) and unbiased (that is, have no stake in getting the audience to accept the judgment), and what the source says must be consistent with other judgments produced by unbiased experts.

4.4 Reason Substitutes

1. We now turn to the issue of checking the reasoning, rather than the truth of the reasons. We begin with simple passages or dialogues that contain really bad reasoning. People hold some beliefs without reasons. And people often forget the reasons for their beliefs. And even if they have reasons, it can be difficult work to produce them in an organized fashion. Further, people don't like to have their beliefs questioned.
And yet, people like to appear to have reasons, and when pressed will often say anything rather than admit ignorance. In this section we begin our discussion of evaluating arguments and explanations by examining some "reasons" that are better interpreted as ways to avoid giving reasons or to appear to have reasons.


2. A first tactic speakers can use is to assert that there is no need to make the reasons explicit. This attitude is exhibited by use of phrases such as "It's obvious that ..." or "There are too many reasons to enumerate ...". Speakers might even use abusive phrases such as "Only a fool could fail to know that ...". (Such phrases are often accompanied by assertive body language and a change in vocal intonation.) These kinds of phrases are also used as (pseudo) objections, meaning that they are used to assert that the original argument is bad but do so without offering any reasons for rejecting it. Consider the following:

Gill: The cruelty done to animals in factory farms is terrible. Since most of our meat comes from factory farms, we should start reducing our meat consumption right away.
Jack: That's the dumbest argument I've ever heard. I think you forgot to turn on your brain this morning.

Note that Jack hasn't really offered any reason for thinking Gill's argument is unsound. All he has done is assert that it is unsound in a rather abusive manner.

3. The related phenomenon of Shifting the Burden of Proof occurs when a person putting forward a proposition insists that the objector should provide reason(s) against it, and refuses to offer any reason(s) for the initial proposition.

Gill: We should go to Ireland for our summer holiday this year.
Jack: Oh yeah? Why's that?
Gill: Well, why shouldn't we?

Perhaps because she cannot produce reasons of her own, Gill thinks that it's Jack's job to convince her that they should not go to Ireland. In fact, the responsibility lies with her, since she is the one putting forward the new proposition (that they should holiday in Ireland).

4. Other types of reason substitute are not as easy to spot as dismissing the need to provide reasons. Indeed, they are often found convincing in practice. One way to appear to give reasons is to repeat one's claim in slightly different words. (This is a very basic version of a fallacy called begging the question or circular reasoning. Begging the question occurs when the target, or some aspect of it, is used as a reason, or is assumed by the reasons.) Consider the following example:

Henry: LeBron is the best player in basketball.
Bill: Oh yeah? Why's that?
Henry: There's simply no one out there who comes close.


Henry hasn't really given a reason to support his claim. All he has done is repeat the claim using different words.

5. Accusing someone of hypocrisy is a way to avoid engaging with an argument that a speaker has given. This is also called the "You do it too!" or "Look who's talking!" response.

Smith: You need to give up the smokes, Jones. Smoking cigarettes is terrible for your health. Causes lung cancer.
Jones: You're one to talk! I saw you puffing away at the bar last night.

Jones has charged Smith with hypocrisy: his actions don't match his words. But this only shows that Smith is unable to abide by his own argument, not that the argument is bad. (Similarly, speakers can be accused of having previously believed the opposite of what they are now arguing for. But changing one's mind is not a sign that the present argument is bad.) The lesson in all these cases is: Listen carefully. Make sure you get a reason(s).

6. A final strategy is probably the most common, perhaps because it straddles the border between reason substitute and viable reason: Offer some reason, any reason. Speakers know that it takes some work to pay attention to an argument or explanation and, consequently, they know that audiences might default to the easier task of making sure that the speaker sounds as though she is offering a reason. A speaker, accordingly, might throw out something as a reason. The longer and more complicated the reason is, the more convinced the (typical) audience might be that the speaker really does have a reason. Politicians are experts at this. Random reasons sound better, of course, when accompanied by confident gestures and tone. They also, remarkably, gain some weight merely from the use of flag words. In an experiment conducted by Ellen Langer and colleagues, people standing in a queue for a photocopier were approached by someone hoping to join the queue in front of them.
The person joining the queue said one of the following three things:

A: Excuse me, I have five pages. May I use the Xerox machine?
B: Excuse me, I have five pages. May I use the Xerox machine because I have to make some copies?
C: Excuse me, I have five pages. May I use the Xerox machine because I am in a rush?

Remarkably, the rates of acceptance were almost the same for B (93%) and C (94%), even though B does not offer a reason for jumping the queue, since anyone who
wants to use a copier wants to make copies. It is thought that the word "because" was somehow sufficient to persuade the people already in line; option A worked (only) 60% of the time. (Notice that option B also fails to add any information. Someone who entertains the proposition "You should let me join the queue for the photocopier ahead of you." is already entertaining the proposition "I [the speaker] have some copies to make.". A person would only want to join the queue for photocopying if she had copies to make. The argument thus begs the question.)

4.5 Evaluating The Various Reasoning Structures

1. With a little care, the basic questions ('Are the reasons true?' and 'Do these reasons justify or explain the target proposition?') can be applied to any of the reasoning structures you saw in chapter 3.

2. For any argument or explanation whose structure is diagrammed with a single regular arrow, such as just one reason or combined reasons, ask the basic questions.

3. For any argument or explanation whose structure is diagrammed with a split-tailed arrow (a pile of reasons) you must remember that it is possible that a sub-set of the reasons is sufficient. It is possible that the argument or explanation will still be good even if objections undermine one or more (but not all) of the reasons.

4. Conclusion Conjunction structures are evaluated by evaluating each argument or explanation separately.

5. Compound structures are evaluated by evaluating each stage of the reasoning in turn, each of which must do its job. This means that if any of them are rejected, the whole must be rejected. Consider the following example (the numbers in the diagram follow the numbers in the standard form):

(1) Honey is a fruit.
(2) Some fruits are sweet.
--------------------------
(3) Honey is sweet.
(4) Sweet things are good for you.
----------------------------------
(5) Honey is good for you.

1 + 2 --J--> 3 + 4 --J--> 5


The diagram can be read as: (1) and (2) together justify (3), which, together with (4), justifies (5). In evaluating this argument, we must examine both the support given by (1) and (2) to (3) and by (3) and (4) to (5). You can see that the reasoning in the first sub-argument is weak: (2) says that some fruits are sweet, and the first premise does not give us any reason to think that honey is one of the sweet ones. The reasoning in the sub-argument involving (3), (4) and (5) is strong. But since (3) is being used as a reason for accepting (5), the argument as a whole gives us no good reason for accepting (5), which is the main conclusion. For soundness, the reasoning in the argument must be strong throughout.

6. Passages involving objections (and support for and rebuttals to them) are evaluated in the same manner as compound structures, except that sub-structures are sometimes working against one another. Take a simple case in the abstract, which shows a challenge to the strength of the connection between the reasons and the target:

1 + 2 --J/E--> 3
        ^
        |
        4

The initial argument/explanation will be considered good if (1) and (2) are true and the connection between (1 + 2) and (3) is tight, and some fault is found with (4)'s challenge: either (4) is false, or it does not follow from the truth of (4) that the original connection is weak. When rebuttals to objections are added you must continue the process. For example, if a rebuttal (5) is added, challenging the truth of (4), as follows,

1 + 2 --J/E--> 3
        ^
        |
        4
        ^
        |
        5

you must check this specific piece of reasoning; that is, you must ask whether (5) is true and whether it would follow from the truth of (5) that (4) is not true. (And note that if (5) is effective in showing that (4) is not true, it has thereby helped you in the evaluation of (4)'s impact on the original reasoning.)


The messiest type of structure to evaluate is a pile of reasons (using the split-tailed arrow) both for and against, as in the following diagram:

2 3 4 5
  J/E
   1
6 7 8

(Here (2), (3), (4) and (5) are each offered in support of (1), while (6), (7) and (8) are offered against it.) In this case, you must examine each reason and the work it does in justifying or explaining or objecting, and decide whether the target proposition is justified or explained despite whatever weight there is to the objections.

4.6 Adding Warrants

1. In this section we introduce the strategy of looking at the warrant(s) in an argument or explanation, and adding one if absent. This strategy applies to both arguments and explanations and so more examples will be provided in chapters 5-7.

2. In chapter 3 we noted that a single line of reasoning can be expressed in more than one proposition. Compare the following two passages:

Jim is a dog. So/That's why he has a tail.

and

Jim is a dog. Most dogs have tails. So/That's why he has a tail.

Both passages contain the proposition "Jim is a dog.". This by itself might be enough to justify belief of the conclusion, but if you (as the audience) did not understand how Jim's being a dog was evidence that made his having a tail likely and said "So?", the speaker could add the proposition "Most dogs have tails." in order to make the connection between the reason and the target.

3. A warrant is any proposition which makes the reasoning explicit; that is, a warrant explains how to move from the specific reason(s) to the target. Each type of reasoning has its own warrants. In this chapter, however, we will speak only generally about warrants. Speakers might add the relevant warrants, but if they are missing from an argument or explanation, you can add a warrant(s).

4. Inserting a warrant when one is absent is a helpful strategy for thinking clearly about justification or explanation, because the proposition makes explicit something

you need to think about when determining whether there is a strong connection between the specific reason(s) and the target. Any proposition added must (i) either justify (in an argument) or explain (in an explanation) the target proposition and (ii) be true. If no proposition is available which will do both, the argument or explanation is bad.

5. Warrants need to be added to your analysis, whether in standard form or diagram form. Let's work through another example. The argument "Abortion kills the fetus. So, it is murder." in standard form is:

(1) Abortion kills the fetus.
J -------------------------------
(2) Abortion is murder.

And it is diagrammed as follows:

(1) Abortion kills the fetus.
(2) Abortion is murder.

1 J 2

You might try to evaluate the reasoning by adding the proposition "Killing a fetus is murder." as a warrant and evaluating its truth. The existing standard form would be modified as follows:

(1) Abortion kills the fetus. (3) Killing a fetus is murder.*
J -----------------------------------------------------------------------------
(2) Abortion is murder.

The new proposition is added anywhere above the line. In this case, it has been added off to the side and the line has been extended. It might also have been added above premise (1), or, if there had been sufficient space, between (1) and the line. Add the premise to the key, as proposition (3), and place an asterisk at the end, to indicate that it has been added by us. Write "+3", and add a set of parentheses, to "1" in the diagram. (Recall from chapter 3 that we use the plus-sign to group together the propositions which work together in a single line of support.) There is no need to re-center the arrow on the plus sign if it is too much trouble to do so. The original diagram becomes

(1) Abortion kills the fetus.
(2) Abortion is murder.
(3) Killing a fetus is murder.*

1+3 J 2

This additional premise makes for a strong connection between premises and conclusion, but its truth might be thought dubious. And so, the proposition we have added here fails the second condition for adding warrants. So, you must try to find another way to make the connection between premise (1) and the conclusion. (We will come back to this issue in the next section.)

6. Note that you might need more than one proposition in order to make explicit the conditions under which the evidence will count as strong support for the target. Consider the following argument:

Smith's finger-prints were found on the stolen computer. So, he stole it.

How would you make the connection between finger-prints and theft more explicit? You might do so in two stages: first, you might add that finger-prints uniquely identify a person, and so it follows that Smith touched the computer, and second, that Smith's prints only count as good evidence of theft if he had no good reason to touch the computer. The argument can be diagrammed as follows:

(1) Smith's finger-prints were found on the stolen computer.
(2) Finger-prints uniquely identify a person.
(3) Smith touched the computer.
(4) Smith had no good reason to touch the computer.
(5) Smith stole the computer.

1+2 J 3
3+4 J 5

In this argument, the specific information is that Smith's prints were found on the computer, and there are two steps taken in order to reach the conclusion that he committed the crime. First, an additional premise is added ("Finger-prints uniquely identify a person.") and the interim conclusion "Smith touched the computer." is drawn. This conclusion then acts as a premise in support of the ultimate conclusion. Dividing the argument into two parts helps you see why the argument is weak. The finger-prints do justify the claim that Smith handled the computer, but they do not justify the claim that he stole it: perhaps there are other prints; perhaps he has a reason for touching it.
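The two-stage structure of this argument can also be mimicked programmatically. The following Python sketch is our own illustration (it is not part of the text's method): each warrant is written as a rule licensing a step from given reasons to a further conclusion, and conclusions are drawn by simple forward chaining. All proposition labels are invented for the example.

```python
# Illustrative sketch: warrants as rules, applied by forward chaining.
# A rule (premises, conclusion) licenses the step from its premises
# to its conclusion, just as a warrant makes an inference explicit.

def chain(facts, rules):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Specific reasons given (or added) in the finger-print argument.
facts = {"prints on computer", "prints identify a person",
         "no good reason to touch"}

# The two warrants, made explicit as rules.
rules = [
    ({"prints on computer", "prints identify a person"},
     "Smith touched computer"),
    ({"Smith touched computer", "no good reason to touch"},
     "Smith stole computer"),
]

derived = chain(facts, rules)
print("Smith stole computer" in derived)  # prints True
```

Dropping "no good reason to touch" from the facts blocks the second step while leaving the first intact, which mirrors the point above: the evidence establishes that Smith handled the computer, but the further conclusion needs the further warrant.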
In the following dialogue, Gill forces Jack to be (over-)explicit:

Jack: Let's go see a movie.
Gill: Sounds good. What should we see?
Jack: We should see Snakes On A Plane.
Gill: Oh yeah? Why?
Jack: It stars Samuel L. Jackson.
Gill: So what?
Jack: We've enjoyed all of his movies up to now, so we'll enjoy this one too.
Gill: But does that mean we should see it?
Jack: Sure! We should see a movie we'll enjoy!

Gill forces Jack to explain how the fact that "Snakes On A Plane stars Samuel L. Jackson." supports the conclusion that they should go to see it. As a warrant, Jack offers a further premise, that they have enjoyed previous movies which starred Samuel L. Jackson, from which he concludes that they'll like Snakes On A Plane. The connection to the original conclusion is not yet made, since the original says that they should see the movie, whereas Jack has progressed only as far as the claim that they'll enjoy the movie. To completely close the gap, he adds the premise "We should see a movie we'll enjoy.". In a diagram:

(1) Jack and Gill should go see Snakes On A Plane.
(2) Snakes On A Plane stars Samuel L. Jackson.
(3) Jack and Gill have enjoyed any movie starring Samuel L. Jackson that they have seen previously.
(4) Jack and Gill will enjoy Snakes On A Plane.
(5) Jack and Gill should see a movie they will enjoy.

2+3 J 4
4+5 J 1

Jack first added a connecting proposition which explained how he was reaching the conclusion that they will enjoy Jackson's latest movie, and he then added a connecting proposition to take him from this to the conclusion. The original argument has been expanded into two sub-arguments, and each of them can be evaluated separately. In this case, Gill might agree that all of Jack's premises are true, but can hopefully see that the combination of (4) and (5) might not be enough to justify (1): there might be other movies that they will enjoy even more.

7. In a passage with multiple reasons, each line of reasoning can be made more explicit by adding a warrant, in order to think about the weight that each line of reasoning gives to the target. Compare

Jones saw Smith commit the crime. Smith, further, cannot provide an alibi. Together, these two pieces of evidence prove that Smith committed the crime.

with

Jones saw Smith commit the crime. Eyewitnesses are usually reliable. Smith, further, cannot provide an alibi. Failure to produce an alibi is a cause for


suspicion. Together, these two pieces of evidence prove that Smith committed the crime.

The second version is diagrammed as:

(1) Jones saw Smith commit the crime.
(2) Eyewitnesses are usually reliable.
(3) Smith cannot provide an alibi.
(4) Failure to produce an alibi is a cause for suspicion.
(5) Smith committed the crime.

1+2 J 5
3+4 J 5

Again, the point is that the lines of reasoning are made more detailed by adding a premise which connects the other reasons to the conclusion and which therefore helps us evaluate the strength of support given to the conclusion.

8. We end this section with a special case. Sometimes an apparent lack of connection in an argument or explanation is caused by variation in wording. Some variation in wording is to be expected in natural speech or writing; arguers add life to their arguments by varying the words they use, even though they intend to refer to the same things. In making sure that the premises are linked, do not add an additional proposition; rather, pick a single wording for the propositions involved. Consider this slight variation of the argument about unemployment and crime:

Unemployment is currently rising. When unemployment rises, crime rates tend to rise. So, crime rates will soon go up.

In this case, the variation is between words that are equivalent ("rises" and "go up") and you simply have to pick one. However, often the words which vary are not exactly equivalent. For example, consider this argument:

Jack has the hots for Gill. Any girl Jack likes, Jack asks out. So, Jack will ask Gill out.

The speaker switches from talking about Jack's having the hots for Gill (in the first premise) to his liking Gill (in the second). The premises will work together to support the conclusion if you connect "has the hots for" and "likes". But although close in meaning, they are not the same: a person could like someone without having the hots for the other person. It might be that the reasoner meant to use exactly those words, and is relying on the audience to supply a premise which connects them ("If one person has the hots for another, the one person likes the other.") or, more likely, he simply varied his language. In that case, you must pick whichever you think is most appropriate

Again, the point is that the lines of reasoning are made more detailed by adding a premise which connects the other reasons to the conclusion and which therefore helps us evaluate the strength of support given to the conclusion. 8. We end this section with a special case. Sometimes an apparent lack of connection in an argument or explanation is caused by variation in wording. Some variation in wording is to expected in natural speech or writing; arguers add life to their arguments by varying the words they use, even though they intend to refer to the same things. In making sure that the premises are linked, do not add an additional proposition; rather pick a single wording for the propositions involved. Consider this slight variation of argument about unemployment and crime: Unemployment is currently rising. When unemployment rises, crime rates tend to rise. So, crime rates will soon go up. In this case, the variation is between words that are equivalent ("rises" and "go up") and you simply have to pick one. However, often the words which vary are not exactly equivalent. For example, consider this argument: Jack has the hots for Gill. Any girl Jack likes, Jack asks out. So, Jack will ask Gill out. The speaker switches from talking about Jack's having the hots for Gill (in the first premise) to his liking Gill (in the second). The premises will work together to support the conclusion if you connect "has the hots for" and "likes". But although close in meaning, they are not the samea person could like someone without having the hots for the other person. It might be that the reasoner meant to use exactly those words, and is relying on the audience to supply a premise which connects them ("If one person has the hots for another, the one person likes the other.") or, more likely, he simply varied his language. In that case, you must pick whichever we think is most appropriate 75

(perhaps the arguer really meant to say "has the hots for" both times, since Jack doesn't just ask out girls he likes), or, as a default, pick the broader term (in this case "likes"), or come up with a third word which splits the difference (perhaps "is attracted to"). Using "likes", the initial analysis looks as follows:

(1) Jack has the hots for Gill. (2) Any girl Jack likes, Jack asks out. (So), (3) Jack will ask Gill out.

(1) "has the hots for" = "likes"

And in standard form:

(1) Jack likes Gill.
(2) Any girl Jack likes, Jack asks out.
J -------------------------------------------
(3) Jack will ask Gill out.

However, you must use discretion when faced with variation, since what might appear at first glance to be mere variation could in fact be an important distinction. For example, "knows" and "believes" are often used equivalently, but in some contexts (as when arguing about whether a person is responsible for some important event), they might be crucially different.

4.7 Sincerity & Charity

1. With respect to evaluating the truth of the premises or explainers, both those which appeared in the original passage and any added by you, note that the reasoner might take a premise to be true but the audience might take it to be false, and vice versa. For example, non-believers might attempt to convince believers on religious grounds. If the object of arguing or explaining is simply to get the audience to accept a proposition as true or accept some proposition as a good explanation, the reasoner can use propositions which the audience accepts as true. If, on the other hand, the object of these activities is to build shared meaning between reasoner and audience, the reasoner should use reasons which both she and the audience believe.

2. If the speaker provides a poor piece of reasoning, what we might call the principle of sincerity demands that you take the reasoner at his word, even if this makes it easier to show that the argument or explanation is bad.
Consider the example we introduced in the previous section: Abortion kills the fetus. So, abortion is murder.


Perhaps, when prompted by the audience, the speaker adds "Killing is murder." as a warrant. The premise is now connected to the conclusion, but the additional premise is false. You must nonetheless take the reasoner at his word and show him that the premise is false and, so, the argument is not a good one.

Another way in which you (the audience) must respect the speaker is that you should not change or throw out any of the propositions that are presented. Consider the following:

If it is raining heavily, the game is cancelled. The game is cancelled. So, it is raining heavily.

You will see (especially after reading this book) that the reasoning in this argument is weak (since there are other common reasons for cancelling games) and will be tempted to reverse the first premise and say that the arguer "must have meant" to say "If the game is cancelled, it is raining heavily.". However, this would mean getting rid of one of the premises given explicitly. That is, you would throw away the first premise ("If it is raining heavily, the game is cancelled.") and add another one ("If the game is cancelled, it is raining heavily."). In cases like this, it is better simply to point out the weakness of the original argument.

3. When the speaker is not present you have to take over the argument or explanation as your own. One thing you can do, however, is to try to think of information about the speaker that will help add the missing proposition(s). For example, you might know something about the speaker's other views which would help you construct the explanation or argument in the way that he would have wanted. To return to the argument that abortion is murder because it kills the fetus, if the arguer has been known to sincerely express the beliefs that fetuses are innocent and that killing innocents is murder, you might fill in the argument with those premises.

4.
If you do not know how the arguer might have made the reasoning explicit, you can take up the argument on its own terms and try to make it explicit. To repeat from the previous section, inserted warrants should (i) make a strong connection between the given reason(s) and the target so as to either justify (in an argument) or explain (in an explanation) the target and (ii) be true. To supply a principle that is false or makes the reasoning poor, and then claim that this is the speaker's reasoning, commits a version of the straw man fallacy and violates the principle of charity. (The basic version of the straw man


fallacy is to misrepresent an original argument in a way that makes it easy to attack. We, however, are considering cases where the given argument is incomplete.) We might supply a false or dubious premise because we do not want to accept the conclusion or explanation, perhaps because it challenges a cherished belief. This kind of reasoning has been biased by wishful thinking or self-interest.

5. As an example, let us suppose that you are left with

Abortion kills the fetus. So, abortion is murder.

In considering the argument, you try to add premises that are true and that provide a strong connection. If you cannot think of any premises which satisfy both of these requirements, the argument is unsound. There are a number of possibilities. We've already considered the idea that all killing is murder, but this will be considered false by practically everyone: there are types of killing that are not considered murder (most widely, killing in self-defense). Perhaps, then, you should weaken the principle to "Most killings are murder."? Or should the principle be, more specifically, "(All/Most) killings of fetuses are murder."? Perhaps the premises to be supplied are "Fetuses are innocent." and "Killing innocents is murder.". You can try each of these options in turn, in order to construct the strongest argument or explanation possible.

We will provide more examples of adding warrants in future chapters. So far, we have simply said that the warrant must "make a line of reasoning explicit" between the given reason(s) and the target. Chapter 5 begins our discussion of what this means, more precisely, and poses the question of how we arrive at these warrants, which is taken up in the subsequent chapters.


Chapter 5 Basic Evaluation Of Arguments

5.1 Soundness

1. The propositions in an argument can be separated into two groups: the premise(s) and the conclusion. The premises are supposed to be sufficient reason for thinking that the conclusion is true; they are supposed to be sufficient evidence for the truth of the conclusion; they are supposed to justify the conclusion, establish it, prove it, demonstrate it, and so on; the conclusion is supposed to be justified by, follow from, be inferred from (and so on) the premises.

To repeat the slogan from chapter 4, you must check the reasons, and check the reasoning. When applied to arguments, the two basic questions to ask are "Are the premises true?" and "Do the premise(s) provide strong justificatory support for the conclusion?". An argument is sound when you accept (i) that the premises are true and (ii) that the premises give strong justificatory support to the conclusion. An argument is unsound when either (i) at least one of the premises is false or (ii) the inference is weak. (We will use "sound", "good" and "convincing" as synonyms, and also "fallacious", "bad" and "unconvincing".)

Evaluation of the truth of the premises was covered (as much as is possible) in chapter 4. In this chapter we focus on the second criterion, namely, evaluation of the reasoning: of whether or not the premises justify the conclusion, of the justificatory inference. We will distinguish cogent (and incogent) reasoning from valid (and not valid) reasoning. We will start with validity and then move to cogency.

5.2 Validity

1.
Here are a variety of ways in which the concept of validity has been expressed:

(i) valid = if the premises were true, then the conclusion would have to be true
(ii) valid = it is impossible to consistently both (i) accept the premises and (ii) reject the conclusion
(iii) valid = it is impossible for the premises to be true and the conclusion false
(iv) valid = it is impossible to imagine a scenario (even fictional) in which the premises are true and the conclusion is false
(v) valid = the conclusion is true in every imaginable scenario in which the premises are true
(vi) valid = it is impossible to write a consistent story (even fictional) in which the premises are true and the conclusion is false
(vii) valid = the conclusion follows conclusively from the premises
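For arguments built out of simple propositions, formulations (iii) through (v) can even be checked mechanically: enumerate every possible scenario and look for one in which the premises are true and the conclusion false. The following Python sketch is our own illustration (not part of the text); it tests the rain-and-cancellation argument discussed in section 4.7 ("If it is raining heavily, the game is cancelled. The game is cancelled. So, it is raining heavily.").

```python
# Illustrative sketch: validity as the absence of a counterexample scenario.
# An argument is valid iff there is no assignment of truth values under
# which all premises are true and the conclusion is false.

from itertools import product

def is_valid(premises, conclusion, atoms):
    """premises and conclusion are functions from a scenario (dict) to bool."""
    for values in product([True, False], repeat=len(atoms)):
        scenario = dict(zip(atoms, values))
        if all(p(scenario) for p in premises) and not conclusion(scenario):
            return False  # found a scenario with true premises, false conclusion
    return True

# "If it is raining heavily, the game is cancelled. The game is
# cancelled. So, it is raining heavily."
premises = [
    lambda s: (not s["raining"]) or s["cancelled"],  # if raining then cancelled
    lambda s: s["cancelled"],
]
conclusion = lambda s: s["raining"]

print(is_valid(premises, conclusion, ["raining", "cancelled"]))  # prints False
```

The scenario in which the game is cancelled for some other reason (cancelled true, raining false) is exactly the counterexample the search finds, so the reasoning is not valid. Swapping the second premise for "It is raining heavily." and the conclusion for "The game is cancelled." yields a valid form, and the search finds no counterexample.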


Note first that although many of these formulations use the words "true" and "false" or "truth" and "falsity", they are not telling you to evaluate the truth or falsity of the premises and conclusion; that is a separate task, which we discussed in chapter 4. Validity does not concern the actual truth or falsity of the premises. Rather, when you determine whether the reasoning in an argument is valid or cogent you suppose that the premises are true, and then ask whether, if true, they would make the conclusion true. That is, you are evaluating the connection between the premises and conclusion, not the premises or conclusion themselves. You might think that they are false, or just not know whether they are true or false, and the inference (the reasoning) could still be valid. (The same will be true of cogent arguments: an argument can be cogent even though you believe the premises are false or are ignorant about whether they are true or false.)

In general, then, the actual truth or falsity of the premises, if known, does not tell you whether or not an argument is valid or cogent or incogent. This is important to keep in mind because people naturally but mistakenly use their opinions as to the truth or falsity of the premises as a guide to whether or not the reasoning is valid. (The same is especially true about the conclusion, if the audience already has an opinion about the conclusion. What is known as the fallacy of mistaking the conclusion for the argument occurs when people assume that because they take the conclusion to be true, the reasoning is good and even that the premises are true, or when they assume that because a conclusion is false, the reasoning is bad and even that the premises are false.)

Here is an argument with a false premise, but which has valid reasoning:

(1) Every Irishman drinks Guinness.
(2) Smith is an Irishman.
J -------------------------------------------
(3) Smith drinks Guinness.

The first premise is, in the real world, false.
And yet the inference in the argument is valid: if the premises were true, the conclusion would have to be true. The argument's reasoning is valid, even though a premise is false.

Now consider an argument with reasoning that is not valid:

An economic stimulus package will allow the U.S. to avoid a depression. Since there is no economic stimulus package, the U.S. will go into a depression.

This reasoning is not valid since the premises do not definitively justify the conclusion. To see this, assume that the premises are true and then ask, "Is it possible that the conclusion


could be false in such a situation?". There is no inconsistency in taking the premises to be true without taking the conclusion to be true. The first premise says that the stimulus package will allow the U.S. to avoid a depression, but it does not say that a stimulus package is the only way of avoiding a depression. Thus, the mere fact that there is no stimulus package does not necessarily mean that a depression will occur.

Here is another example:

If the U.S. economy were in recession and inflation were running at more than 4%, then the value of the U.S. dollar would be falling against other major currencies. But this is not happening: the dollar continues to be strong. So, the U.S. is not in recession.

Taken as an argument, the conclusion is "The U.S. economy is not in recession.". The conclusion does not follow necessarily from the premises. The premises entail that either (i) the U.S. economy is not in recession or (ii) inflation is not running at more than 4%, but they do not entail just (i). For all the premises say, it is possible that the U.S. economy is in recession but inflation is less than 4%. So, the argument does not necessarily establish that the U.S. is not in recession.

2. Whenever you conclude that the reasoning in an argument is not valid, you must then go on to ask whether it is cogent or incogent. There are thus three possible results of the evaluation of the premises' support for the conclusion: the reasoning is valid, cogent or incogent. Arguments that are cogent or incogent are understood not to be valid, and there is no need to state this explicitly.

3. As we will see in the next section, most (successful) arguments have cogent rather than valid reasoning and most (attempted) arguments aim at cogency rather than validity. This is because most arguments deal with the natural world and our knowledge of the world is imperfect. There are a number of contexts, however, in which validity is possible.
Science has uncovered some fixed truths about the world, and when these are used as the warrant(s) in an argument they can make the argument valid, or very nearly so. Another context in which validity is achieved is when an argument has a warrant restricting the domain to a limited range of options and giving fixed meanings to entities, as systems of rules and regulations attempt to do: grammatical rules, logical rules, the legal code or the tax code. A final possibility is that an arguer can suppose that a relationship holds definitely, and this can be done, rightly or wrongly, in any domain at all;


these arguments will be valid, but might well rest on a false premise. Examples of each of these follow.

First, consider the following passage, which involves a scientific law:

Jack is about to let go of Jim's leash. The operation of gravity makes all unsupported objects fall toward the center of the Earth. Nothing stands in the way. Therefore, Jim's leash will fall.

In standard form, the argument is represented as follows:

(1) Jack is about to let go of Jim's leash.
(2) The operation of gravity makes all unsupported objects fall toward the center of the Earth.
(3) Nothing stands in the way of the leash falling.
J ------------------------------------------------------------------------------------------------------
(4) Jim's leash will fall toward the center of the Earth.

In this argument, the justificatory support given to the conclusion by the premises is as strong as it can be. That is, if you pretend that they are true or accept them "for the sake of argument", you would necessarily also accept the conclusion. Or, to put it another way, there is no way in which you could hold the premises to be true and the conclusion false. This argument has valid reasoning. (However, you might be worried that a strong wind might spring up, or some other odd event might occur. So, perhaps the inference is not valid.)

Here is an example in which the context is an artificial code, the tax code:

A tax credit for energy-efficient home improvement is available at 30% of the cost, up to $1,500 total, in 2009 & 2010, ONLY for existing homes, NOT new construction, that are your "principal residence", for Windows and Doors (including sliding glass doors, garage doors, storm doors and storm windows), Insulation, Roofs (Metal and Asphalt), HVAC: Central Air Conditioners, Air Source Heat Pumps, Furnaces and Boilers, Water Heaters: Gas, Oil, & Propane Water Heaters, Electric Heat Pump Water Heaters, Biomass Stoves.
This rule describes the conditions under which a person can and cannot take a certain tax credit. Such a rule can be used to reach a valid conclusion that the tax credit can, or cannot, be taken.

As another example of an argument in an artificial situation with limited and clearly defined options, consider a Sudoku puzzle. The rules of Sudoku are that each cell contains a single number from 1 to 9, and each row, each column and each 9-cell square contains one occurrence of each number from 1 to 9. Consider the following partially completed board:

[Partially completed Sudoku board]

The following argument can be used to argue that, in the first column, a 9 must be entered below the 7:

The 9 in the first column must go in one of the open cells in the column. It cannot go in the third cell in the column, because there is already a 9 in that 9-cell square. It cannot go in the eighth or ninth cell because each of these rows already contains a 9, and a row cannot contain two occurrences of the same number. Therefore, since there must be a 9 somewhere in this column, it must be entered in the seventh cell, below the 7.

The reasoning in this argument is valid: if the premises are true, then the conclusion must be true. Logic puzzles of all sorts operate by artificially restricting the available options in various ways. This then means that the conclusions arrived at (assuming the reasoning is correct) are necessarily true.

A final possibility for valid arguments occurs when the arguer uses premises which are supposed, falsely, to be universal in scope, or which falsely describe limited options or 100% definite connections. We'll consider these at the end of 5.4.

5.3 Cogency

1. The reasoning in an argument is cogent when the premises give very strong, though not conclusive, justification for the conclusion. Or, to put it another way, the premises, assuming they are true, would make the truth of the conclusion very likely, though not necessary. If the premises only weakly support the conclusion, such that you are not prepared to believe the conclusion, the argument is incogent.

Whereas there are only two options with respect to validity (valid or not valid), cogency and incogency are a matter of degree. The degree of confidence you have in the


conclusion on the basis of the premises offered can vary widely. Consider the following arguments:

(a)
(1) 92% of Republicans from Texas voted for Bush in 2000.
(2) Jack is a Republican from Texas.
J ------------------------------------------------------------------------
(3) Jack voted for Bush.

(b)
(1) Just over half of drivers are female.
(2) There's a person driving the car that just cut me off.
J --------------------------------------------------------------------
(3) The person driving the car that just cut me off is female.

Note that the premises in neither (a) nor (b) guarantee the truth of the conclusion. Thus, neither (a) nor (b) is valid. For all the premises in (a) say, Jack is part of the 8% of Republicans from Texas not voting for Bush; perhaps, for example, Jack soured on Bush, but not on Republicans in general, when Bush served as governor. Likewise for (b).

In the majority of arguments, if the premises succeed in convincing you of the conclusion, they convince you that the conclusion is very likely to be true (the reasoning is cogent), rather than that the conclusion must be true (the reasoning is valid). Neither argument has valid reasoning, but there is a big difference between how much support the premises give to the conclusion in (a) and how much they do so in (b). The premises in (a), assuming they are true, give us very strong reasons to accept the conclusion: in a cogent argument, the premises, assuming they are true, would make the conclusion very likely to be true, and (a)'s reasoning is cogent. This is not the case with (b): if the premises in (b) were true, they would give only weak reasons for believing the conclusion. The degree of support given to the conclusion by the premises in (b) is not enough to give you sufficient confidence in the truth of the conclusion. This reasoning is incogent.
That is, when we assume the premises to be true, the premises do not make the conclusion probably true.

2. There is no firm way to say when a conclusion is very likely to be true, or when it is not. For example, consider the argument about whether Jack, a Texas Republican, voted for Bush. If 92% of Texas Republicans voted for Bush, the conclusion, if the premises are granted, would very probably be true. But what if the number were

(b)

85%? Or 75%? Or 65%? Would the conclusion very likely be true? Similarly, the argument in (b) involves a percentage greater than 50%, but this does not seem sufficient. At what point, however, would it be sufficient? In order to answer this question, go back to basics and ask yourself: "If I accept the truth of the premises, would I then have sufficient reason to believe the conclusion?". If you would not feel safe in adopting the conclusion as a belief as a result of the argument, then you think the argument is incogent; that is, you do not think the premises give sufficient support to the conclusion.

Note that the same argument might be incogent in one context but cogent in another, because the degree of support needed changes. For example, if you merely have a deposit to make, you might accept that the bank is open on Saturday based on your memory of having gone to the bank on Saturday at some time in the past. If, on the other hand, you have a vital mortgage payment to make, you might not consider your memory sufficient justification. Instead, you will want to call up the bank and increase your level of confidence in the belief that it will be open on Saturday.

3. Most arguments (if successful) have cogent rather than valid reasoning. This is because they deal with situations which are in some way open-ended or where our knowledge is not precise. In the example of Jack voting for Bush, we know only that 92% of Texas Republicans voted for Bush, and so there is no definitive connection between being a Texas Republican and voting for Bush. Further, we have only statistical information to go on. This statistical information was based on polling or surveying a sample of Texas voters and so is itself subject to error (as we'll discuss in chapter 6). A more precise version of the premise might be "92% ± 3% of Texas Republicans voted for Bush.".

5.4 Validity & Cogency Contrasted

1.
At the risk of redundancy, let's pause to consider a variety of examples of valid, cogent and incogent arguments, already in standard form.

(a)
(1) David Duchovny weighs more than 200 pounds.
J ---------------------------------------------------------------
(2) David Duchovny weighs more than 150 pounds.


The reasoning in (a) is valid. It is valid because of the number system (here applied to weight): 200 is more than 150. It might be false, as a matter of fact, that David Duchovny weighs more than 200 pounds, and false, as a matter of fact, that David Duchovny weighs more than 150 pounds. But if you suppose or grant or imagine that David Duchovny weighs more than 200 pounds, it would then have to be true that David Duchovny weighs more than 150 pounds.

(b)
(1) Armistice Day is November 11th, each year.
(2) Halloween is October 31st, each year.
J ---------------------------------------------------------
(3) Armistice Day is later than Halloween, each year.

This reasoning is valid. It is valid because of the order of the months in the Gregorian calendar and the placement of the New Year in this system.

(c)
(1) All men are mortal.
(2) Professor Pappas is a man.
J -----------------------------------
(3) Professor Pappas is mortal.

As written, this argument's reasoning is valid. If you accept for the sake of argument that all men are mortal (as the first premise says) and likewise that Professor Pappas is a man (as the second premise says), then you would have to also accept that Professor Pappas is mortal (as the conclusion says). You could not consistently both (i) affirm that all men are mortal and that Professor Pappas is a man and (ii) deny that Professor Pappas is mortal. If a person accepted these premises but denied the conclusion, that person would be making a mistake in logic. This argument's validity is due to the fact that the first premise uses the word "all". You might, however, wonder whether or not this premise is true, given that we believe it to be true only because it rests on experience of men in the past. This might be a case of over-stating a premise, which we mentioned earlier and will discuss in a little more detail in the next section, on warrants in arguments.

(d)
(1) In 1933, it rained in Columbus, Ohio on 175 days.
(2) In 1934, it rained in Columbus, Ohio on 177 days.
(3) In 1935, it rained in Columbus, Ohio on 171 days.
J ----------------------------------------------------------------
(4) In 1936, it rained in Columbus, Ohio on at least 150 days.


The reasoning in this argument is cogent. The premises establish a record of days of rainfall that is well above 150. It is possible, however, that 1936 was exceptionally dry, and this possibility means that the reasoning does not achieve validity.

(e)
(1) The Bible says that homosexuality is an abomination.
J ---------------------------------------------------------------------
(2) Homosexuality is an abomination.

This argument is an appeal to a source. As described in 4.5, to evaluate the reasoning you should think about whether the source is reliable, is biased, and whether the claim is consistent with what other authorities on the subject say. For many people, the argument fails the first of these criteria, but some people might think that most of what the Bible says is true or even that everything the Bible says is true. Even for these people, however, there are reasons to suspect bias. And other experts in the field disagree. (e) is incogent.

(f)
(1) Some professional philosophers published books in 2007.
(2) Some books published in 2007 sold more than 100,000 copies.
J --------------------------------------------------------------------------------
(3) Some professional philosophers published books in 2007 that sold more than 100,000 copies.

This reasoning is incogent. Both premises use the word "some", which doesn't tell you how many professional philosophers published books and how many books sold more than 100,000 copies in 2007. This means that you cannot be confident that even one professional philosopher sold more than 100,000 copies.

(g)
(1) Lots of Russians prefer vodka to bourbon.
J -------------------------------------------------------
(2) George Bush was the President of the United States in 2006.

No one (in their right mind) would make an argument like this. It is presented here as an example only: it is clearly incogent. It's hard to see how the premise justifies the conclusion to any extent at all.

In sum, (a), (b) and (c) are valid arguments, whereas (d) is cogent, and (e), (f) and (g) are incogent.

2. Let's summarize (yet again). This chapter focuses on the quality of the reasoning in an argument, that is, the strength of the justificatory support given by the


premises to the conclusion. When you evaluate this justificatory support, you assume that the premises are true. Then, if the premises would mean that the conclusion cannot be false, the reasoning is valid. If the support is strong, with only a small possibility that the conclusion could be false, the argument is cogent. If the support is weak, the argument is incogent. And as was stated in chapter 4 and at the beginning of this chapter, sound arguments, in addition to being either valid or cogent, have true premises. Thus, you can find fault with an argument on at least two distinct fronts. First, you can fault the truth of the premises: one can point to a premise, and argue that it is false. Second, you can criticize the alleged support relation between the premises and conclusion: one can argue that it is incogent. But notice, each kind of criticism leaves the other untouched. In particular, note that the falsity of one (or more) of the premises does not mean that the argument is incogent. Whether the premises are true, on the one hand, and whether there is a tight connection between the premises and conclusion, on the other hand, are separate issues.

3. We'll end this summary by advising you to ignore what the arguer thinks about the strength of the support. When making an argument, a speaker will often add a word or phrase which tells the audience that the speaker thinks the conclusion follows necessarily or probably from the premises. There are a potentially infinite number of such words or phrases. The following are some words or phrases which indicate that the arguer thinks that the conclusion follows necessarily and the argument is valid:

it must be the case that . . .
necessarily . . .
certainly . . .
it can be deduced that . . .

The following are some words or phrases which indicate that the arguer thinks that the conclusion follows probably and the argument is cogent:

it is probably the case that . . .
it is highly likely that . . .
in all likelihood . . .

Consider the following argument:

We've observed lots of swans, and in lots of different places, and each swan we observed was white. So, we can conclude with certainty that all swans are white.


The words "with certainty" make it clear that the arguer takes the reasoning to be valid. But notice, the arguer is incorrect in thinking this: for all the premises say, there could be other swans, which were not observed, and which were not white. In general, ignore what the arguer thinks about the strength of his own argument. It is up to you to decide whether the reasoning is valid, cogent or incogent. Don't allow yourself to be influenced by what the arguer thinks about the strength of the connection between premises and conclusion. (The above is also true, with obvious changes, for explanations. Phrases which might be used are "It's obviously because . . .", "It can only be because . . ." and so on.)

5.5 Adding Warrants To Arguments

1. In 4.6 we introduced the idea of inserting a warrant(s), if absent, in order to make explicit the move from the reasons to the target proposition. Here are two examples:

(i) Potatoes are vegetables. So, they [potatoes] are good for you.
(ii) Unemployment is rising. So, crime rates will increase.

In (i), the premise mentions vegetables, while the conclusion mentions 'being good for you'. In evaluating the inference you could add the warrant "All vegetables are good for you.". In (ii), the warrant would connect rising unemployment and rising crime rates. In standard form, the arguments now look like these:

(1) Potatoes are vegetables.
(3) Vegetables are good for you.*
J -------------------------------------------------
(2) Potatoes are good for you.

and

(1) Unemployment is rising.
(3) Unemployment tends to lead to an increase in crime.*
J -------------------------------------------------
(2) Crime rates will increase.

The modified diagram for the second argument, here with the proposition list, is:

(1) Unemployment is rising.
(2) Crime rates will increase.
(3) Unemployment tends to lead to an increase in crime.*

1+3
 ↓ J
 2

When adding a warrant(s), you attempt to add a proposition(s) that is true and which makes the reasoning at least cogent. If you cannot think of a warrant that is both true and makes the argument at least cogent, it is likely that the argument is bad. The warrant added to the first argument in fact makes it valid, because it says that all vegetables are good for one, and is plausibly true. The warrant added to the second argument makes it cogent, but you might be skeptical about its truth. If you had added "Unemployment always leads to an increase in crime." the argument would have been made valid, but the premise would be false or at least dubious. It is very often the case that a warrant which would make an argument's reasoning valid is also false, and that you should instead add a warrant that will make the reasoning cogent.

2. It often happens that humans act as if they had conclusive proof of their conclusions by using an overly strong warrant. Compare the following arguments:

(a)
(1) All dogs have tails.
(2) Jim is a dog.
J -------------------------
(3) Jim has a tail.

(b)
(1) Almost all dogs have tails.
(2) Jim is a dog.
J ----------------------------------
(3) Jim has a tail.

The sole difference between the two arguments is that where (a) has "all" in its first premise, (b) has "almost all". This difference makes (b) cogent while (a) is valid. However, although the degree of support in (a) is stronger than in (b), the premise in (a) is false: not all dogs have tails, and so argument (a) fails the first criterion of argument evaluation, that the premises must be true.

Here is another example, involving a different type of reasoning. In it, Jack tries to limit the available options:

One of us needs to take out the trash right before lunch. And since I (Jack) am busy until lunch, you will have to take it out, Gill.

In standard form:


(1) At least one of Jack or Gill needs to take out the trash before lunch.
(2) Jack is busy until lunch.
J -----------------------------------------------------------------------------
(3) Gill will take the trash out.

With these premises, Jack's reasoning is valid: if you imagine or suppose that the premises are true, then the conclusion would also have to be true. For, if the trash must be taken out by either Jack or Gill, and Jack is unavailable, then the task must fall to Gill. But notice that Jack's argument now fails the first criterion of evaluation, that the premises must, in fact, be true. Are there only these two options? There might be other people who can take it. This kind of premise is called a false choice and the argument in which it appears is a false dilemma. Here is another example:

Jack's keys are not by the door. Either Jack's keys are by the door or else they are in his coat pocket. Therefore, they must be in his coat pocket.

The second proposition states the available options, but if it is possible that Jack's keys are in some third place, the premise presents a false choice.

3. One poor way to form a warrant is to use the "If . . . , then . . ." construction. The premise(s) are entered after the word "if" (conjoined by "and" if there is more than one premise) and the conclusion is entered after the word "then". Consider the following example:

Young people these days are distracted by mobile phones. I predict that their performance in school will fall.

In order to evaluate the support given to the conclusion in this argument you might insert the connecting premise "If young people these days are distracted by mobile phones, then their performance in school will fall." and in standard form the argument would look like this:

(1) Young people these days are distracted by mobile phones.
(2) If young people these days are distracted by mobile phones, then their performance in school will fall.*
J --------------------------------------------------------------------------------------------
(3) The performance of young people in school will fall.

To the argument "Unemployment has risen steadily over the last 12 months. So, crime rates will continue to rise.", you might use the "if . . . , then . . ." strategy to add the premise "If unemployment has risen steadily over the last 12 months, then crime rates will continue to rise.".


A warrant constructed by using "if . . . , then . . ." along with the actual premise(s) and conclusion has the benefit of making the argument valid. Notice that in the argument just above, about mobile phones, the conclusion must be accepted, if you accept the premises. However, making the reasoning valid might come at the expense of the truth of the premises, as we were discussing just above (sub-section 2). A further problem with this strategy is that inserting a connecting premise that uses exactly the propositions in the other premises only asserts what you already know, namely that the premise is supposed to justify the conclusion. It doesn't say anything further about the relationship between the premise(s) and conclusion which might ease your worries about the connection. Nonetheless, "If . . . , then . . ." propositions of this kind can be a useful starting place when thinking about the connection between reasons and the target proposition. (The "If . . . , then . . ." construction is called material implication and is important in deductive logic. See chapters 11, 12 and 13.)

4. Another type of warrant between premise(s) and conclusion is a conditional proposition expressing a rule which says something about the types of item mentioned in the premise(s) and conclusion. The general principle in the tax code example, from above, would be "Any person who satisfies conditions x, y, z can take a deduction.". This premise refers to people generally, and not any specific person. Jack can then argue that, since he satisfies conditions x, y, z, he can take a deduction. Similarly, consider the Sudoku board shown earlier:

The rules that govern Sudoku apply to all parts of the board generally. They can then be applied to any specific part. For example, in arguing that a 9 must go below the 7 in the first column, the particular information involved is the current state of the board (and in particular that there is a 5, a 6, an 8, a 4, and a 7 in the first, second, fourth, fifth and sixth cells of the first column, respectively, and that there are 9s in other columns on the third, eighth and ninth rows). The warrants are the rules of Sudoku: that each cell contains a number from 1 to 9, and that each row, each column and each 9-cell square contains one occurrence of each number from 1 to 9 (though the proposition that each 9-cell square contains one occurrence of each number from 1 to 9 is not necessary for this specific argument). These rules are then applied to the specific cell in question.

5. When it comes to arguments about real-world items rather than artificial codes or puzzles, the warrant usually involves words such as "most" or "tends to" rather than "all" or a definitive "is". That is, the warrant is not universal but is instead near-universal. We have seen an example of this already: to the simple argument "Jim is a dog. So, he has a tail." we added "Jim is a dog. Most dogs have tails. So, Jim has a tail.". Notice the difference between this and "All dogs have tails.". This proposition would make the reasoning valid, but it is false. "Most dogs have tails." makes the argument cogent, and so the conclusion is supported strongly, and this proposition is also true.

6. How are universal or near-universal propositions generated? In the case of conventions such as the tax code or logic puzzles, they can be created arbitrarily or in response to some need. Most warrants, however, are generated from experience of the world. Chapters 6, 7, and 8 describe how we gather and evaluate data in order to generate general propositions which can be used in arguments and explanations.


PART 2

INDUCTION & SCIENTIFIC REASONING

Chapter 6 Induction

6.1 Introduction

1. This chapter begins our discussion of induction. Induction is the process of justifying quantified, categorical generalizations such as "All dogs like hot dogs." and "92% of Canadian adults are owners of a mobile phone." They are categorical generalizations in that the subject in each is some category or class or type of thing, rather than a specific case or instance of that type. For example, "Jim loves chasing squirrels." is about a specific dog, Jim, while "Most dogs love chasing squirrels." is about dogs in general. To say that a proposition is quantified means that it specifies what proportion or percentage of instances of the type have the predicate. "Nine out of ten dentists brush with Oral-B toothbrushes." tells us that the percentage of dentists who use an Oral-B toothbrush is 90%. If the quantity is "All" or 100%, or "None" or 0%, the proposition is a universal generalization.

2. Universality is rarely the case. As was described in chapter 5, it is seen mostly in conventions (the tax code, the legal code, rules of puzzles, etc.) though also in some scientific propositions (a.k.a. laws). What we more often get is a proposition describing a probabilistic relation, e.g. if F is present, G is present in, say, 90% of cases. We are happy if the frequency of joint appearance or non-appearance is very high or near-universal.

3. As also mentioned in chapter 5, speakers often exaggerate their warrants in order to ensure that the justificatory or explanatory power of the passage is as strong as possible, even though this means that the connecting proposition is false. Consider Smith's words in the following explanatory passage:

Smith: I had an excellent pizza yesterday.
Jones: I'm glad to hear it. Why was it excellent?
Smith: I went to Adriatico's. They always make a great pizza.

Here, Smith explains why the pizza was excellent: it was made at Adriatico's, where the pizza, he claims, is always great. It's not likely that the pizza is great every single time; he is over-stating the case for emphasis. One of the reasons in Smith's explanation, then, is strictly speaking false. Note that he need not have used such a strong proposition. His explanation would have been satisfactory if he had said that the


pizza is almost always great, or that the pizza has been great any time he has been at that restaurant in the past.

4. General propositions are generated using the process of inductive generalization, as described in the next section. Once established, a general proposition connecting Fs and Gs can be employed in an argument, along with a particular instance of the first type of thing mentioned (F) to justify the belief that the second (G) is also present. Thus, this chapter also discusses how generalizations can be employed in arguments. (Chapter 7 then goes on to describe the kind of generalizations that are used as warrants in explanations.)

6.2 Inductive Generalization (IG)

1. Consider the following scenario:

Jack shakes a large opaque basket filled with 4,000 black and red cubes, reaches in without looking, and grabs 500. He counts the reds, sees that he has 450, and then on this basis infers that roughly 90% of the cubes in the basket are red.

Jack's inference is an instance of inductive generalization (IG) (or sometimes simply induction), and in standard form it looks like this:

(1) Cube1 . . . Cube500 are all cubes in the basket.
(2) 90% of the 500 cubes examined are red.
J ---------------------------------------------------------
(3) Roughly 90% of the 4,000 cubes in the basket are red.

(Important Note: In coming up with this analysis of the argument, the propositions in the passage were not numbered in the manner used in chapter 2. Rather, the relevant information from the passage was extracted. Which information is important is about to be explained.)

This argument concerns a sample of cubes (500 of them) and a population (of 4,000). The population is all the cubes in the basket, and the sample is the cubes Jack looked at. Each of the two premises summarizes 500 pieces of information (namely that each of the 500 cubes was in the basket and the color of each) and you could write 500 premises in place of or in support of each premise. (In place of premise (1) you could write "Cube 1 is a cube in the basket. Cube 2 is a cube in the basket. . . .". In place of premise (2) you could write "Cube 1 is red. Cube 2 is red. Cube 3 is black. . . .".) But that would be a lot of writing, and so summaries are often used when the sample is large.


From the fact that 90% of the cubes in the sample are red, Jack infers that roughly 90% of the population cubes are red. Thus the inclusion of generalization in the name "inductive generalization". The conclusion moves beyond the specific cubes which were examined to speak about the cubes in the basket generally. Arguments which move from instances of co-appearing features to a general statement are inductions. Here is the general form of IG: (Why the conclusion is numbered (5) will be explained shortly.)

(1) In case1 . . . casen, F is present.
(2) In % of case1 . . . casen, G is also present.
J ---------------------------------------------------------
(5) In roughly % of cases of F, G is also present.

"F" and "G" stand for two types of thing. The sample is numbered from 1 to n. The percentage-sign (%) stands for a proportion, either a fraction or a percentage or a quantifying word or phrase such as "All", "Most", "A majority of", "Some", and so on. The word "roughly" (or some equivalent word) appears in the conclusion because it is improbable that the percentage of Gs in the population is exactly the same as the percentage of Gs in the sample.

IG can be used whether F and G are both present, or both absent ("not instantiated") (e.g. when water is absent, life is impossible), or F is absent and G present (e.g. Clark Kent is absent when Superman is present) or F is present and G absent (e.g. a vaccine prevents an illness). In each case, the form of the argument would be slightly different. In the absence-absence case, for example, the general argument would look like this:

(1) In case1 . . . casen, F is absent.
(2) In % of case1 . . . casen, G is also absent.
J ---------------------------------------------------------------
(5) In roughly % of cases of not F, G is also absent.

And similarly for the other possible combinations of presence and absence.

2. One common way of understanding "roughly" (in the conclusion) is as the margin of error. A typical value for the margin of error is ±3, which is added to the percentage that the survey of the sample discovered. Thus, for example, if a survey finds that 55% of a sample of American (U.S.) adults approve of the President's performance, the conclusion states that 55% ± 3% of the population of American adults approves.


The following table shows how the margin of error decreases as the sample size increases (assuming that the sample is chosen randomly and the population is large). To achieve a margin of error of ±3, you need a sample of just over 1000 (1067). It is worth memorizing a few of these entries so that you can quickly judge claims in newspapers and magazines. In particular, with a sample of 100, the margin of error is (roughly) ±10, at 400 it is ±5, at 1000 it is ±3 (and at 2400 it is ±2). These are in bold.

Sample Size    Margin of Error (@ 95% Confidence Level, with 50%-50% split)
50             13.86
100            9.8
200            6.93
300            5.66
400            4.9
500            4.38
600            4
700            3.7
800            3.46
900            3.27
1000           3.1
1500           2.53
2000           2.19
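(The behavior the table describes can also be checked by simulation. The following Python sketch, not from the text, re-uses Jack's basket: a population of 4,000 cubes, 90% of them red, sampled 500 at a time; the seed and the number of repetitions are arbitrary choices.)

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
population = ["red"] * 3600 + ["black"] * 400  # 4,000 cubes, 90% red

percentages = []
for _ in range(200):
    sample = random.sample(population, 500)  # sampling without replacement
    percentages.append(100 * sample.count("red") / len(sample))

mean = sum(percentages) / len(percentages)
print(round(mean, 1), round(min(percentages), 1), round(max(percentages), 1))
```

Every one of the 200 sample percentages lands within a few points of the true 90%, just as the margin-of-error figures predict.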

(Note that the figures in the table above are at the 95% confidence level, as is traditional for statistical arguments. In chapter 5.3 we noted that arguments might in practice be considered cogent at confidence levels less than 95%. Further, in the above table, "50%-50% split" means that the percentage of the sample having the property in question (e.g. "will vote for the Democratic candidate") is 50%. As the value moves to either extreme (e.g. 10%, 90%), the chance of error (that the sample's response will fail to reflect the true percentage, within the margin of error) diminishes, and a smaller sample can be used.)

3. Instances of IG's basic argument form (premises (1) and (2) and the conclusion (5)) are often found in real-world arguments, but are incogent as they stand. We know from experience that generalizations based on samples often go wrong. Our experience has taught us that this type of argument only provides strong support for a conclusion when the sample is large and the sample is unbiased. Imitating chapter 4 (4.6) and 5 (5.5), where we inserted warrants, you should thus insert the warrants which make explicit the qualities that inductive generalizations must have if the inference is to be


cogent, and then ask yourself whether or not these premises are true. Two warrants are required to make the argument cogent. In generic form, these are:

(3) The sample is large enough.*
(4) The sampling method yields an unbiased sample.* (Or: The sampling method yields a sample representative of the population.*)

If a passage gives details of the size of the sample and method used to select it, you can include these details instead of the generic premises above. In the case of Jack's 500 cubes, the third premise would be "The sample (500 cubes) is large enough.". Often, however, we are not given details of the size of the sample. And similarly for the second warrant: the details of how the sample was obtained are not mentioned. In such cases, add the generic premises, so that the argument as it stands is cogent, and then when you evaluate the argument, say that you cannot be sure that the premise(s) is true and declare the argument unsound. Instances of IG are sound only if both additional premises are true, that is, if the sample is known to be large and unbiased.

4. Let us consider each of the two warrants in turn. The first point is simple: size matters. An instance of IG is cogent only if the sample is sufficiently large; if the sample is not large enough then the move from the premise to the conclusion is hasty (from which we get the name hasty generalization). Consider, for illustration, the following scenario:

Jack randomly generates local (seven-digit) phone numbers using a ten-sided die and calls each one until he gets three answers. He asks the person in question what his (i.e., the person's) favorite color is. The first person says "Blue.", so does the second one, and the third says "Green.". On this basis, Jack then infers that roughly two thirds of the people listed in the phone book like blue the best.

As an instance of IG, Jack's reasoning runs as follows:

(1) All 3 people interviewed were people in the phone book.
(2) 2/3 of the people interviewed like blue more than any other color.
(3) The sample (three people) is large enough.*
(4) The sampling method (random generation using a die) yields an unbiased sample.*
J ---------------------------------------------------------------------------------------
(5) Roughly 2/3 of the people in the phone book like blue more than any other color.

Premise (3) is false; the sample is small, compared to the population. That two-thirds of the sampled people like blue more than any other color does not make it highly likely


that roughly two-thirds of all the people listed in the local calling area like blue more than any other color. If only one of the three people Jack called had not preferred blue, his result would have been very different: 1/3, as opposed to 2/3. Since a change in the answers of a small number of people in the sample can change the result by a large amount, he cannot be confident in the argument.

The following table gives a crude estimation of the sample size needed for each size of population:

Population Size    (Random) Sample Size (For Margin of Error of ±3)    % of Population
50                 48                                                  96
100                92                                                  92
200                169                                                 84.5
300                234                                                 78
400                291                                                 72.75
500                341                                                 68.2
600                384                                                 64
700                423                                                 60.43
800                457                                                 57.1
900                488                                                 54.22
1000               516                                                 51.6
1500               624                                                 41.6
2000               696                                                 34.8
5000               880                                                 17.6
10,000             964                                                 9.64
50,000             1045                                                2.09
100,000            1056                                                1.056
250,000            1063                                                .43
500,000            1065                                                .213
1,000,000          1066                                                .1066
10,000,000         1067                                                .01067
100,000,000        1067                                                .001067
Higher Values      1067

Notice that, for small populations, a great percentage of the population must be (randomly) sampled in order to be 95% confident that the true rate is within 3 percentage points of the response rate. For example, suppose you want to know what percentage of the 50 people in a room will vote for the Social Democratic candidate. In order to be 95% confident that the percentage you get is within 3 points either direction of the actual value, you would need to sample 48 of the 50 people. On the other hand, you can achieve this same level of accuracy for the entire population of adults in the U.S. (about 230 million people, according to 2008 figures) by sampling only 1067 people.
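(A sketch, not from the text: the table's entries follow from the standard finite population correction, n = n₀ / (1 + (n₀ − 1)/N), where n₀ = 1067 is the sample needed for a very large population and N is the actual population size. The function name is ours, and values may differ from the table by one due to rounding.)

```python
def required_sample(population_size, n0=1067):
    """Random-sample size needed for a margin of error of about +/-3 points
    at the 95% confidence level, shrunk by the finite population correction.
    n0 is the sample size needed for an effectively infinite population."""
    return round(n0 / (1 + (n0 - 1) / population_size))

for n in [50, 100, 1000, 230_000_000]:
    print(n, required_sample(n))
```

For a room of 50 people this yields 48, while for 230 million U.S. adults it yields the same 1067 as for any other very large population.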

(For a full treatment of margin of error and sample sizes, instruction in statistics is required. See the Wikipedia page on statistics for freely available resources.)

5. Size, however, is not enough by itself. Some samples are inadequate even though they are large. Here is an example:

Over a span of ten years or so, Jack visits thousands of different track runners in lots of different parts of North America, and times each such person in the mile. As it turns out, 65% of them run it in less than five minutes. He then notes that they're people living in North America and, thus, that together they're a sample of the population of people living in North America. On this basis, he infers that roughly 65% of all people living in North America run the mile in less than five minutes.

Even though Jack's sample is very large, it is certainly not representative. It is blatantly biased: being a couch potato, for example, is relevant to running the mile in less than five minutes, but being a couch potato is not represented in the sample (i.e., there are no couch potatoes in the sample); the same goes for being a senior citizen. But Jack's sampling method is to talk to people who run track. Hence, that 65% of the sampled people living in North America run the mile in less than five minutes in no way gives us good reason for thinking that roughly 65% of all people living in North America run the mile in less than five minutes. The lesson, then, is that the sample must be unbiased or representative. A sample is unbiased or representative when, for every property relevant to G, the percentage of things in the sample having the property is roughly equal to the percentage of things in the population having the property.

6. There are numerous kinds of sampling methods. They do not fare equally well with respect to representativeness. One kind is called "self-selecting sampling" and another kind is called "random sampling".
Here is an example involving self-selecting sampling: The local television station is running a special on the creationism-evolutionism controversy, with the question at issue being whether creationism should be taught alongside evolutionism in public schools. To get a sample of viewers, the newsperson asks, "Should creationism be taught alongside evolutionism in public schools? Call 123-4567.". The sampler (in this case the newsperson) makes a request to the members of the population (in this case the viewers) for a sample, and for each such member, the member selects, or does not select, him- or herself for the sample.

In contrast, in random sampling the sampler determines which members of the population are in the sample. The sampling is random in that the sampler determines this randomly (e.g., by picking names from a hat), so that every member of the population has an equal chance of being in the sample.

In terms of bias, there is a potential problem with self-selecting sampling. Suppose the persons on one side of an issue are more passionate about it than the persons on the other side. Given this, it is highly likely that the first group would be over-represented in a self-selecting sample and, thus, that the sample would be biased. After all, the more passionate someone is about an issue, the more likely she is to sacrifice the time needed to select herself for the sample. Suppose, for example, that in the creationism-evolutionism scenario above, 73% of the callers say that creationism should be added to the curriculum. This would in no way give us good reason for thinking that roughly 73% of all viewers think that creationism should be added to the curriculum, for there would be good reason for thinking that the sample is biased: the viewers in support of adding creationism are upset about the status quo (in contrast to the ones not in support), and so are likely to be significantly over-represented in the sample. In contrast to self-selecting sampling, random sampling precludes bias due to differences in passion: the sampler selects the sample randomly, and thus entirely independently of such differences. (Random sampling is employed in experimental studies, which will be discussed in chapter 8.4.)

7. The fallacy of misleading vividness occurs when a striking case (or cases) is given greater weight than it ought to be given. It typically, though not always, involves an extremely small sample size.
Consider the following scenario from Nisbett, Borgida, Crandall, and Reed: Let us suppose that you wish to buy a new car and have decided that on grounds of economy and longevity you want to purchase one of those solid, stalwart, middle class Swedish cars, either a Volvo or a Saab. As a prudent and sensible buyer, you go to Consumer Reports, which informs you that the consensus of their experts is that the Volvo is mechanically superior, and the consensus of the readership is that the Volvo has the better repair record. Armed with this information, you decide to go and strike a bargain with the Volvo dealer before the week is out. In the interim, however, you go to a cocktail party where you announce this intention to an acquaintance. He reacts with disbelief and alarm: "A Volvo! You've got to be kidding. My brother-in-law had a Volvo. First, that
fancy fuel injection computer thing went out. 250 bucks. Next he started having trouble with the rear end. Had to replace it. Then the transmission and the clutch. Finally sold it in three years for junk." ('Popular Induction' in Kahneman, Slovic & Tversky (1982).)

What should you do? To buy a Saab instead of a Volvo in light of this new information would be to commit the fallacy of misleading vividness. Nisbett, Borgida, Crandall, and Reed explain: The logical status of this information is that the N of several hundred Volvo-owning Consumer Reports readers has been increased by one, and the mean frequency of repair record shifted up by an iota on three or four dimensions.

The statistics with the new case are almost identical to the statistics without it, and so, given that prior to getting the new case you had good reason for preferring a Volvo over a Saab, after getting the new case you still have good reason for preferring a Volvo over a Saab. To let the new case dissuade you from buying a Volvo would be to let minor but vivid anecdotal evidence outweigh major but boring statistical evidence. Hence the name "fallacy of misleading vividness".

Borgida and Nisbett put this thought experiment to the test, testing the relative effectiveness of, on one hand, unexciting but comprehensive statistical summaries and, on the other hand, non-comprehensive but vivid testimonials. The subjects in the study, students at the University of Michigan, were split into three groups. The subjects in the first group, Group 1, read a statistical summary of all the 5-point course evaluations from the previous quarter (where the five points are (5) excellent, (4) very good, (3) good, (2) fair, and (1) poor).
For example:

Course 1: mean = 4.3 (26 evaluations)
Course 2: mean = 3.9 (73 evaluations)
Course 3: mean = 4.8 (48 evaluations)
Course 4: mean = 4.5 (125 evaluations)

The subjects in the second group, Group 2, heard for each course between one and four student testimonials, which included 5-point course evaluations, and which on average were equal to the ratings the people in Group 1 looked at. So, in terms of the example above, the one to four testimonials on Course 1 averaged to 4.3, the one to four testimonials on Course 2 averaged to 3.9, and so on. Borgida and Nisbett give the following example: While there's a lot of material to cover, it's all very clearly laid out for you. You know where you are at all times, which is very helpful in trying to get through the
course. It's a very wide and important field of psychology to become introduced to. But the reason I rated it very good instead of excellent is that the material isn't particularly thought-provoking. (Borgida & Nisbett (1977))

The subjects in the third group, Group 3, neither read nor heard any course evaluations. The subjects in all three groups were then given a list of classes and asked to mark the classes they would likely take in the future. Given that the subjects in Group 1 had the information from all of the evaluations, and given that the subjects in Group 2 had the information from only a few of the evaluations, the subjects in Group 1 should have followed the recommendations more so than did the subjects in Group 2. But according to Borgida and Nisbett, things were just the opposite: It may be seen that the face-to-face method had a much larger impact on course choice. Subjects in that group were much more inclined to take recommended courses and much less inclined to take nonrecommended or unrecommended courses than control subjects. In contrast, the base-rate method affected only the taking of unmentioned courses. The non-comprehensive but vivid information (i.e., the testimonials), then, had a much bigger impact than did the pallid but comprehensive information (i.e., the statistical summaries).

8. The need for a representative sample comes from the fact that you rarely make inductions without any other knowledge of the items involved. On the contrary, almost all inductions (and almost all reasoning, generally) take place against a vast array of background knowledge, and some of this knowledge is pertinent to the reasoning under consideration. Induction simply notes the frequency with which two items appear together, but you often know something more explicit about the relation between the two, or about the relation between other factors and the two you are interested in.
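Returning to the Volvo passage above: the claim that one new case shifts the mean "by an iota" is easy to verify arithmetically. The figures below are hypothetical stand-ins (the text reports no actual numbers), but any figures of this shape make the point:

```python
# Hypothetical repair statistics for several hundred Volvo-owning
# Consumer Reports readers (the text gives no actual figures).
n = 400               # readers surveyed before the cocktail party
mean_repairs = 1.2    # assumed mean repair incidents per year

# The brother-in-law's disastrous Volvo arrives as one more case.
bad_case = 6.0

new_mean = (mean_repairs * n + bad_case) / (n + 1)
print(round(mean_repairs, 3), "->", round(new_mean, 3))  # 1.2 -> 1.212
```

One vivid case moves the statistic by about a hundredth of a repair per year; the statistical grounds for preferring the Volvo are essentially unchanged.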
In the case of the correlation between North American adults and being able to run a mile in under five minutes, your knowledge of the relationship between, for example, being a couch potato and the ability to run a five-minute mile allows you to say that Jack's sample is not representative. The requirement that you use any available knowledge is sometimes called the total evidence rule, since it demands that you consider all that you know about the subject that bears on the items involved in the argument. In the case of IG (and IP, below), the total evidence rule requires that you think of any type of thing which you
know to be related to the target (in the conclusion) and ensure that these types of things are represented in the sample in the same proportion as they appear in the population. You will see the total evidence rule again when we discuss the other forms of argument in this chapter (instantiation syllogism, induction to a particular). To fail to abide by the total evidence requirement is to commit the fallacy of suppressed evidence.

But what if (in the scenario about North Americans and being able to run a mile in five minutes) you know nothing about running, track runners, or five-minute miles? A general question raised here concerns how much work you need to do, besides looking at the form of the argument, to assure yourself that you can trust the argument. The answer, in general, is that the amount of work you should do depends on how important it is that the conclusion is true. If the stakes are high (e.g. someone's life is at stake) then you will only think the argument is cogent and sound when you have conducted exhaustive research into the matter. If the conclusion is less important, you might be inclined to adopt it on weaker evidence, even though you are more likely to be committing yourself to a false belief. This issue applies to all of your reasoning and all of the additional premises that will be presented for the various argument forms. In what follows, a smaller set of premises and the conclusion will be presented first, followed by the additional warrants which are required in order for the argument to be cogent, along with a description of what to look out for when evaluating the truth of the premises. As you read the descriptions of each basic argument form, the additional premises required for cogency, and the worries about the truth of the premises, think about the level of confidence being demanded.

6.3 Instantiation Syllogism (IS)

1.
Once IG has been used to justify a general proposition linking Fs with Gs, that proposition can be used or inserted as a warrant supporting conclusions about entities which you know to be members of the first class. As discussed in chapter 5, the percentage must be high if the argument is to be cogent. Consider the following arguments:

(1) Arjen is Dutch.
(2) Roughly 66% of Dutch people can speak English.
J ----------------------------------------------------------------
(3) Arjen can speak English.
and

(1) The cube Jack is drawing now is a cube in the basket.
(2) Roughly 90% of the cubes in the basket are red.
J ----------------------------------------------------------------------
(3) The cube Jack is drawing is red.

and

(1) A raven on Henry's roof can be heard.
(2) Roughly 100% of ravens are black.
J --------------------------------------------------
(3) The raven on Henry's roof is black.

While the second premise of the first argument is true, its percentage is not high enough to make the argument cogent. The quantity in the second premise of the third argument is universal; that of the second argument is not universal, though it is high. The second and third arguments are cogent, with the third being the strongest of the three. In general terms, instantiation syllogism (IS) looks thus: (Why the conclusion is numbered (4) will be explained shortly.)

(1) In case1, F is present.
(2) In roughly % of cases which are instances of F, G is also present.
J ------------------------------------------------------------------------------------
(4) In case1, G is present.

As before, "F" and "G" stand for two types of thing. The percentage-sign (%) in the premises stands for a proportion, either a fraction or a percentage, or for a quantifying word or phrase, such as "All", "Most", "A majority of", "Some", and so on. In real-world passages, the word "roughly" (or some equivalent) might not appear in the second premise. If the premise is a product of IG, "roughly" will (or at least, should) be included, but people often over-state their propositions by leaving it out. It is also possible that the proposition has not been generated by induction. If there are only 500 cubes and you have looked at them all, you can say with complete confidence that 100% are red.

2. As we noted with IG, the proposition expressing a generalization might express a pattern between presence-presence, presence-absence, absence-presence, or absence-absence.
In the case of presence-absence (that is, when % is 0% or some low percentage, as in "5% of Ghanaians play basketball."), the conclusion will assert an absence, as follows:
(1) In case1, F is present.
(2) In roughly % of cases which are instances of F, G is also present.
J ------------------------------------------------------------------------------------
(4) In case1, G is not present.

Here is an example of absence-absence. Note that in this case both (2) and (3) express an absence (in this case, not passing the course):

(1) Smith has not passed the final exam.
(2) 90% of students not passing the final exam do not pass the course.
J ---------------------------------------------------------------------------------------
(3) Smith has not passed the course.

4. An instantiation syllogism is cogent only if the percentage in the generalization is universal or near-universal (close enough to 100 or to 0). There is no definite proportion at which instances of IS cease to be cogent and become incogent. You must use your judgment whatever the proportion, just as you must do when words ("Many", "Most", "Some", etc.) are used. This is because (as has been mentioned in 5.3.2) the importance of the context can raise or lower the percentage level required. A 90% chance of rain might be enough to call off a family picnic, but a 90% chance of surviving an otherwise lethal operation might be thought to be not high enough.

5. Since few generalizations are universal, it is possible that there is another argument which contradicts, and is stronger than, the original argument. Consider:

(1) Tiger Woods is a U.S. tax-payer.
(2) 90% of U.S. tax-payers will net less than $100,000 next year.
J ------------------------------------------------------------------------------
(3) Tiger Woods will net less than $100,000 next year.

The percentage is quite high and by itself might give you confidence in the conclusion. However, Woods is not just any old U.S.
tax-payer with respect to netting less than $100,000 next year: he is a world-class golfer, and a generalization about the wealth of world-class golfers could be used to make a stronger argument about his wealth, namely that it is greater than $100,000. In order to be cogent, then, instances of IS require the following warrant:

(3) Case1 is believed to be a typical instance of F with respect to G.* (I.e. there is no feature of case1 that would strongly suggest that it is not a typical F with respect to G.*)
Or, in other words, you must abide by the total evidence rule (which we saw above in the discussion of the additional premises needed for cogency of IG), as it applies to instantiation syllogisms. In the Tiger Woods argument, although you might not know what his net income is, there are other things you might know about him which suggest that he is not a typical American with respect to income. (Again the question arises which we first brought up with respect to sample size in IG: How much work are you expected to do, short of finding out Woods' actual income, in making sure that there are no other features of Woods which would suggest an income in the top 10% of earners?)

6.4 Induction To A Particular (IP)

1. Sometimes an arguer will draw on experience of one type of thing frequently appearing with another in order to draw a conclusion not about the population as a whole, as in IG, but about a particular additional member of the class. Consider the following argument: We asked 10 students at State whether or not they binge drink (drink more than 4 units of alcohol in 2 hours). 9 said that they were binge drinkers. So Smith, who's a student at State, is probably a binge drinker.

This argument mentions a sample (10 students) and reports on what proportion of them are binge drinkers: 90%. So far, the argument looks like IG. But note that the conclusion is not about the population of State students. Rather, it is about a particular student, Smith. This argument is an instance of induction to a particular (IP) (also known as simple induction). The general form for IP is as follows: (Why the conclusion is numbered (7) will be explained shortly.)

(1) In case1 ... casen, F is present.
(2) In % of case1 ... casen, G is present.
(3) In casen+1, F is present.
J ---------------------------------------------
(7) In casen+1, G is present.
As in IG and IS, the percentage-sign (%) stands for a proportion, either a fraction or a percentage, or a quantifying word or phrase, such as "All", "Most", "A majority of", "Some", and so on. The argument can be, at best, cogent, even if % is "All" or 100%. This is because premises (1) and (2) concern only a sample of Fs, not every F. The sample size
can be as small as one other member of the class of Fs (besides casen+1) but typically there will be more, since an induction based on only one additional entity is almost certainly weak (unless the population of Fs is completely uniform). As with IG, the information in the first premise might be presented in detail. For example, instead of saying simply that 9 of 10 students are binge drinkers, you might be given information about each of the students individually. Each of the 10 will be described as a State student and nine will additionally be described as binge drinkers.

2. For cogency of an instance of IP, three additional warrants must be true:

(4) The sample is large enough.*
(5) The sampling method yields an unbiased/representative sample.*
(6) Casen+1 is believed to be a typical instance of F with respect to G.*

These are the same warrants as for IG ((4) and (5)) and IS ((6)).

3. IP and IG are similar in that they involve premises which give information about observed items (e.g. "This raven is black."), but they differ in their conclusions: the conclusion of IP is about a particular entity, while the conclusion of IG is a statement about the class in general (that all ravens are black, or that 90% of ravens are black). Notice that the argument does not involve a statement such as "In % of cases of F, G is also present.". This is because IP does not (it is alleged) involve universalization. This is a suspicious claim, since the co-appearance of F and G in the first n cases can only be applied to the n+1th case if casen+1 is perceived as being of the type F. IP might thus be thought of as 'IG followed immediately by IS'. The first two premises concern a sample, as do the first two premises of IG, while the third premise and the conclusion concern a particular, as in IS. Consider the following example: 9 of 10 State students interviewed admitted to binge drinking. So, Jack, who is also a State student, is probably a binge drinker.
And compare it with the following: 9 of 10 State students interviewed admitted to binge drinking. So, 90% of all State students binge drink. So, Jack, who is also a State student, is probably a binge drinker.

This (second) version can be analyzed as follows:

(1) 10 students are students at State.
(2) 9/10 of the sample of State students are binge drinkers.
J -------------------------------------------------------------------------
(3) Roughly 9/10 of the population of State students are binge drinkers.
(3) Roughly 9/10 of the population of State students are binge drinkers.
(4) Jack is a student at State.
J ------------------------------------------------------------------------------------------
(5) Jack is a binge drinker.

The move from (1) and (2) to (3) is an instance of IG; the move from (3) and (4) to (5) is an instance of IS. Both versions of the passage have the same first premise ("9 of 10 State students interviewed admitted to binge drinking.") and they end with the same (ultimate) conclusion, that Jack is a binge drinker. However, IG+IS explicitly draws an interim conclusion about the broader population (of State students).
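The IG-then-IS chain just analyzed can be sketched as two small functions. The function names and the 0.8 cutoff are my own illustrative choices, not the text's; the cutoff is a crude stand-in for the contextual judgment that a percentage is near-universal enough:

```python
def inductive_generalization(g_count, sample_size):
    """IG: from 'g_count of sample_size sampled Fs are G' to
    'roughly this proportion of all Fs are G'.  Assumes the
    sample-size and unbiasedness warrants hold."""
    return g_count / sample_size

def instantiation_syllogism(proportion, threshold=0.8):
    """IS: from 'roughly this proportion of Fs are G' plus 'this case
    is a typical F' to 'this case is (probably) G'.  Cogent only when
    the proportion is near-universal; 0.8 is an arbitrary stand-in."""
    return proportion >= threshold

# 9 of 10 sampled State students binge drink ...
p = inductive_generalization(9, 10)
print(p)  # 0.9

# ... so Jack, taken to be a typical State student, is probably
# a binge drinker.
print(instantiation_syllogism(p))  # True
```

Note that the 66%-of-Dutch-speakers argument from 6.3 would fail the same cutoff, matching the text's verdict that it is not cogent.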

6.5 A Summary Of Argument Forms: IG, IS, IP

"F" etc. are types of thing; "%" is a proportion.

Inductive Generalization (IG)
(1) In case1 ... casen, F is present.
(2) In % of case1 ... casen, G is also present.
(3) The sample is large enough.*
(4) The sampling method yields an unbiased sample.*
J -----------------------------------------------------------------
(5) In roughly % of cases of F, G is also present.

Instantiation Syllogism (IS)
(1) In case1, F is present.
(2) In roughly % of cases which are instances of F, G is also present.
(3) Case1 is believed to be a typical instance of F with respect to G.*
J -------------------------------------------------------------------------------------
(4) In case1, G is present.

Induction to a Particular (IP)
(1) In case1 ... casen, F is present.
(2) In % of case1 ... casen, G is present.
(3) In casen+1, F is present.
(4) The sample is large enough.*
(5) The sampling method yields an unbiased/representative sample.*
(6) Casen+1 is believed to be a typical instance of F with respect to G.*
J --------------------------------------------------------------------------------------
(7) In casen+1, G is present.
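The starred "large enough" warrant in these forms can be given rough numerical content. A standard result from elementary statistics (not from the text) is that the 95% margin of error for a sample proportion p based on an unbiased sample of size n is about 1.96 times the square root of p(1 - p)/n:

```python
import math

def margin_of_error_95(p, n):
    """Rough 95% margin of error for a sample proportion, using the
    normal approximation (crude for small n, but fine as a sketch)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# 9 binge drinkers out of 10 students: +/- about 19 percentage
# points, so "roughly 90%" could easily be off by a wide margin.
print(round(100 * margin_of_error_95(0.9, 10), 1))    # 18.6

# The same 90% from a sample of 1,000 students: +/- about 2 points.
print(round(100 * margin_of_error_95(0.9, 1000), 1))  # 1.9
```

The ten-student samples in the examples above, in other words, license only very loose generalizations.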

Chapter 7 Evaluating Explanations

7.1 Truth Of The Reason(s)

1. When explaining, a speaker will offer a reason or reasons as an explanation of how or why some phenomenon or state of affairs came, or will come, or comes in general, into being. In an explanation the reasons are called the explainers and the target proposition is called the explainee. The first criterion for a good explanation is that all of the reasons must be true. Consider the following explanation: Jack is in a bad mood because he didn't get much sleep last night. If, however, Jack has told you that he had a good night's sleep and you do not see him exhibiting any signs of tiredness such as yawning or resting his head, you will reject the explanation because the reason offered is inconsistent with, and made unlikely by, other data. This example contains only one explainer ("Jack didn't get much sleep last night"). Speakers might be more explicit in describing how the specific explainer relates to the explainee and include a warrant (or warrants) that connects the given explainer(s) to the explainee. If a warrant is included it must also be true. If none is included, one should be added, as an aid to thinking about the reasoning.

7.2 Correlation

1. As with the evaluation of the reasoning in arguments, evaluation of the reasoning in an explanation means asking whether or not there is a strong connection between the explainer(s) and the explainee. With respect to an explanation, we evaluate the explanatory connection between the explainers and the explainee (rather than, as with arguments, the justificatory connection). The connection is expressed in the warrant(s). Many speakers will provide only a specific explainer. For example, Gill might explain Jack's bad mood by saying "Jack did not get much sleep last night.". This explanation can be made explicit by adding a general proposition which asserts that lack of sleep is a factor in people's moods. Similarly, if Gill sleeps in and explains her behavior by saying "It's Saturday."
a proposition can be inserted making explicit how the fact that today is Saturday explains
her sleeping late, perhaps by adding two propositions: that she does not generally work on Saturdays, and that when she does not work, she likes to sleep in. In artificial situations such as the tax code and Sudoku games, the warrant can be stipulated by convention. (For example, Jack might explain why he took the deduction by pointing to the rule in the tax code.) But the majority of explanations concern items in the natural world, and their warrants must be discovered from experience. What makes a warrant explanatory, rather than justificatory?

2. One kind of proposition that cannot be used as an explainer for how or why some state of affairs comes to be is a proposition about a source, such as another person or an expert or popular or traditional belief. (These types of proposition were discussed in 4.4.) For example, the fact that the weather forecaster says it will be sunny this afternoon might give one grounds to believe that it will be sunny, but how or why it will be sunny remains unexplained. Rather, one would talk about atmospheric conditions. Here is another example, in standard form:

(1) The textbook by Jones says that water is H2O.
(2) Jones is an expert in chemistry.
(3) That water is H2O is uncontroversial amongst experts in chemistry.
J ----------------------------------------------------------------------------------------
(4) Water is H2O.

These premises do a fine job of justifying the conclusion. However, if someone were to offer these reasons as an explanation, it would be unsatisfactory:

(1) The textbook by Jones says that water is H2O.
(2) Jones is an expert in chemistry.
(3) That water is H2O is uncontroversial amongst experts in chemistry.
E ---------------------------------------------------------------------------------------
(4) Water is H2O.

How or why water is H2O is not explained by the appearance in Jones's textbook of a statement that it is.
(Reading such a statement might explain why someone believes that water is H2O, but neither the statement nor anyone reading the statement explains why or how water is H2O.) An argument's purpose is to get us to accept some proposition as true, while an explanation's purpose is to provide an understanding of how or why the content of the proposition is the case. In short, the reasons for how or why some thing is are also reasons for believing that it is, but not always vice versa. (Some people believe in magic; that is, they believe that saying or thinking some thing can make it so. For example, "I failed to get through all of the traffic lights because
I jinxed it when I got through the first four." or "The universe exists because a god willed it into existence.". For such people, the distinction between justification and explanation might in large part vanish.)

3. Another kind of connecting proposition that cannot on its own be used in an explanation is the kind of generalization that is generated by IG. In chapter 6 we laid out the structure of IG, including the propositions about the size and selection of the sample, so that we could evaluate arguments which conclude that a certain percentage of one type of thing (F) is also another type of thing (G). We were particularly interested in universal or near-universal relations, since these can be used to make arguments about particular instances (using IS). But such propositions, even when correctly justified (by IG), do not by themselves make good explanations. For example, imagine that roughly 100% of professional football players (F) have eyebrows (G). Being a pro football player is related to having eyebrows, and can be used to justify a belief that a certain player has eyebrows (using IS), but it does not explain having eyebrows.

4. But what, then, must speakers (attempt to) do when they put forward an explanation? Being a pro football player does not explain having eyebrows because roughly 100% of non-football players have eyebrows too. The percentage is similar whether a person is a football player or not; being a football player does not make it any more, or any less, likely that the person has eyebrows. For an explanation to be good, it must involve (whether implicitly or explicitly) a proposition expressing a correlation between F (the explainer) and G (the explainee), and not just a relation. F is correlated with G if the percentage of cases of F in which G is also present is significantly different (statistically speaking) from the percentage when F is absent.
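This definition can be sketched in code. The helper below treats two proportions as "significantly different" when their rough 95% ranges (proportion plus or minus margin of error) do not overlap; this overlap check is my simplification, standing in for a proper statistical significance test, and the sample figures are hypothetical:

```python
import math

def moe95(p, n):
    # Rough 95% margin of error for a sample proportion.
    return 1.96 * math.sqrt(p * (1 - p) / n)

def correlated(p_g_given_f, n_f, p_g_given_not_f, n_not_f):
    """F and G count as correlated when the two ranges
    (proportion +/- margin of error) do not overlap."""
    f_lo = p_g_given_f - moe95(p_g_given_f, n_f)
    f_hi = p_g_given_f + moe95(p_g_given_f, n_f)
    nf_lo = p_g_given_not_f - moe95(p_g_given_not_f, n_not_f)
    nf_hi = p_g_given_not_f + moe95(p_g_given_not_f, n_not_f)
    return f_hi < nf_lo or nf_hi < f_lo

# Eyebrows: ~99% of 200 sampled football players and ~99% of 200
# sampled non-players have eyebrows -- no correlation.
print(correlated(0.99, 200, 0.99, 200))  # False

# Bad moods (hypothetical figures): 70% of 100 sleep-deprived people
# vs. 20% of 100 well-rested people are in a bad mood -- correlated.
print(correlated(0.70, 100, 0.20, 100))  # True
```

The second call reflects the lack-of-sleep explanation of Jack's bad mood from 7.1: the explainer earns its keep only because the two percentages differ.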
A correlation, therefore, involves a double use of IG: the first time to establish the percentage of the population of Fs that are Gs, the second time to determine the rate of non-Fs that are Gs. If the rates are different (statistically speaking), F and G are correlated. Each of these must be based on a large and unbiased sample. As in chapter 6, we have to make the appropriate changes to the understanding of correlation when dealing with a presence-absence, absence-presence or absence-absence pattern. In the case of presence-absence, for example, if x% of Fs are not Gs, F and G will be correlated when, in addition, a lower percentage of not-Fs (that is, cases where F
is not instantiated) are not Gs. And so on for the other combinations of presence and absence.

5. Consider the relationship between being a dog and loving hot dogs. Is being a dog (F) correlated with loving hot dogs (G)? First we ask whether being a dog is related to loving hot dogs. Based on the sample of dogs we have seen eat hot dogs, the percentage of dogs who love hot dogs is roughly 100%. Now we think about non-dogs who love hot dogs. Notice a problem: the class of non-dogs is infinitely large; it includes everything that is not a dog. Finding a large, unbiased sample will be difficult. As a solution, we can consider a sample of things that are similar or analogous to dogs, and determine whether each loves or does not love hot dogs. For starters, loving hot dogs requires the capacity to eat hot dogs, so we are probably more interested in comparing dogs with things that eat, and so we can present hot dogs to various things that eat and see if they love hot dogs. We might further limit the group of non-dogs to some larger class of interest, such as 'animals', and then ask, "Do animals that are not dogs love hot dogs?". The percentage of non-dogs who love hot dogs is, let's say, 20% ± 10. We might visually represent the correlation at the heart of this explanation in a correlation diagram, as follows:

          0% |-------------Love Hot Dogs-------------| 100%
Dogs                                         |---±---|
Non-Dogs     |---±---|

In this diagram, the plus-minus sign (±) is placed at the center of the range, while the vertical lines mark the extremes of the range, in accordance with the margin of error. It can easily be seen that the percentages are significantly different. Being a dog is therefore correlated with loving hot dogs and makes a good explanation. To take a ridiculous explanation as another example, consider someone who says: Jack is in a bad mood because he drank a lot [half a gallon] of water at lunch. This fails as a good explanation because there is no correlation between drinking water and being in a bad mood. Some people do drink large amounts of water and are later in a bad mood, but this is coincidence. We know it is coincidence because the
percentage of people who do not drink water at lunch but are later in a bad mood is the same.

             0% |---------------Bad Mood---------------| 100%
Drink Water     |---±---|
Non-Water       |---±---|

6. In summary, an argument establishing a correlation, which we will call "Correlation" and abbreviate with "Corr.", requires the following premises:

(1) In % of cases where F is present, G is also present.
(2) In % of cases in which F is absent, G is also present.
(3) The two percentages are significantly different.
--------------------------------------------------------------------
(4) F and G are correlated.

Evaluating the truth of (1) and (2) means checking the size and selection of the sample each is based upon, as in chapter 6. Checking (3) depends on the meaning of the word "significant", which we will talk about shortly. Here is an example involving a near-universal absence-absence relation:

Jack: Why didn't you go camping with Henry?
Gill: Because I couldn't get time off work.

Are 'not going camping' and 'not being able to get time off work' correlated? Not getting time off is a universal or near-universal barrier to going camping. That is, the percentage of people who do not have time off work and are not going camping is about 100%. Not many people who do have time off (that is, who do not not have time off) go camping, but there are some, and the percentage of those not going camping is significantly different. Represented visually:

              0% |----------Not Going Camping----------| 100%
Non-Time-Off                                  |---±---|
Time-Off                                |---±---|

This (camping) example shows that one percentage can be significantly different than another even though numerically the two are quite close. And so we turn now to the issue of the "significant" difference between the percentages. 7.3 Different Strengths Of Correlation 1. Like cogency and incogency (in chapter 5), correlations are of varying strengths (and the explanations based on them are more or less satisfying). We can


sensibly talk, for example, about finding a second correlation and explanation that is better than the pretty strong correlation/explanation we currently accept. And, like cogency, we can never be certain that we have the best correlation/explanation.

2. The strongest explanations involve correlations in which the percentages for the relation of F to G and non-F to G are 100% and 0%, respectively. That is, the presence of an F (at least in our experience so far) always means the presence of a G, and the absence of an F is always accompanied by the absence of a G.

0% |------------------G------------------| 100%
F     |------------------------------------±|
Non-F ±|

Consider the following absence-presence relation between vitamin C and scurvy: the percentage of people who have gone without vitamin C for a few months and have scurvy is 100%. It is also the case that the percentage of people who have recently ingested vitamin C and have scurvy is 0%. The correlation between vitamin C and scurvy, then, is perfect, at least based on our experience up to this point. A perfect correlation means that the explanation is reversible. That is, either F or G can be the subject of the proposition. For example, note that both "100% of people lacking vitamin C have scurvy." and "100% of scurvy sufferers lack vitamin C." are true.

Let's work through another example: Socrates is mortal. Why? The first explanation we might give is "Because he is human.". This is true, and will allow us to predict that he will die, since every human being is mortal. But note that it is not only humans that are mortal. 'Being a living thing' is thus more strongly correlated with mortality. Indeed, "living thing" and "mortal" are reversible. And so a better explanation of "Socrates is mortal." is that he is a living thing.

Perfectly correlated pairs (where the presence of F means G is present, and the absence of F means G is absent) are the most prized explanations, which give us the potential for the most control over our environment. We struggle to find such perfect correlations, and until we do so, we experience some degree of dissatisfaction with our level of understanding. And even when we find them, it is still possible that they will have to be revised later.

3. In the vast majority of cases, however, while we might find one universal or near-universal relation (e.g. the presence of an F tells us that a G is present), the reverse relation (e.g., in the presence-presence case, the relation between the absence of F and

the presence of a G) is not zero. For the purposes of using F as an explanation for G, however, this is acceptable (though not ideal). Indeed, explanations which involve even one universal or near-universal relation between F and G and a significantly different percentage of G when F is absent are quite strong.

4. How close can the percentages be and still count as significantly different? In fact, they can be significantly different even though they are, in numerical terms, very close together. Consider the mosquito-malaria case: the percentage of people who have not been bitten and do not have malaria is about 100%; the percentage of people who have been bitten and do not have malaria is (let's say) about 99.9%. The fact that very many people are bitten by mosquitoes but do not contract malaria makes it extremely difficult to see that not being bitten by a mosquito is correlated with not contracting malaria. On the other hand, if a sample is small, even a very large numerical difference between the percentages might not be significant. The term "significant" is a term from statistics. To understand fully the meaning of the word, and when it can be applied with respect to correlations, instruction in statistics is required.

5. Finally, it should be mentioned that our knowledge of some topics is so poor that we abandon the condition that at least one of the relations must be universal or near-universal, and settle instead for correlation, that is, for a statistically significant difference between rates. A correlation, to repeat, only requires a significantly different percentage. It is quite possible that neither of the two relations is (near-)universal. Above we gave an example involving professional footballers and eyebrows and suggested that while there is a relation between them, there is no correlation. Let's change that example and consider the relationship between professional footballers and expensive motor cars.
The percentage of footballers with expensive cars is (let's say) 90%. What is the percentage of non-footballers with expensive cars? Clearly, the percentage is not zero, since many other people are wealthy enough to buy an expensive car. However, since the vast majority of people cannot afford an expensive car, the percentage is quite low and certainly much lower than 90%. Let's say it is 20%. Thus 'being a professional footballer' and 'owning an expensive motor car' are quite strongly correlated. As another example, consider that only 16% of smokers contract lung cancer, but this is significantly higher than the 1.3% of non-smokers who do. We thus say that

smoking explains (or: is an explanatory factor in) lung cancer. This is not a terribly satisfying explanation, but it is currently the best we have. See 8.3 and 8.4 for another way of describing the differences between what the pairs of percentages tell us about the relationship between two types of thing, and for more on contributory factors as opposed to stronger explanations.

7.4 The Present-Present Fallacy

1. Another way of thinking about how correlation, and not just relation, is needed for an explanation is to talk about the present-present fallacy. To do this, we have to take a step back to the stage just prior to thinking about the percentages of Fs that are Gs and of non-Fs that are Gs. When we collect a sample in order to determine whether or not there is a correlation between F and G, there are four possible combinations of F and G: that both are present; that the first is present and the second is not; that the second is present and the first is not; that neither is present. These can be presented in a table like the one below, where an asterisk (*) means present and a dash (-) means absent.

F | G
* | *
* | -
- | *
- | -

When we are trying to work out what the (present-present) relationship between two types of thing is, if we find only cases of the first and fourth lines (both are present or both are absent), the evidence suggests that the two are correlated. If we find cases of the first, (fourth,) and third lines, but not the second, the evidence suggests that the first is related to the second but not the second to the first. (That is, all/most Fs are Gs, but not all/most Gs are Fs.) If we find cases of the first, (fourth,) and second lines, but not the third, the evidence suggests that the second is related to the first but not the first to the second. (That is, all/most Gs are Fs, but not all/most Fs are Gs.) If we find cases of all types, the evidence suggests that there is no relation, either between F and G, or G and F. (In such situations we often offer randomness as an explanation (for example, why streaks appear in coin-tosses or in scoring success, or the apparent impact of criticism

119

; see Gilovich (1991) chapter 1) even though such "explanations" are largely an admission that we do not have an explanation.)

Think of the "Dogs love hot dogs." example again. Imagine that Henry notices that Jim, who is a dog, loves hot dogs and he wonders if the two are related and correlated. He expands his sample by feeding hot dogs to other dogs. All of these other dogs love hot dogs. In fact, he cannot find a dog who does not love hot dogs. (These are instances of line 1 of the table above.) And there are obviously lots of animals who aren't dogs that don't love hot dogs. (These are instances of line 4 of the table.) There are some non-dogs who like hot dogs: various of Henry's human friends, for example. These non-dog data (instances of line 3 of the table above) show that not all hot-dog lovers are dogs.

2. The point to focus on is that in order to check for a correlation, we need to try to get data for all four possibilities. Here is another example: Jack, suppose, is a bit worried about his dizzy spells, since his friend Gill thinks they might be the result of brain tumors. Jack thus goes to the library, and in a short time comes across an interesting study, the results of which are laid out in a 4-cell table as follows:

                        Dizzy Spells
                    Present | Absent
Brain    Present |    160   |   40
Tumors   Absent  |     40   |   10

160 of the 250 patients have both brain tumors and dizzy spells (upper left cell), 40 have dizzy spells but no brain tumors (lower left), 40 have brain tumors but no dizzy spells (upper right), and 10 have neither brain tumors nor dizzy spells (lower right). What should Jack conclude about whether his dizzy spells are correlated with brain tumors? Most people would say that this information supports the claim that brain tumors are correlated with dizzy spells. Moreover, in support of this assessment most of these people would point to the fact that the present-present box is the biggest: there are lots of cases where people have dizzy spells and brain tumors. To reason in this way, by looking only at the number of present-present cases, or by comparing this number to the others in absolute rather than proportional terms, is incorrect, and commits the present-present fallacy.

120

Look at the information slowly. What percentage of people with dizzy spells have brain tumors? Of the 200 patients with dizzy spells, 160 have tumors: that is, 80% have tumors. Of the 50 without dizzy spells, 40 have tumors: again, 80%. The data, then, in no way support the claim that there is a correlation between dizzy spells and brain tumors. F is (positively) correlated with G only if the presence of F increases the chances of G, so that the probability of G given F is significantly greater than the probability of G when F is not the case.

3. Jan Smedslund, a researcher from Norway, tested some nurses with a very similar scenario (Smedslund (1963)). Smedslund presented the nurses with cards on 100 patients: the first card said whether the first patient has symptom A, and whether he has disease F; the second card said whether the second patient has A, and whether he has F; etc. In all, there were 37 A/F cards, 33 non-A/F cards, 17 A/non-F cards, and 13 non-A/non-F cards. In table form, the information on the cards looks like this:

                        Symptom A
                    Present | Absent
Disease  Present |    37    |   33
F        Absent  |    17    |   13

Smedslund then asked the nurses whether the information on the cards supports the claim that the disease causes the symptom. Of the 70 people with the disease, 37 have the symptom: that is, 53% have the symptom. Of the 30 people not having the disease, 17 have the symptom: 57%. The probability of having the symptom given the presence of the disease, then, is quite close to the probability of having the symptom given the absence of the disease, especially given the small sample size. So, the correct answer to Smedslund's question is that the information on the cards does not support the claim that the disease causes the symptom. But according to Smedslund, only 7% of the nurses got the right answer. 86% said that given the information on the cards, it is likely that the disease causes the symptom. 7% could not decide on an answer. Moreover, when asked to explain their answers, most of the nurses giving the wrong answer pointed to the fact that the A/F cards outnumber each of the others.
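The arithmetic behind both tables can be checked in a few lines. This is an illustrative sketch, assuming nothing beyond the counts given above; the helper function and its name are mine.

```python
def conditional_rates(fg, f_notg, nonf_g, nonf_notg):
    """From the four cells of a 2x2 table, return the two numbers that
    matter for correlation: P(G | F) and P(G | non-F).
    Staring at the first cell alone commits the present-present fallacy."""
    return fg / (fg + f_notg), nonf_g / (nonf_g + nonf_notg)

# Dizzy spells (F) and brain tumors (G): cells 160, 40, 40, 10.
dizzy = conditional_rates(160, 40, 40, 10)
# Smedslund's cards, disease F (F) and symptom A (G): cells 37, 33, 17, 13.
cards = conditional_rates(37, 33, 17, 13)
```

Both pairs come out nearly equal (80% and 80%; roughly 53% and 57%), which is why neither table supports a correlation, however large the present-present cell looks.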

121

4. When we collect data informally, we usually collect only present-present instances. Imagine that in interviewing 10 heroin addicts, we find that all but one of them used marijuana prior to using heroin. We might formulate the following argument:

J (1) 90% of heroin users had previously used marijuana.
  --------------------------------------------------------------------
  (2) Later heroin use is correlated with smoking marijuana.

But in order to determine whether or not heroin use is correlated with smoking marijuana we need to know the rate of smokers among non-users. But we have collected a sample of heroin users. We have no information about non-heroin users. This sample is no good because we have started with heroin users and looked among them for commonalities. In short, we only have enough information to fill in two cells of a table, as follows:

                          Marijuana Use
                      Present | Absent
Later       Present |     9   |    1
Heroin Use  Absent  |     ?   |    ?

Whereas a full table might look like this:

                          Marijuana Use
                      Present | Absent
Later       Present |     9   |    1
Heroin Use  Absent  |    90   |   10

The information we currently have is like finding that 90% or more of the heroin users drank milk as an infant. It is possible that non-users drank milk at the same rate. In order to assert a correlation, we need two percentages: the percentage of Fs that are Gs, and the percentage of non-Fs that are Gs. And, in order to calculate these two percentages, we need all four pieces of data. In order for marijuana use and later heroin use to be correlated, it is not enough that heroin use always or frequently occurs with prior marijuana use (as would be justified by IG in chapter 6). In addition, the percentage of smokers must be significantly different (higher, for a positive correlation; lower for a negative correlation) among users than among non-users.
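The marijuana/heroin point can be put in code as well. This sketch is mine: with the non-user row missing, the second rate simply cannot be computed, and once the (hypothetical) full table is supplied, both rates turn out identical.

```python
def correlation_rates(fg, f_notg, nonf_g=None, nonf_notg=None):
    """Return (P(G | F), P(G | non-F)).
    If the non-F row was never collected (as with a sample consisting
    only of heroin users), the second rate is None and no correlation
    claim can be made either way."""
    rate_f = fg / (fg + f_notg)
    if nonf_g is None or nonf_notg is None:
        return rate_f, None
    return rate_f, nonf_g / (nonf_g + nonf_notg)

# Sample of heroin users only: 9 used marijuana, 1 did not.
partial = correlation_rates(9, 1)
# Adding non-users (90 used marijuana, 10 did not) fills the table:
full = correlation_rates(9, 1, 90, 10)
```

partial is (0.9, None): a 90% present-present rate with no comparison rate. full is (0.9, 0.9): the percentages are identical, so the hypothetical data show no correlation at all.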

122

7.5 Correlation & Causation

1. Explanations include, or ought to include, a claim about correlation. So, the general form of an explanation (Expl.) should mention both the explainee and the explainer and a claim about correlation. It has the following generic form:

E (1) In case1, F is (also) present.
  (2) F and G are correlated.*
  ----------------------------------------
  (5) In case1, G is present.

There are two additional propositions that must be added:

  (3) G does not occur prior to F.*
  (4) There is no common explanation of F and G.*

Humans often assume that since two states are correlated, the explanation is complete. Correlation is a large part of establishing an explanation, but not even a strong correlation guarantees an explanation. Or, to use a very common slogan, "correlation is not causation". A better slogan would be "causation is not just correlation". Correlation is necessary for causation, but is not the whole story. This book has not used "cause" up to this point; it has used "reason". From this point on, however, it will also use "cause" for the reason, as well as "effect" for the explainee. In these terms, we can ask: Why is correlation not sufficient for causation?

2. The first reason is that explanatory principles cannot involve an explainee which happens prior to the reason, or, to put it in causal terms, the effect cannot happen prior to the cause. Correlation need say nothing about time. It speaks in a timeless present tense, saying that F and G appear together, where this could mean that F is prior to G, or G to F, or both are simultaneous. Note that it is not necessary that the effect (G) happens after the cause (F), only that it cannot happen prior to the cause. It is possible that two states might occur at the same time. Consider the following: Jack is so likeable because of his generosity and self-deprecation. In this example, the qualities which are the causes of Jack's likeability occur simultaneously with Jack's likeability.
The same is true when we explain what something is. For example, the explanation that water is H2O does not require that it is H2O prior to being water. (In these cases, however, it might be more natural to speak of constitution rather than causation.)


The most frequent use of "cause" is in explaining how or why something comes to be and in such cases the cause will occur prior to the effect. Telling which comes first, and so which is the cause and which is the effect, is not always easy, however. When the cause is hidden and is observed after the effect, it is easy to think that G is the cause of F. For example, the movement of trees might be thought to cause wind, or speeding ambulances might be thought to cause accidents (because we did not see the accident prior to following the ambulance). More realistically, the symptoms of a disease might be thought to cause the underlying condition (because the disease itself is hidden within the body). The idea that symptoms are effects of underlying diseases is actually a fairly sophisticated one, since the symptoms are the first thing to come to our attention. 3. Another issue concerning how we think about causes is that two states can be mutually reinforcing. To say that "F causes G." does not rule out that "G causes F.". For example, a hive of bees increases and decreases in number as plants increase and decrease in number, because bees are fed by plants and plants are pollinated by bees. Or again, research in subjective well-being (i.e. happiness) has shown that extraverted people are more likely to be married, but it's not clear whether extraversion makes marriage more likely or marriage makes extraversion more likely, or whether the two are perhaps mutually reinforcing. In such cases, it is not clear whether we should say that the two are mutual causes, or that neither is a cause of the other. Even if it is not false to say (e.g.) that an increased number of plants caused the population of bees to expand, a fuller explanation would mention their mutual dependence. 4. The most frequent problem concerning correlation and causation is that there might be some third, unmentioned, state that both states have in common which is responsible for both. 
For example, new tire-chains might be thought to cause motor accidents, since more accidents happen when tire-chains are in use. But in fact snowy weather is the cause of both. Or again, a number of medical symptoms may be correlated with one another while a single underlying condition is responsible for them all. Put generally, the issue here (and with mutual causes) is one we are familiar with by now: the total evidence rule. Our explanations and theories do not exist in a world limited only to the evidence being explicitly mentioned. Rather, they appear alongside our current understanding of how the world works. When we discover a correlation between tire-chains and accidents, we are immediately skeptical of a causal connection

on account of our knowledge of what tire-chains are and when they are used. Connecting propositions of all kinds must be consistent with our existing understanding of the world. (See also 8.6's discussion of explanatory scope.)

In general, these issues are addressed by (i) gathering a large number of cases and continuing to look for (and if possible, deliberately bring about) cases where one of the states is present without the other, and (ii) trying to think of alternate causes and collecting data on those. However, since there might always be another, unobserved, case where the two states come apart, or another cause which we have not thought of yet, we can never be certain that we have determined the cause.

5. Explanatory or causal statements are (when true) more useful than statements of correlation and are preferred for that reason. But speakers will often make a causal claim without presenting any evidence that they have thought about mutual causes or third causes. This is particularly true in media reports of studies; they use the language of causation even when the researcher claims only correlation. For example, in the article below, notice how the headline, and the first paragraph's use of the word "influence", suggests causation, while the sub-head more modestly refers only to an "association", which is all that is warranted by the study.

Does Rap Put Teens at Risk?
Study: Association Found Between Video Viewing Time and Risky Behaviors
By Sid Kirchheimer
WebMD Medical News
March 3, 2003

Teens who spend more time watching the sex and violence depicted in the "reel" life of "gangsta" rap music videos are more likely to practice these behaviors in real life, suggests one of the first studies to specifically explore how rap videos influence emotional and physical health.
After studying 522 black girls between the ages of 14 and 18 from nonurban, lower socioeconomic neighborhoods, researchers found that compared to those who never or rarely watched these videos, the girls who viewed these gangsta videos for at least 14 hours per week were far more likely to practice numerous destructive behaviors. The nature of explanation is a difficult and controversial subject. See Woodward's (2009) entry in the Stanford Encyclopedia for more and a bibliography.

125

7.6 A Summary Of Forms: Explanation

"F" etc. are types of thing. "%" is a proportion.

Correlation (Corr.)
(1) In % of cases where F is present, G is also present.
(2) In % of cases in which F is absent, G is present.
(3) The two percentages are significantly different.
------------------------------------------------------------------
(4) F and G are correlated.

Explanation/Cause (Expl.)
E (1) In case1, F is (also) present.
  (2) F and G are correlated.*
  (3) G does not occur prior to F.*
  (4) There is no common explanation of F and G.*
  ----------------------------------------------------------
  (5) In case1, G is present.

126

Chapter 8 More About Discovering Correlations

8.1 Introduction

1. The examples of explanations we have seen so far have been quite simple, in two ways. The first is that they have involved a relationship between two types of thing (F and G). Many explanations, however, involve more than one type of thing. The second is related to the first. Our examples have mostly involved at least one (near-)universal relationship between the two things. "Lack of vitamin C causes scurvy." and "Being a dog explains loving hot dogs.", for example, each involve a correlation with at least one universal relationship. But as was mentioned briefly in chapter 7, correlations need not involve any universal relationship; all that is required is that the percentages be significantly different. And so, as you shall see in this chapter, explanations can often involve correlations that are not universal or near-universal. This chapter will introduce the concept of an "INUS condition", which explicitly acknowledges the existence of additional factors, and then surveys the ways in which science tries to identify causes in the face of this complexity and in the absence of universal (cor)relation.

8.2 Sufficient Condition, Necessary Condition

1. Consider an objection to an explanation, in line 4 of this dialogue:

Jack: Did you hear that the game is cancelled?
Gill: I did. Do you know why it is cancelled?
Jack: It's because of a flooded pitch, caused by heavy rain we had yesterday.
Gill: I doubt that; the new drainage system should be able to handle the rain.
Jack: Oh yeah. I forgot about that.

Gill's challenge is accepted by Jack (in line 5) because drainage systems are relevant to his understanding of whether or not rain will flood a pitch. Thus the simple explanation of heavy rain must be made more complicated by taking the possibility of a drainage system into account. "Heavy rain explains a flooded pitch." becomes "Heavy rain, in the absence of a drainage system, explains a flooded pitch.". The explanation has become more complicated: it now has two conditions, the presence of heavy rain and the absence of a drainage system. The more complicated theory, however, is more strongly correlated with flooded pitches. Ideally, the explanation would explain all

127

instances of flooded pitches, so that we could say "If these criteria, (i) heavy rain, (ii) no drainage system, (iii) ..., (iv) ..., are satisfied, then: flooded pitch.". Similarly, "Lit matches cause fires." might be developed into a fuller explanation describing how flammable material is required, and oxygen, and that there aren't any sprinkler systems, and so on. At an even greater level of detail, the explanation might state that matches provide heat which raises the temperature of the material such that it begins a process of rapid oxidation (where an explanation of oxidation involves a lot of explanations of chemical properties).

2. Each of the individual factors or contributing causes can be understood as an INUS condition. In order to explain what is meant by "INUS condition", it will be helpful to introduce the terminology of "sufficient condition" and "necessary condition". A universal relation of the sort we have been discussing in chapter 6 (and 7) can be understood as an assertion that one (type of) thing is either a sufficient condition, a necessary condition, or both, for some other (type of) thing. Consider some examples of propositions expressing a universal relationship, such as "All dogs have tails.", and "Everything Smith says is true.". These say that if something is a dog, or is an utterance of Smith's, then the thing in question has a tail, or is true. Propositions of these sorts express a sufficient condition. They tell us that one thing (being a dog, being an utterance of Smith's) is sufficient for another (having a tail, being true). Using the letters "F" and "G" to stand for the two types of item in question, to say that one type of thing (F) is a sufficient condition for (or simply, is sufficient for) another (G) means that if an instance of F is present, an instance of G is present. Expressed very briefly, "F is sufficient for G" means "if F is, G is".

3.
To say that one thing is a necessary condition for (or simply, is necessary for) another means that if the first (F) is not the case, the other (G) is not. For example, "If one has not been bitten by a mosquito, one does not have malaria.". In this proposition, being bitten by a mosquito is necessary for contracting malaria. Or again, if having wings is a necessary condition for being a bee, and wings are not present, then a bee is not present. (Here is another example, though from a conventional situation: if 120 credit hours are a necessary condition for graduation, then if a student does not have 120 credit hours, then she cannot graduate.) Expressed very briefly, "F is necessary for G" means "if F is not, G is not".
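The two definitions can be checked mechanically against a list of observed cases. The sketch below is illustrative only; the case lists for the dog and graduation examples are invented.

```python
def sufficient(cases):
    """F is sufficient for G: in every case where F is present, G is present."""
    return all(g for f, g in cases if f)

def necessary(cases):
    """F is necessary for G: in every case where F is absent, G is absent."""
    return all(not g for f, g in cases if not f)

# Each case is a pair (F present?, G present?).
# F = being a dog, G = loving hot dogs: sufficient but not necessary.
dogs = [(True, True), (True, True), (False, True), (False, False)]
# F = having 120 credit hours, G = graduating: necessary but not sufficient.
credits = [(True, True), (True, False), (False, False)]
```

sufficient(dogs) holds but necessary(dogs) fails (a non-dog loves hot dogs); necessary(credits) holds but sufficient(credits) fails (one student has the hours yet does not graduate).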

128

4. In sum: "F is sufficient for G" means "if F is, G is". "F is necessary for G" means "if F is not, G is not". Note that these are two separate relationships, and so there are four possible combinations: it is possible that one type of thing could be sufficient but not necessary for another, or it could be necessary but not sufficient, or both necessary and sufficient, or neither. For example, being a dog is sufficient but not necessary for loving hot dogs, while completing a certain number of courses is necessary but not sufficient for graduation. And many things are not related with any frequency and are neither necessary nor sufficient; being green is neither necessary nor sufficient for being a coat. The connection between lack of vitamin C and scurvy provides an example of one thing being necessary and sufficient for another. If a person goes without vitamin C for several months, she will develop scurvy, and, if a person does not lack vitamin C, she will not have scurvy.

8.3 Contributing Factors As INUS Conditions

1. With the distinction between necessary and sufficient conditions in place, we can now go on to explore a technical but hopefully very intuitive understanding of contributing factors or partial causes.

2. Consider the following scenario: Gill has been suffering of late from occasional diarrhea, abdominal pain and bloating. She wonders what the cause might be. She looks for a cause that might explain the illness. She notices that on each occasion that she becomes ill, she has previously eaten wheat in some form. From this constant conjunction of the two she generates the proposition "Wheat (in wheat products) makes me ill.". Based on this limited evidence alone, Gill believes that eating wheat is the cause of the illness. She means that wheat is sufficient to bring about the illness and that it is necessary for the illness to occur: if she does not eat wheat products, she does not experience these symptoms.
Gill might then expand her investigation, to attempt to explain what causes these symptoms generally (that is, in other people besides herself). To expand her investigation, Gill will add the obvious fact that other people eat the same wheat products without becoming ill. In the light of this information, Gill suspects that it is not

eating wheat by itself that causes her illness, but wheat combined with some other factors, which are present in her but not in other people. As a result, she no longer thinks that eating wheat is individually sufficient to cause her illness, but that a set of factors is jointly sufficient. Her explanation has become more complex.

3. Let's pause this investigative story for a moment to consider joint sufficiency. Here is an example of joint sufficiency, involving a conventional situation: Suppose that, in order to graduate from a school, various conditions must be met: Students must take a certain number of courses, with a certain average score, submit an application for graduation, and so on. Individually, each of these is a necessary condition for graduating: if any is absent, the student cannot graduate. Further, none of them is individually sufficient, so that, for example, just having the requisite number of credit points will not ensure graduation. However, jointly (or: conjunctively), they form a set of conditions that is sufficient for graduation. The elements of a jointly sufficient set are restricted to the conditions that are a necessary part of the set. That is, in the graduation case, if any one of the set is absent, graduation cannot take place. In Gill's case, eating a wheat product is a necessary part of the set of which it is a part: if all of the other factors were present but she did not eat some wheat product, she would not become ill as a result of the other factors.

4. Let's return to Gill's investigation. She has moved from thinking wheat is individually sufficient for the illness to the idea that it along with some other factors is jointly sufficient. Based on the evidence so far, she might still believe that this set is necessary for the symptoms she experiences. But let us now suppose that Gill goes on to find cases where (either in herself or in other people) the same symptoms arise without the prior consumption of wheat products.
Since at least one other cause or set of causes is sufficient to cause the illness, wheat products cannot be necessary for the illness. (Sometimes we do find that a certain cause is necessary. Think of growing plants. We notice, over many, many cases, that in every single case where a plant is successful, it has had water. There is thus a strong case for thinking that water is (individually) necessary for flourishing plants.) Gill's final position on the relationship between wheat and the illness is that (i) wheat is not sufficient by itself, but that (ii) wheat is a necessary part of a set of factors which are sufficient, and that, since there are other sets of factors which are sufficient, (iii) the set wheat-plus-other-factors is not necessary.

5. Consider another example: A fire broke out last Tuesday at 3:00 a.m. in the new house on the corner, but was extinguished before the house had been completely destroyed. Experts investigated the cause of the fire, and in the end concluded that it was caused by an electrical short-circuit in the kitchen. In saying that the short-circuit in the kitchen caused the fire, the experts are not saying that the short-circuit was a necessary condition for the house's catching fire last Tuesday at 3:00 a.m. To say that some cause is a necessary condition for an effect means that without the presence of the cause, the effect would not take place. The experts know from experience that other things (short-circuits elsewhere in the house, deliberate arson using a match and gasoline, a lightning strike) could have led to a fire. Moreover, they are not saying that the short-circuit by itself was sufficient to cause the fire. To say that a cause is a sufficient condition for an effect means that the presence of the cause would bring about the effect. The experts are well aware that the fire would not have broken out if there had not been any flammable material in the kitchen, or if there had been an efficient automatic sprinkler in the kitchen. So, the experts think that the short-circuit caused the fire, yet they do not think it was necessary for the fire, or sufficient. Rather, a set of factors was, together, sufficient for the fire.

6. When we say that one thing is a cause (or a contributing factor or a partial cause) of another we often mean that it is a necessary part of a group of factors that is jointly sufficient, and that this group is not the only jointly sufficient cause. Using the language of sufficient and necessary conditions, we can also say that one thing is an INUS condition for another. We owe the idea to J. L.
Mackie, who discusses the example of "the short-circuit caused the fire" as follows:

there is a set of conditions (of which some are positive and some negative), including the presence of inflammable material, the absence of a suitably placed sprinkler, and no doubt quite a number of others, which combined with the short-circuit constituted a complex condition that was sufficient for the house's catching fire - sufficient, but not necessary, for the fire could have started in other ways. Also, of this complex condition, the short-circuit was an indispensable part: the other parts of this condition, conjoined with one another in the absence of the short-circuit, would not have produced the fire. The short-circuit which is said to have caused the fire is thus an indispensable part of a complex sufficient (but not necessary) condition of the fire. In this case, then, the so-called cause is, and is known to be, an insufficient but necessary part of a condition which is itself unnecessary but sufficient for the result. The experts are saying, in effect, that the short-circuit is a condition of this sort, that it occurred, that the other conditions which conjoined with it form a sufficient condition were

also present, and that no other sufficient condition of the house's catching fire was present on this occasion. (Mackie (1965) p. 245, emphasis Mackie's.)

The short-circuit is the cause of the fire because, first, a short circuit in the kitchen is an INUS condition for a fire, that is, it is an Insufficient but Necessary part of a joint condition which is Unnecessary but Sufficient for the fire: the short-circuit, plus the presence of flammable material such as dust and hair, plus the absence of a sprinkler, plus ..., jointly make up a sufficient, but not necessary, condition for fires; and the short-circuit is a necessary component of that sufficient but not necessary condition, in that the presence of dust and hair, plus the absence of a sprinkler, plus ..., are not together sufficient for fires. Second, there was in fact a short-circuit in the kitchen, as well as, third, the other factors which, together with a short-circuit, are sufficient for a fire. Fourth, no other jointly sufficient condition for a fire was present. That is, if the short-circuit (with its additional factors) had not occurred, the fire would not have occurred.

On this understanding, the highlighted state of affairs (F) is only one part of a constellation of factors which together cause the resulting state of affairs (G). If one or more of these other factors is absent in a specific case, F will not bring about G. Thus, it is a fallacy to think that F does not cause G if there is even one case in which F is present but G is not present. Consider the following example:

(1) Al smoked three packs a day for ninety years and never got lung cancer.
J -----------------------------------------------------------------------------------------------
(2) Smoking doesn't cause lung cancer.

The fallacy here is a failure to remember that many causes are INUS conditions. Despite Al's health, smoking cigarettes might still be a cause of lung cancer in conjunction with other factors, some of which were absent in Al's case.
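Mackie's definition can be put in crude truth-functional terms. The sketch below is only an illustration: it assumes a toy model with just two jointly sufficient conditions for a fire (short-circuit plus flammable material plus no sprinkler; arson plus flammable material), and checks each part of the INUS definition against it.

```python
# Toy model: a fire occurs iff at least one jointly sufficient
# condition obtains. Two such conditions are assumed here:
#   (short-circuit AND flammable material AND no sprinkler)
#   (arson AND flammable material)
def fire(short_circuit, flammable, no_sprinkler, arson):
    return ((short_circuit and flammable and no_sprinkler)
            or (arson and flammable))

# Insufficient: a short-circuit alone does not guarantee a fire.
assert not fire(short_circuit=True, flammable=False, no_sprinkler=False, arson=False)

# Necessary part of its conjunct: the other parts of that condition,
# in the absence of the short-circuit (and of arson), produce no fire.
assert not fire(short_circuit=False, flammable=True, no_sprinkler=True, arson=False)

# Sufficient: the whole conjunct does guarantee a fire.
assert fire(short_circuit=True, flammable=True, no_sprinkler=True, arson=False)

# Unnecessary: the fire could have started another way (arson).
assert fire(short_circuit=False, flammable=True, no_sprinkler=False, arson=True)

print("the short-circuit is an INUS condition in this model")
```

Each assertion corresponds to one letter of INUS: the short-circuit is an Insufficient but Necessary part of a condition which is Unnecessary but Sufficient for the fire.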
7. Since all of the factors of the joint condition are INUS conditions, all of them are causes of the fire. For example, the dust and hair (the flammable material) is not sufficient by itself for a fire and is a necessary part of the joint condition - had it not been there, the overheating from the short circuit would have had no effect. Why then do we not call it the cause of the fire? One answer, which applies to cases such as the short-circuit, is that the cause is the state of affairs which has most recently, prior to the effect in question, come into being. The short-circuit was a new state while the dust and hair had been present long before. Thus, the cause of the house burning down was the short-circuit, and in general, the cause is the most recently occurring INUS condition in


the only jointly sufficient condition present prior to G. In a sense, the other conditions are taken out of the equation and the specific cause (the short circuit) is considered as individually sufficient and individually necessary. (For further discussion of the INUS theory of causes, and other theories, see Hitchcock's (2002) entry in the Stanford Encyclopedia on probabilistic causation.)

8.4 Randomized Experimental Studies

1. Unfortunately, the full set of conditions which make up a jointly sufficient condition is often not explicitly spelled out. Rather, speakers will refer to the contributing factor they have knowledge of, and then gesture towards the other conditions by saying "etcetera" or some similar word. For example, "If these criteria - heavy rain, no drainage, not indoors, etcetera - are satisfied, then, flooded pitch." The "etcetera" also goes by the name ceteris paribus: "all other things being equal", or "assume normal conditions". The other conditions are often not spelled out because they are not known. Not knowing what the other factors are prevents us from arriving at a universal relation or correlation. This means that we cannot use claims about partial or contributing causes in IS. If we think "Smoking causes lung cancer." we might only mean that smoking in the presence of a variety of factors, some of which are unknown, is sufficient for cancer. If we then consider Gill and find that she smokes, we cannot safely infer that she has or will develop lung cancer; we do not know what the other factors are that are necessary to form a set which is sufficient for lung cancer, and so, we do not know Gill has them. We cannot, therefore, cogently conclude that she will develop lung cancer.

2. Partial causes are nonetheless correlated with the explainee. As described in chapter 7, two things can be correlated without the relationship being universal or near-universal.
To establish a correlation requires showing that whenever an instance of F (the suspected reason or cause) is present, an instance of G (the explainee or effect) is more likely to be present than when F is absent. Consider the relationship between smoking cigarettes and the later development of lung cancer. It is not the case that smoking will, by itself, bring about lung cancer, as only about 16% of smokers develop lung cancer. However, we can see (assuming that we have a large, unbiased sample) that smoking is positively correlated with developing lung cancer, since the chance of developing it as a smoker is higher than for non-smokers: 1.3% of non-smokers develop lung cancer. Smoking raises the chance of developing lung cancer. This leads us to say, when we speak appropriately, that smoking is only "a cause" or a "contributing factor" or an "INUS condition" of developing lung cancer. What we mean when we say that smoking cigarettes is a cause of, or a partial cause of, or a contributing factor for, or an INUS condition for, lung cancer is therefore weaker than the sense of "explanation" or "cause" in chapter 7, which involved a universal or near-universal relationship.

3. The possibility of multiple factors, some of which we are ignorant of, poses a difficulty for gathering an unbiased sample. A secure way to determine a correlation when there are likely to be many (unknown) factors is by a randomized experimental study. A randomized experimental study is designed to counteract the possibility that the sample selected is biased towards some other factor that we do not know about, which, rather than the factor we are interested in, might be affecting the prevalence of G. A randomized experimental study has four key steps. First, the scientist gets a big unbiased sample of the population. Second, the scientist randomly divides the sample into two groups, called the "experimental group" and the "control group" respectively. Third, the scientist exposes the experimental group, but not the control group, to F, the suspected cause. Fourth, the scientist looks for differences between the two groups in terms of G, the suspected effect: when and only when there are significant differences, the scientist concludes that F is correlated with G. Since the two groups will be alike because they were selected randomly, any difference in the outcome (G) can reliably be attributed to exposure to the suspected cause, F. Note, however, that a large, random sample from a single population might still not be representative of a yet larger population.
For example, a large, random sample of swans from Europe will not turn up any black swans. And psychology and other social science experiments have been criticized for basing samples exclusively on undergraduate populations and then generalizing to all adults in the nation, or on samples of WEIRD (Western, educated, industrialized, rich and democratic) adults and then generalizing to all adults. Consider, for example, the following excerpt from a report by the Congressional Office Of Technology on a study in Canada in the 70s:

The positive feeding experiments were conducted over two generations. Rats of the first generation were placed on diets containing saccharin at the time of weaning. These animals were bred while on this diet, and the resulting offspring were fed saccharin from the moment of conception until the termination of the experiment. Each animal of the second generation was examined for cancers at its death or at its sacrifice after 2 years on the experiment. Each experiment had appropriate control groups that did not ingest saccharin. Compared to control animals, the saccharin-fed animals showed an excess of bladder tumors. These differences were sufficiently convincing to lead to the conclusion that saccharin caused cancer in rats. (From Giere (1997), p. 223)

The results, in table form, are as follows:

Generation   Dose   Cancer
1st          0%     1/74
1st          5%     7/78
2nd          0%     0/89
2nd          5%     14/94
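As a quick check on the arithmetic, the table's counts can be converted to rates (a sketch; the counts are those reported in the study):

```python
# Cancer counts from the saccharin study, keyed by (generation, dose):
# each value is (animals with tumors, animals in group).
results = {
    ("1st", "0%"): (1, 74),
    ("1st", "5%"): (7, 78),
    ("2nd", "0%"): (0, 89),
    ("2nd", "5%"): (14, 94),
}

for (gen, dose), (cases, total) in results.items():
    rate = 100 * cases / total
    print(f"{gen} generation, {dose} saccharin: {rate:.2f}% with tumors")
```

In each generation the 5% (experimental) group shows a markedly higher tumor rate than the 0% (control) group.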

Consider, for illustration, just the 1st generation rats. Presumably they were an unbiased sample of the population of rats, and presumably the researchers divided them into the experimental group (the group on diets with 5% saccharin) and the control group (the group on diets with 0% saccharin) via a random process. And employing the method of concomitant variation, notice that there is a big difference between the two groups in terms of cancer: 9% of the rats in the experimental group contracted cancer, compared with only 1.3% of the rats in the control group.

4. The saccharin study is typical of randomized experimental studies in which the scientist concludes that F causes (or more cautiously, is correlated with) G: some of the subjects in the control group have G, thus showing that F is not necessary for G; and some of the subjects in the experimental group do not have G, thus showing that F is not sufficient for G. And yet the scientist concludes that F causes G, even though he is well aware that F is neither necessary nor sufficient for G. That causes are INUS conditions makes sense of this quite nicely, for the results in most randomized studies in which the scientist concludes that F causes G warrant the claim that F is an INUS condition for G. Suppose, for example, that there are just three sufficient conditions for getting lung cancer:

(1) smoking cigarettes + A + B (where A and B are certain physiological characteristics)
(2) working in a coal mine + C + D
(3) smoking cigarettes + H + I

(Note that smoking cigarettes is an INUS condition for lung cancer.) Suppose further that a researcher gets a big unbiased sample of humans, and that we randomly divide it into two groups. And suppose, finally, that a difference is then introduced: the subjects in the first group are made to smoke cigarettes. What would the results look like, given these suppositions? Well first, it would be highly likely that (i) the groups have roughly the same number of subjects with A and B, (ii) the groups have roughly the same number of subjects who work in a coal mine and have C and D, and (iii) the groups have roughly the same number of subjects with H and I. They also have roughly the same number of subjects with other, unknown, factors. After all, the dividing was done randomly. Moreover, (given that this is so) there would be a lot more subjects in the experimental group with lung cancer than subjects in the control group with lung cancer. For since the subjects in the experimental group smoked cigarettes while the subjects in the control group did not, the subjects in the experimental group with A and B would get lung cancer, but the ones in the control group would not; and for the same reason the subjects in the experimental group with H and I would get lung cancer, but the ones in the control group would not. (The two groups have roughly the same number of subjects who work in a coal mine and have C and D, and so the numbers there would for the most part cancel out.)

The general lesson, then, is twofold. First, since the hypothesis that F is an INUS condition for G potentially explains why the number of Gs in the experimental group is significantly higher than the number of Gs in the control group, the fact that in fact the number of Gs in the experimental group is significantly higher than the number of Gs in the control group serves as a good reason for thinking that in fact F is an INUS condition for G (assuming there is no better available explanation).
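This supposition can be simulated. The sketch below is only an illustration: the trait frequencies and population size are made-up, and the three jointly sufficient conditions are the hypothetical ones listed above. Subjects get traits at random, are split into two groups at random, and only the experimental group smokes:

```python
import random

random.seed(0)

def subject():
    # Made-up trait frequencies; the traits A, B, C, D, H, I and
    # coal-mining are assigned independently of group membership.
    return {t: random.random() < p
            for t, p in [("A", .3), ("B", .3), ("C", .3), ("D", .3),
                         ("H", .3), ("I", .3), ("mine", .1)]}

def lung_cancer(s, smokes):
    # The three hypothetical jointly sufficient conditions from the text.
    return ((smokes and s["A"] and s["B"]) or
            (s["mine"] and s["C"] and s["D"]) or
            (smokes and s["H"] and s["I"]))

population = [subject() for _ in range(10000)]
random.shuffle(population)
experimental, control = population[:5000], population[5000:]

exp_cases = sum(lung_cancer(s, smokes=True) for s in experimental)
ctl_cases = sum(lung_cancer(s, smokes=False) for s in control)
print(exp_cases, ctl_cases)  # far more cases in the experimental group
```

The coal-mine condition produces roughly equal numbers of cases in both groups and cancels out, while the two smoking conditions produce cases only in the experimental group, which is exactly the pattern described above.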
Second, given this, the view that causes are INUS conditions makes sense of the scientist's reasoning in randomized experimental studies in which he concludes that F is a cause of G.

8.5 Controlled Experiments

1. But it is a goal of science to make whatever conditions fall within the ceteris paribus clause explicit, by attempting to isolate the conditions which are (jointly) sufficient (and if possible, necessary) for some phenomenon. Science attempts to pin down which are the factors which will allow us to predict with greater precision whether or not (e.g.) lung cancer will develop. We might discover in the future that those who smoke and (say) have a certain gene will near-universally develop lung cancer. An instantiation syllogism (IS) made on the basis of this theory will be cogent.

2. The method of controlled experiment is used to isolate causes, but can be applied only where the conditions which give rise to G are repeatable with great uniformity with and without the suspected cause. For example, a portion of a chemical can be added, or not, to a uniform sample of another chemical. We can thus design experiments, or look for data, in which the presence and absence of suspected causes varies. In situations where this method is appropriate, we can hope to approach a (jointly) sufficient condition and even a necessary condition.

3. Controlled experiments (where "control" means that one variable is manipulated deliberately by the investigator) attempt to approximate the method of difference, but are often best thought of as involving the method of double agreement, even when the number of cases is quite small. Describing what was perhaps the first controlled experiment, to discover the cause of scurvy, James Lind wrote, "On the 20th of May, 1747, I took twelve patients [suffering from] scurvy on board the Salisbury at sea. [They ate] one diet common to all." He then varied one element of their diet: two sailors drank cider, two vinegar, two sea-water, two "elixir vitriol", two ate a mixture of garlic, mustard-seed, horseradish and other herbs, and two ate oranges and lemons. The sailors taking the citrus fruit had recovered substantially within six days, while the other unfortunate sailors continued their treatments for three weeks with little effect (A Treatise of the Scurvy 1772 [1753] p. 149-153).
Although Lind took pains to find 12 cases that were similar, and fed them all a common diet, it is not clear that we can say that he used a controlled experiment, and this remains true even if we compare a single sailor who ate fruit with any of the sailors who did not.

4. One difficulty in establishing a universal relationship or correlation is that many states are impossible to duplicate. This is especially true in the historical sciences. For example, if we say that a particular dropped pass caused the team to lose, we mean that if the pass had not been dropped, the team would have won. But we cannot go back and replay the game exactly as it was up to that point. It is often the case that we simply do not have control over many of the factors involved, and so can run neither a randomized study nor a controlled experiment. We simply have to patiently collect data as it arises naturally. And even when a situation is repeatable, cost and time play a factor. Often, we cannot do better than a single, relatively small, randomized survey. (For more on the scientific method and controlled experiments, conduct an internet search. Among printed books, John Norton's (1998) How Science Works can be recommended.)

8.6 Inference To The Best Explanation (IBE)

1. In addition to detecting correlations between states of affairs, scientists also suggest new states which, although they have not yet been observed and/or tested for correlation, might function as an explanation for some explainee. Such reasoning is called an inference to the best explanation (IBE). The general form of IBE looks like this:

(1) One or more instances of G.
(2) G would be explained by F.
(3) Of the available explanations for G, F is the best.(*)
J ----------------------------------------------------------------
(4) F.

(2) presents F as an explanation. (3) claims that (2)'s explanation is the best of the available explanations, including those we might already accept or be working with. Arguers who make arguments in the form of IBE usually (but not always) include the claim that the explanation mentioned is the best one available. Thus there are no additional premises to be added to the basic argument. When premise (3) is missing, it should be added with an asterisk. Since there might be explanations that have simply not been thought of at all, IBE is at best cogent. Many scientific beliefs had long lives before being replaced.

2. How scientists come up with novel explanations (whether a new type of thing or a new connection between types of thing) is a difficult and controversial topic. (It is possible that analogy, discussed in 9.4, is always involved.) The most we can say here is that there are certain criteria that new explanations must meet if they are to become the working theory of some phenomenon.
The first two criteria we will discuss concern how well the explanation is integrated with other data and with explanations for other phenomena and, in general, with whatever scientific knowledge is available. The third and fourth criteria concern the quality of the explanation itself.


3. Any candidate explanation, if it is to be acceptable or preferred over another explanation, must be (more) consistent with currently accepted data and explanations. That is, the total evidence requirement must be enforced. Suppose, for instance, that the light in a hallway is not working. That the electric power is out is one possible explainer, based on a generalization asserting that electrical power lights bulbs; that the light bulb has burned out is another, based on a generalization that bulbs only light when not burned out. The explainer "the power is out", however, conflicts with the fact that the light in the living room is on, since the explanation asserts that lights (generally) connected to a source of power go out when the source gives out. Hence, in this case, the bulb explanation conflicts with other beliefs to a lesser degree than the power-failure explanation does, and so the bulb explanation is better. We can test competing explanations by adding new cases which might falsify one of them.

As an example of inconsistency with pre-existing theoretical commitments, consider the following passage, which asserts that Ptolemy's theory of planetary motion is superior to a rival theory from Copernicus:

[Copernicus' and Ptolemy's theories] are in agreement with the observed phenomena. But Copernicus's theory contains a great many assertions which are absurd. He assumed, for instance, that the earth is moving with a triple motion, which I cannot understand. For according to the philosophers, a simple body like the earth can have only a simple motion. Therefore it seems to me that Ptolemy's geocentric doctrine must be preferred to Copernicus's. (Engel (1994) p. 144, citing Clavius' 1581 Commentary on Spheres)

Here is another example, this time from Stephen Jay Gould:

Orchids manufacture their intricate devices from the common components of ordinary flowers, parts usually fitted for very different functions.
If God had designed a beautiful machine to reflect his wisdom and power, surely he would not have used a collection of parts generally fashioned for other purposes. Orchids were not made by an ideal engineer; they are jury-rigged from a limited set of available components. Thus, they must have evolved from ordinary flowers. Thus, the paradox, and the common theme of this trilogy of essays: Our textbooks like to illustrate evolution with examples of optimal design - nearly perfect mimicry of a dead leaf by a butterfly or of a poisonous species by a palatable relative. But ideal design is a lousy argument for evolution, for it mimics the postulated action of an omnipotent creator. Odd arrangements and funny solutions are the proof of evolution - paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce. No one understood this better than Darwin. Ernst Mayr has shown how Darwin,


in defending evolution, consistently turned to organic parts and geographic distributions that make the least sense. (Gould (1980) p. 670)

The creation explanation, acknowledges Gould, is a possible explanation for things with an optimal design, for the world (and thus everything in it) would be optimally designed if it were the work of an omniscient, omnipotent, and omnibenevolent being. The problem, he argues, is that it conflicts with things with a non-optimal design, things like orchids and pandas. The evolution explanation, in contrast, fits with things with an optimal design and things with a non-optimal design. Thus, concludes Gould, in this respect at least the evolution explanation is better than the creation explanation.

4. Second, unificatory power (or: scope) is important to explanations. When an explanation can be used to explain not only the explainee involved in the present case, but other different types of explainees (along with other explainers) and its rivals cannot, that explanation is said to have unificatory power. If one explanation has more scope or unificatory power than its rivals do, and if other things are equal, then the first is better than the others. More precisely, if the first explains more kinds of phenomena than others, and if other things are equal, the first is better than the others. Darwin, for instance, touts his theory as having a high degree of unificatory power:

It can hardly be supposed that a false theory would explain, in so satisfactory a manner as does the theory of natural selection, the several large classes of facts above specified. ((1962), p. 476)

Philip Kitcher explains:

The questions that evolutionary theory has addressed are so numerous that any sample is bound to omit important types. The following short selection undoubtedly reflects the idiosyncrasy of my interests. Why do orchids have such intricate internal structures? Why are male birds of paradise so brightly colored?
Why do some reptilian precursors of mammals have enormous 'sails' on their backs? Why do bats typically roost upside down? Why are there no marsupial analogues of seals and whales? Why is the mammalian fauna of Madagascar so distinctive? Why did the large, carnivorous ground birds of South America become extinct? Why is the sex ratio in most species one to one (although it is markedly different in some species of insects)? Answers to these questions, employing Darwinian histories, can be found in works written by contemporary Darwinian biologists. Those works contain answers to a myriad of other questions of the same general types. Darwinian histories are constructed again and again to illuminate the characteristics of contemporary organisms, to account for the similarities and differences among species, to explain why the forms


preserved in the fossil record emerged and became extinct, to cast light on the geographical distribution of animals and plants. (Kitcher (1982), p. 50-1)

No other competing theory, the idea goes, has as much scope as evolutionism, and so at least in this respect evolutionism is the best of the available explanations. Aristotle, to give another example, had conjectured that the planets and stars were made of a different substance (a fifth element) in order to explain their circular motion, as opposed to the straight-up-and-down motion of objects on earth. His theory was superseded by Newton's, whose theory of gravity could explain both kinds of motion. Newton, in turn, was superseded by Einstein, again in part on grounds of unificatory power. Fogelin and Sinnott-Armstrong put the point thus:

One of the main reasons why Einstein's theory of relativity replaced Newtonian physics is that Einstein could explain a wider range of phenomenon, including very small particles at very high speeds. ((1997), p. 267)

If an explanation is to be accepted despite the fact that it contradicts existing data or explanations, it must be able to explain the data on which the existing explanations were based. This will likely mean that the explanation has unificatory power.

5. Although our understanding of the world advances by proposing, discovering and inter-relating different types of thing, explanations can also be judged on considerations of modesty. If one explanation is more modest than its rivals, and if other things are equal, the first is to be preferred. Modesty is also known as Ockham's Razor: "entities should not be multiplied unnecessarily" or in other words, explanations should involve as few types of item as are necessary. Suppose, for example, that Jack's car keys are on the kitchen table, and that there is music coming from his room.
That Jack is home and put his keys there and turned on his radio is one potential explainer, and that Jack is home and put his keys there and turned on his radio and has on red pants is another. The second explanation involves red pants. While it might be true that Jack has on red pants, wearing red pants is an unnecessary part of the explanation. (In terms of necessary and sufficient conditions from chapter 8: there are jointly sufficient conditions of which Jack's wearing red pants is a part, but in no such condition is it a necessary part.)

6. Finally, if an explanation is simpler than its rivals, then, other things being equal, it is to be preferred. Simplicity is difficult to illustrate because it is almost always


accompanied by concerns about modesty (more complex explanations will typically involve more types of entity), and so cases with the same entities, or the same number of equally plausible entities, are rare. Conspiracy theories provide good examples of explanations which are overly complex and immodest. The claim that the moon landings were faked requires a host of additional items, such as secret sound-stages and desires to fool the population of the world, when a simpler (by comparison) explanation is available, involving the actual rockets and the actual moon. One example of equally modest explanations which differ in terms of simplicity might be Ptolemy's earth-centered view of the cosmos and Copernicus' sun-centered view. Both explanations have the same entities (the planets, sun and moon) and a principle about which body is at the center, but the mathematics required by Ptolemy's system is much more complex than the mathematics required by Copernicus' explanation.

Here is an abstract example: suppose that you are studying the relationship between two variables, and that you get the following data:

P = 1    Q = 2
P = 3    Q = 6
P = 4    Q = 8
P = 11   Q = 22
P = 33   Q = 66

The principle that 2P = Q might be part of one possible explanation, in that 2(1) = 2, 2(3) = 6, 2(4) = 8, 2(11) = 22, and 2(33) = 66. But notice that Q = 2P + (P-1)(P-3)(P-4)(P-11)(P-33) is another possible description of the relationship between P and Q: for example, when P = 11, Q = 2(11) + (11-1)(11-3)(11-4)(11-11)(11-33) = 22 + (10)(8)(7)(0)(-22) = 22 + 0 = 22. Both methods of calculation arrive at the same answers, but the explanation making use of the first principle is simpler, and hence better, than an explanation using the second one.
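The two candidate principles can be checked mechanically. The sketch below (the function names are ours, not the text's) confirms that both fit every observed data point exactly, even though they are different hypotheses: at an unobserved value such as P = 2 they disagree.

```python
data = [(1, 2), (3, 6), (4, 8), (11, 22), (33, 66)]

def simple(p):
    # First principle: Q = 2P
    return 2 * p

def complicated(p):
    # Second principle: Q = 2P + (P-1)(P-3)(P-4)(P-11)(P-33)
    extra = 1
    for root in (1, 3, 4, 11, 33):
        extra *= (p - root)
    return 2 * p + extra

# Both principles fit the observed data exactly: the product term
# vanishes at every observed P, since each observed P is a root of it.
for p, q in data:
    assert simple(p) == q and complicated(p) == q

# ...but they are not the same hypothesis: at P = 2 they diverge.
print(simple(2), complicated(2))
```

The product term is zero at exactly the five observed values of P, which is why the data alone cannot distinguish the two principles; simplicity is what breaks the tie.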

142

8.7 A Summary Of Terminology & Forms

Sufficient Condition, Necessary Condition and INUS Condition

F is sufficient for G: if F is, G is
F is necessary for G: if F is not, G is not
F is an INUS condition for G: F is an Insufficient but Necessary part of a joint condition which is Unnecessary but Sufficient for G

Inference To The Best Explanation (IBE)

(1) G1
(2) G would be explained by F.
(3) Of the available explanations for G, F is the best.(*)
J ----------------------------------------------------------------
(4) F.


Chapter 9 Arguments Using Correlations

9.1 Introduction

1. This chapter covers three different ways in which our knowledge of correlations, causes, explanations and theories can be used in further reasoning. The first of the three is Instantiation Syllogism (IS), which we saw in chapter 6. When a correlation involves at least one (near-)universal generalization, it can be used in IS, which in these cases can also be called Inference To An Explainee (IE). The other two are Inference To The Most Likely Explainer (ML) and Argument By Analogy (AAn). Although these are presented here as arguments, the second and third (ML and AAn) often serve a role, as attempts at explanations, in suggesting lines of investigation in our attempts to gain a better understanding of the world. They can perform this function even when, as arguments, they would be incogent.

9.2 Inference To An Explainee (IE)

1. Correlations which involve at least one universal or near-universal relation can be used not only in explanations but also to provide cogent (and in rare cases even valid) arguments. We saw IS in chapter 6:

(1) In case1, F is instantiated.
(2) In roughly % of cases which are instances of F, G is also instantiated.
(3) Case1 is believed to be a typical instance of F with respect to G.*
J ------------------------------------------------------------------------------------------
(4) In case1, G is also instantiated.

In premise (2), % is universal or near-universal. A speaker might phrase this premise in terms of explanations or causes, since correlations can involve universal or near-universal connecting propositions. If the "additional" third premise is missing from the original passage, insert it in the standard form with an asterisk. When the generalization in (2) comes from an explanation rather than merely a relationship the argument can also be called an inference to an explainee (IE).
The rest of the explanation need not be given as part of the argument, and so it will not be possible to tell whether the speaker is working from an explanation on this ground alone. However, IE also adds to IS a claim about the temporal order of F and G: G does not occur earlier than F (and typically follows F). The temporal aspect of the situation is typically built into the generalization in (2). Consider the following causal example:

Gill, pulling into her driveway after a long day at the office, points the garage-door opener at the garage door in the way she always does, and hits the button. She notes that the garage door's opening is a potential explainee. She then concludes on this basis that the garage door is opening. From the correlation, only one relationship is needed for the argument; the information about the other relation is discarded. The generalization used has a temporal element: the door opens after the button is pushed. Gill's inference is thus an instance of IE: she puts her specific action (i.e., her just now hitting the button on the opener while pointing it at the door) together with the (in this case, causal) generalization relating pointing the opener and pushing its button to the opening of the door, to infer the conclusion (i.e., the door's opening). Gill is assuming that the situation is typical. The typicality condition (proposition (3)) is here expressed in terms of "interfering factors". In standard form, Gill's inference runs as follows:

(1) Almost every instance of hitting the button on the opener while pointing it at the garage door is also an instance when the garage door opens.
(2) I [Gill] just now pointed the garage-door opener at the garage door, and hit the button.
(3) There are believed to be no interfering factors.*
J ------------------------------------------------------------------------------------------------------
(4) The garage door will open.

And in general IE is:

(1) In case1, F is instantiated.
(2) In roughly % of cases which are instances of F, G is also instantiated.
(3) G is not instantiated prior to F.
(4) Case1 is believed to be a typical instance of F with respect to G.*
J ------------------------------------------------------------------------------------------
(5) In case1, G is (will be) also instantiated.

Even if a correlation involves two universal or near-universal relationships, one of them is irrelevant to the argument.
For example, a speaker making an argument that Smith does not have scurvy might say:

Smith's illness can't be scurvy, since he has been taking in vitamin C, and lack of vitamin C is the cause of scurvy.

The connecting premise here is a causal statement concerning scurvy and vitamin C. In analyzing this argument, however, we need only one of the universal propositions that the correlation involves. In standard form:


(1) Smith has been taking in vitamin C.
(2) No one taking in vitamin C gets scurvy in the following period.
J -----------------------------------------------------------------------------------
(3) Smith does not have scurvy.

Note also that the temporal element has been included in (2), and also that, since in this case the connecting proposition is universal ("no one"), there is no need to include a proposition about typicality.

3. For instances of IE to be sound, the premises, including the premise about typicality or the lack of interfering factors, should be true. When thinking about interfering factors you are trying to think of additional factors which might mean that the explanation does not apply in this particular case. In the garage-door example, premise (1) asserts "Hitting the button on this type of opener while pointing it at a garage door of this type causes the garage door to open.", but perhaps on this particular day the battery in the opener has died, or a spring in the door has broken. If you know of specific interfering factors of which the speaker is unaware, the argument is not cogent. Consider the following scenario, for example:

Jurors enter a jury room and one of them goes to turn on the fan. He presses the switch, expecting the fan to start running. However, the fan is on the same circuit as the light. The fan only runs when the light is on, but the light has not yet been turned on. (Based on a scene from 12 Angry Men)

It is true that the juror pressed the switch, and it is further true that the fan beginning to run is a potential explainee for this action. But it is also true that there are interfering factors: the fan shares the same circuit as the light and the light is off. This greater knowledge of the connection between the pressing of the switch and the movement of the fan allows us to doubt and refute the inference. Because of this knowledge, the juror's inference that the fan will begin to run when he presses the switch is unsound.

4.
There might be multiple explainees for a given explanation, and any one (or more) can be the conclusion to an instance of IE. Imagine, for example, that Henry drinks 15 bottles of Boddington's (beer) in one evening and that we have identified a number of generalizations describing the effect of alcohol on the human body. Applied to Henry's case these might include:

(1) Henry's blood-alcohol level will be very high.
(2) Henry's hand-eye coordination will be impaired.
(3) Henry's speech will be at least a little bit slurred.


(1) is a potential explainee, in that if a human were to have 15 bottles of Boddington's, then as a result that person's blood-alcohol level would be very high. (2) is a potential explainee, in that if a human were to have 15 bottles of Boddington's, then as a result that person's hand-eye coordination would be impaired. And the same for (3). Given that Henry actually has drunk 15 bottles of Boddington's, an arguer can infer any or all of the three explainees.

9.3 Inference To The Most Likely Explainer (ML)

1. If there are a number of pre-existing explanations available for a given explainee, but as yet no information which would allow us to select a specific cause, we pursue the most likely. If this is proven not to be the case, we move on to the next most likely. The reasoning process exhibited here is a repeated application of inference to the most likely explainer.

2. As an example, consider the opening lines to Wilco's 'A Shot In The Arm', from the album Summerteeth: The ashtray says/You were up all night. Presumably, the singer has found an ashtray full of cigarette butts in the morning. He reasons that this would be explained if someone (the "you" that the singer sings to) had stayed up all night smoking. Other explainers are possible. It is possible, for example, that "you" collected butts from various places and put them in the ashtray. But this is highly unlikely. The next example is based on a scenario by Richard Fumerton:

Jack, vacationing at a beach resort in Mexico, gets up early one morning to see the sunrise, and comes across some shoe prints as he walks the beach. He comes up with three potential explanations:

(1) prints are made by people walking on beaches in shoes and a person walked the beach recently
(2) prints are made whenever Jimmy Carter walks on a beach in shoes and Jimmy Carter walked the beach recently
(3) prints are made when a cow wearing shoes walks on a beach and a cow wearing shoes walked the beach recently.

(See Fumerton (1980) pp. 591-2.)
Explanation (1) involving a person walking on the beach is intuitively more common than both the Jimmy Carter explanation and the walking-cow explanation. We


are very familiar with shoe-prints on beaches and we know that the first explainer is much more likely than the others.

3. Consider the following example from a famous fictitious master of such reasoning, Sherlock Holmes:

A horse is stolen from a stable in the middle of the night. Holmes infers from the fact that the stable dog did not bark that the horse was stolen by someone familiar with the dog, since dogs do not bark when confronted with people they know.

There are other explainers for why the dog did not bark, such as that a stranger threw the dog a juicy steak which was laced with a sleeping powder, and so Holmes cannot be certain that he has made the right inference to the reason, though he might be confident, since the vast majority of cases of dogs failing to bark at night when people are around involve the dog knowing the person. Holmes is thus relying on a generalization inferred from lots of experience with how dogs react around strangers and non-strangers, and reasoning as follows:

(1) The dog did not bark in the night time when the horse was moved by some person.
(2) A dog's failure to bark in the night time when a horse is being moved by a person is explained by the person moving the horse being known to the dog.
(3) Of the explanations for (1), the explanation in (2) is the most likely.
J ------------------------------------------------------------------------------------------------------
(4) The person who moved the horse was known to the dog.

Similarly, in the same story (Silver Blaze) a curry has been used to disguise opium which put the stable boy to sleep. It is possible that the curry was made coincidentally and then used by the thief, but Holmes thinks it is much more likely that the curry was made deliberately for this purpose (and this too suggests that someone at the stables stole the horse).

4. This type of inference is called an inference to the most likely explainer (ML). The general form of ML is as follows:

(1) G1.
(2) G is explained by F.
(3) Of available explanations for G, F is the most likely.(*)
J --------------------------------------------------------------------
(4) F1.

5. ML involves a number of alternative explainers for a given explainee (as referenced in premise (3)), with one being the most common. There are often multiple explainers for a given explainee. Fires can come about in a variety of ways, for example.


This means that, while short circuits cause fires, we cannot argue from the presence of a fire, via the connecting principle "Short circuits cause fires.", to a short circuit as a specific cause. Whether or not the argument is cogent depends on the proportion of cases which are explained by the explainer in (2). To be cogent, this must be the explanation in an exceedingly large number of cases. That is, in order to argue from effect to cause there must be a single, or a vastly dominant, cause. Holmes' modern-day doppelganger, Greg House, M.D., also makes inferences about the cause of some explainee, in his case, medical symptoms. However, due to lack of information, House's inferences are often weak. And indeed, the show depends for some of its drama on the fact that he and his team give an incorrect diagnosis many times before arriving at the correct one. Nonetheless, these inferences are worth making, because they provide the best line of investigation to pursue. For example, imagine that the patient has a target-style rash. Such rashes can be explained in a number of ways, but are most frequently explained by Lyme disease. This might be the most likely cause of such rashes, but it still might only account for, say, 30% of such rashes. (In fact, the rash is distinctive to Lyme disease, as in episode 4.07.) In terms of ML, the inference would be incogent, even though it is the conclusion of which the doctors can be most confident. Chemical tests are then used to determine whether, in fact, Lyme disease is present.

9.4 Argument By Analogy (AAn)

1. Chapter 8 discussed how explanations often involve multiple reasons (contributing factors or INUS conditions) working together. Compare, for example, the basic relation between dogs and tails (if a dog is present, a tail is present) with a slightly fuller understanding of the relationship: the tail is positioned at the end of the dog's spine and above the anus, and so on.
Compare, for example, the correlation between being bitten by a mosquito and contracting malaria with the full theory of malaria transmission, which involves a parasite incubating in the stomach of the female mosquito, being transmitted in the saliva of the mosquito, and so on. It can be seen that, even in quite simple understandings of any given state or thing, a variety of types of thing and of relationships is involved.

2. The understanding that underpins one explanation can be seen to be similar to another. "Similar" means that (at least some of) the things and/or relationships in each understanding can be said to be different, but on some level can be said to be the same. Traditionally, the things being compared are called analogues (though we will also continue to call them cases), and the fact that their understandings are similar makes them analogous. Some examples will help make these concepts clear.

(a) Animals use their mouths to take in food; plants use their roots for the same purpose.
(b) The USA has a President as leader; in the UK the Prime Minister plays an analogous role.

The analogues in (a) are animals and plants; they are analogous in that they each have some part of their anatomies which is related to food (and the other parts of the anatomy) by taking in food. Notice that the analogues are in one sense different and in another the same: plants and animals are the same with respect to needing to take in food and having a way to do so, but the ways in which they do so are quite different: animals have mouths while plants have roots. The analogues in (b) are the USA and the UK; they are the same in that in each there is a governmental position which is related to the nation in being the leader of the nation. Again, the elements involved are similar, that is, in one sense the same, in another sense different. Both are described as having a position of leader, but the leaders are different; they have different titles ("prime minister", "president") and have different duties and powers. Thinking in terms of similar understandings is familiar from poetical phrasing and metaphor. Think, for example, of a 'May to December couple' or the 'twilight' of one's years. Understanding such phrases requires transferring an understanding of other subjects (in these cases, the calendar and the stages of the day, respectively) to the age of human beings ("May" is early in a person's life, "December" and "twilight" are late). Similarly, some jobs are jails, and life is like a box of chocolates, only because the structure relating the items in one case to each other can be transferred to another case. Finally, you are probably familiar with analogical thinking from verbal reasoning problems such as "Thumb is to hand as _____ is to foot." and "School is to student as church is to _______.". Determining the missing term in such problems requires first working out what the relationship is between the first pair and then transferring that


relationship to another item. In the first example, you think of the theory relating thumb and hand (it is the shorter, thicker digit, attached to the palm, etc.) and then consider the foot as analogous and ask what plays the same role in the case of the foot. Verbal reasoning puzzles, in which a missing piece of information is inferred, bring us to the argumentative use of analogy.

3. In an argument by analogy (AAn) a speaker points to one analogue and its items and their relations to one another and then shows that some other analogue has all but one of "the same" elements of the explanation, and on this basis infers that the second analogue has a final element. ("Element" here means some item or relationship mentioned in the understanding of the analogue.) That is, on the basis of a similarity between all but one of the elements in the explanation of one analogue, the missing element in the second analogue is inferred. For example, Jack could argue that Casablanca will be a great movie by pointing to the features it has which are similar to the elements which make The Third Man great. Here are two further examples:

(c) The mountain road has many twists and turns and is not taken by many travelers. Jack's argument has many twists and turns and so, few people will be able to follow it.
(d) I [Jack] have subjective experiences, which are caused by my brain structure and chemistry. Other humans have similar brain structure and chemistry and so, probably have subjective experience similar to mine.

Each argument involves analogues which are analogous in various ways. The analogues in (c) are the mountain road and Jack's argument; they are partly analogous in that they both have elements that can be described as twists and turns, which makes them analogous in a further respect, of being hard to follow. The analogues in (d) are Jack and other people; they are partly analogous in that they have similar brain structure and chemistry, which are thought to be the cause (or perhaps the nature) of a final analogy, subjective experience. Each of (c) and (d) is an argument by analogy, as follows: In (c), both the mountain road and Jack's argument are similar in having twists and turns, and twists and turns are properties which make something difficult to follow, allowing the arguer to conclude that Jack's argument will be hard to follow. In (d), brain structure and chemistry are thought to be relevant to having subjective experience, allowing the arguer (Jack) to conclude that other people have a similar subjective experience.


4. In general terms, the basic form of AAn is as follows: (Why the conclusion is numbered (5) will be explained shortly.)

(1) Explanation X (involving elements of type F, G, (H, I, J, . . .)).
(2) In case1, F, G, (H, I, J . . .) are instantiated.
(3) In case2, F', (H', I', J' . . .) is instantiated.
J -------------------------------------------------------------------------------
(5) In case2, G' is also instantiated.

Case1 is the "reference" analogue and case2 is the "target" analogue. (AAn typically involves only two analogues.) The element G is mentioned in premise (2) and the conclusion (5) and is the "inferred" element. The use of F-prime, G-prime ("F'", "G'") and so on in (3) and (5) represents the fact that the elements of case2 (or at least one of the elements of case2) are similar (but not identical) to those in case1. To repeat what was said above, by "similar" we mean that (at least one of) the elements of case2 are of a different type than in case1, though at some level can be considered the same. Some elements might in fact be of identical types. In the discovery of transmission of malaria by mosquito, the elements 'mosquito' and 'transmission' are identical to the reference analogue, transmission of elephantiasis by mosquito, while the disease and its effects are different. This is why we say that at least some (i.e. one or more) of the elements must be similar and not exactly the same. If all of the elements were the same, the argument would become an instance of IP. (See sub-section 6, below.) In (c) and (d), however, the elements are all of different types, though, to repeat, at some level of generality they can be considered to be the same. They are different even though in (c), which concerns Jack's argument and the mountain road, the same words are used to describe the elements: both analogues are described as having "twists and turns" (and these make the thing in question "difficult to follow").
However, the elements could have been expressed in terms which are specific to their respective analogues: the mountain road has bends of various degrees in various directions, while Jack's argument requires multiple steps which seem to go in different directions and which make it hard to see the end-point. AAn can make very different things "the same". And any element of an understanding can be inferred, including a relation or correlation or causal claim. AAn is thus extremely flexible and can involve items and relationships of what otherwise would be considered quite different kinds. We can argue that the same kind of

relationship holds between structures involving quite different elements because of the shared elements. For example, our knowledge of machines as input-output devices allows us to infer how the parts of computers we are unfamiliar with are related, based on the fact that they have similar parts: an electrical plug, an on-off button, a keyboard or pointing device for input, a screen for output. Similarly, the anatomical arrangement of one species can be inferred from our knowledge of other species once we identify shared elements in the new species. Or again, the discovery of the ring structure of benzene was (perhaps apocryphally) aided by a dream of a snake eating its own tail.

5. One additional premise must be added if AAn is to be cogent:

(4) There are no disanalogies between the analogues that are relevant to G.*

Or, in other words, we must abide by the total evidence rule as it applies to AAn. Consider the following argument:

LeBron James has all of the qualities needed to be every bit as successful as Michael Jordan was. He has similar height and quickness to Jordan and is as good a shooter. He also has Jordan's physical toughness. We know how Jordan turned out: six-time NBA champion and five-time MVP. There's no reason to think that James won't achieve the same success in time.

This analogy might be challenged by pointing to differences between James' and Jordan's situation, such as mental toughness or quality team-mates. An instance of AAn is incogent if there are relevant disanalogies between case1 and case2. We say that the disanalogies must be relevant because any two analogues will have some differences, but, again, what is crucial for AAn is that the disanalogies bear some relationship to each other and the target element, and that the strength of the relationship between the disanalogies and the target element is enough to make us doubt whether the structures in the premises are in fact analogous.
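The role of premise (4) can be mimicked in a toy program. The sketch below is only an illustration, not part of the method: the function name and element labels are invented, and "similarity" is crudely modeled as identity of labels. What it captures is that a known relevant disanalogy blocks the inference.

```python
def argue_by_analogy(reference, target, relevant_disanalogies=()):
    """Naive AAn sketch: if the target case matches the reference case on
    every element except one, and no relevant disanalogies are known,
    infer that the target also has the missing element."""
    if relevant_disanalogies:      # premise (4) fails: the argument is incogent
        return None
    missing = set(reference) - set(target)
    if len(missing) == 1:          # all but one element is shared
        return missing.pop()       # infer the remaining element G'
    return None                    # too little (or too much) overlap to infer

# Reference analogue: the mountain road; target analogue: Jack's argument.
road = {"twists and turns", "hard to follow"}
argument = {"twists and turns"}

print(argue_by_analogy(road, argument))  # "hard to follow"
print(argue_by_analogy(road, argument,
                       relevant_disanalogies=("roads are physical",)))  # None
```

The second call shows the total evidence rule at work: once a relevant disanalogy is supplied, the sketch declines to draw the conclusion at all.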
AAn is often incogent due to dissimilarities between the analogues. Even when incogent, however, AAn, like ML, can suggest a path for further investigation.

6. Unfortunately, many speakers intending an analogical argument do not make it clear that a complex understanding is being used and so very often leave out various elements. For example,

(c) The mountain road has many twists and turns and is not taken by many travelers. Jack's argument has many twists and turns and so, few people will be able to follow it.


does not explicitly say that the twists and turns are related to the fact that it is taken by few travelers. The argument might be put into standard form as follows:

(1) The mountain road has many twists and turns.
(2) Jack's argument has many twists and turns.
(3) The mountain road is not taken by many travelers.
J ------------------------------------------------------------------
(4) Few people will be able to follow Jack's argument.

Or in general:

(1) Case1 instantiates F, (H, . . .).
(2) Case2 instantiates F', (H', . . .).
(3) Case1 instantiates G.
J -------------------------------------
(6) Case2 instantiates G'.

The repetition of "twists and turns" in two different contexts (captured by the use of primes in the general form) suggests that we are dealing with AAn, but the argument lacks a statement of how the elements mentioned are inter-related. That is, how F and G are related over and above a mere pattern of presence-presence (or presence-absence, etc.), as in chapter 6. If the argument is to be construed as an instance of AAn, this should be added, along with the premise claiming that there are no relevant disanalogies.

(4) F, (H, . . .) is inter-related with G.(*)
(5) There are no relevant disanalogies between case1 and case2.*

If (4) and (5) are omitted, the argument might be mistaken for a weak case of IP, based on a sample of one:

(1) The mountain road has many twists and turns.
(2) Jack's argument has many twists and turns.
(3) The mountain road is not taken by many travelers.
(4) The sample (of one) is large enough.*
(5) The sample is unbiased.*
J ------------------------------------------------------------------
(6) Few people will be able to follow Jack's argument.


9.5 A Summary Of Argument Forms: IE, ML, AAn

% is a proportion
"F" etc. are types of thing
"F1" etc. are particular instances of types
"a" etc. are properties
"x" and "y" are states of affairs

Inference To An Explainee (IE)

(1) In case1, F is instantiated.
(2) In roughly % of cases which are instances of F, G is also instantiated.
(3) G is not instantiated prior to F.
(4) Case1 is believed to be a typical instance of F with respect to G.*
J ------------------------------------------------------------------------------------------
(5) In case1, G is (will be) also instantiated.

Inference To The Most Likely Explainer (ML)

(1) G1.
(2) G is explained by F.
(3) Of available explanations for G, F is the most likely.(*)
J --------------------------------------------------------------------
(4) F1.

Argument By Analogy (AAn)

(1) Explanation X (involving elements of type F, G, (H, I, J, . . .)).
(2) In case1, F, G, (H, I, J . . .) are instantiated.
(3) In case2, F', (H', I', J' . . .) is instantiated.
(4) There are no disanalogies between the analogues that are relevant to G.*
J ----------------------------------------------------------------------------------------------
(5) In case2, G' is also instantiated.

Or:

(1) Case1 instantiates F, (H, . . .).
(2) Case2 instantiates F', (H', . . .).
(3) Case1 instantiates G.
(4) F, (H, . . .) is inter-related with G.(*)
(5) There are no relevant disanalogies between case1 and case2.*
J ---------------------------------------------------------------------------
(6) Case2 instantiates G'.
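The first two schemas above can likewise be loosely mechanized. In the sketch below (an illustration only; the function names, proportions, and the 0.95 threshold are all invented for the example), IE applies a strong generalization forward to a new case, and ML selects whichever available explainer accounts for the largest share of cases of the explainee.

```python
def infer_explainee(proportion, f_instantiated, typical_case, threshold=0.95):
    """IE sketch: from 'F holds in this case' plus 'in roughly <proportion>
    of F-cases, G also holds', conclude G -- provided the case is typical
    (no interfering factors) and the proportion is high enough."""
    return f_instantiated and typical_case and proportion >= threshold

def most_likely_explainer(likelihoods):
    """ML sketch: given candidate explainers and the share of cases of the
    explainee each accounts for, pick the most likely candidate."""
    return max(likelihoods, key=likelihoods.get)

# Gill's garage door (IE): the opener works in roughly 99% of cases,
# the button was pressed, and no interfering factors are believed present.
print(infer_explainee(0.99, True, True))  # True: infer the door will open

# Holmes's dog (ML): candidate explanations for the dog's not barking,
# with invented shares of cases.
likelihoods = {"intruder known to dog": 0.90,
               "dog drugged by a stranger": 0.05,
               "dog absent": 0.05}
print(most_likely_explainer(likelihoods))  # "intruder known to dog"
```

As the text stresses, `most_likely_explainer` yields a cogent conclusion only when the leading explainer accounts for a vastly dominant share of cases; the function itself happily returns a 30% front-runner, just as House's team pursues its best, still-incogent, diagnosis.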


PART 3

DEDUCTION

Chapter 10 The Venn Diagram Method

10.1 Categorical Generalizations

1. Consider the following propositions:

(a) All males are humans.
(b) Most physicists are male.
(c) Few teachers are rock-climbers.
(d) No dogs are cats.
(e) Some Americans are doctors.
(f) Some adults are not logicians.

(a), (b), (c), (d), (e) and (f) are categorical generalizations. That is, they are about categories or classes or types of things. (a), for example, is about the class of males and the class of humans. They make no mention of any particular members of the categories or classes or types they are about.

2. The propositions are also quantifications in that they state what proportion of a class does or does not belong to another class. Moreover, some categorical generalizations are universal in quantity while some are partial. For instance, (a) and (d) are universal: (a) quantifies over the entire class of males, saying that all of its members are in the class of humans; (d) quantifies over the whole class of dogs, saying that none of its members is in the class of cats. In contrast, (b), (c), (e) and (f) are partial: (b) says that most but not all physicists are males; (c) says that some but not many teachers are rock-climbers; (e) says that some Americans are in the class of doctors and (f) says that some adults are not in the class of logicians. The word "Some" in (e) and (f) does not tell us very accurately what the proportion is, but at a minimum, it tells us that there is at least one member of the first class which is also a member of the second. Thus (e) states that, of the class of Americans, at least one is a doctor. Propositions involving "some" can thus be called "existential".

3. Some generalizations are positive, and some are negative. (a), (b), (c) and (e) are positive: (a), for example, says what is in the class of humans, as opposed to saying what is not in the class of humans, and (e), likewise, says what is a doctor, in contrast to saying what is not a doctor. (d) and (f), on the other hand, are negative: (d) says what is not in the class of cats, and (f) says what is not a logician.

4.
Using the Venn diagram method, we can evaluate arguments which involve propositions of four of the forms above: positive universals, such as (a), which have the


form "All Ss are Ps."; negative universals, such as (d), which have the form "No Ss are Ps."; positive existentials, such as (e), which have the form "Some Ss are Ps."; and negative existentials, such as (f), which have the form "Some Ss are not Ps.". (Capital letters in italics stand for any class.) Propositions such as (b) and (c) which use quantifiers other than "All", "No", and "Some" are not used in the Venn diagram method, since they do not yield inferences which can reliably be classified as valid or invalid when combined with each other or with categorical generalizations of other forms (though they can be combined with propositions about particulars in an instantiation syllogism; see section 6.3). For example, although the two propositions "Many teachers are rock-climbers." and "Many rock-climbers lift weights." can be connected by the term "rock-climbers", it is not clear that we could cogently conclude that many teachers lift weights, though we could conclude, with reasonably high confidence, that some teachers lift weights. The combination of "Many assistance animals are dogs." and "All dogs are four-legged." reliably yields "Many assistance animals are four-legged.", but, in general, propositions with indeterminate quantification, such as "many", "few", "a majority of", "a lot of", and so on, cannot usefully be described in general (that is, in a method) and must be examined on a case-by-case basis. From now on, then, we shall deal only with universal and existential categorical generalizations.

10.2 Some Extras On Categorical Generalizations

1. For the purposes of the Venn diagram method, the propositions involved must take a standard form. In everyday life, however, lots of categorical generalizations are disguised.

2. The sentence "Some roses are red." should be translated as "Some roses are red flowers.", or "Some roses are red things.". Likewise, "All professional football players are strong."
should be "All professional football players are strong persons.", or "All professional football players are strong things.". The lesson is that for the Venn diagram method the subject and predicate terms need to be noun terms (as opposed to terms for properties). In short, they need to be names of things, as opposed to names of properties that things have.


3. The sentence "No cats bark." should be translated as "No cats are animals that bark.", or "No cats are things that bark.". In the same way, "All birds can fly." should be "All birds are animals that can fly.", or "All birds are things that can fly.". The lesson is that for the Venn diagram method the verb needs to be either "are" or "are not".

4. Sometimes categorical generalizations come without an explicit quantifier:

(a) Males are humans.
(b) A male is a human.
(c) Dogs are not cats.
(d) A dog is not a cat.

(a) and (b) should be translated as "All males are humans.", and (c) and (d) as "No dogs are cats.".

5. Some categorical generalizations come with disguised quantifiers:

(a) Every male is human.
(b) Whatever is male is human.
(c) Only humans are males.
(d) None but humans are males.
(e) Whatever is a dog is not a cat.
(f) Not a single dog is a cat.
(g) There are Americans that are doctors.
(h) Someone in America is a doctor.
(i) At least a few Americans are doctors.
(j) Not everyone who is an adult is a logician.

For use in the Venn diagram method, (a)-(d) should be translated as "All males are humans."; (e) and (f) as "No dogs are cats."; (g)-(i) as "Some Americans are doctors."; and (j) as "Some adults are not logicians.".

6. Some conditionals can be expressed as categorical generalizations:

(a) If someone is a male, then he is a human.
(b) A thing is a human if it's a male.
(c) If something is not a human, it's not a male.
(d) If something is a dog, then it's not a cat.
(e) If something is a cat, then it's not a dog.
(f) Something is a dog only if it's not a cat.

(a), (b), and (c) should be translated as "All males are humans." and (d), (e), and (f) as "No dogs are cats.".

7. Sentences about individual things can be expressed as categorical generalizations. The sentence "Paul is tall." comes to "All persons identical to Paul are


tall persons.", or "All persons identical to Paul are tall things.". In the same manner, "There is an empty beer can in the backyard." comes to "All places identical to the backyard are places where there is an empty beer can.".

10.3 Venn Diagrams & Categorical Generalizations

1. Consider the following diagram:

This is an example of a Venn diagram. The circle on the left represents the class of males (M), the circle on the right represents the class of humans (H). The shading of the non-overlapping part of M is a 'shading out'; it indicates that nothing is both a male and a non-human, or, that all males are humans. The asterisk in the overlap indicates that at least one thing is both a male and a human. So in total, the diagram says that nothing is both a male and a non-human, and that something is both a male and a human.

2. (a) from above, a positive universal, gets represented like this in a Venn diagram:

The claim, remember, is that if something is a male then it is a human, or, in other words, that every male is human, or, that nothing is both a male and a non-human. Thus the part of the Males-circle not overlapping the Humans-circle is shaded, representing the fact that that part of the Males-circle is empty. Positive universals could also be diagrammed like this:


The Males-circle is wholly inside the Humans-circle, thus representing the fact that all of the members of the class of males are members of the class of humans. In our system, however, we use two (or more) overlapping circles, rather than circles within circles, so that propositions of different forms can be represented on the same basic diagram.

3. (d), a negative universal, gets represented thus:

The claim is that if something is a dog then it is not a cat, or, put another way, that nothing is both a dog and a cat. Thus the part of the Dogs-circle overlapping the Cats-circle is shaded, representing the fact that that part of the Dogs-circle is empty.

4. (e), a positive existential, gets represented like this:

The claim is that at least one American is a doctor, that something is both an American and a doctor. Thus the part of the Americans-circle overlapping the Doctors-circle has an asterisk in it, representing the fact that that part of the Americans-circle is not empty.

5. (f), a negative existential, gets diagrammed as follows:
[Venn diagram]

The claim is that at least one adult is not a logician, that something is both an adult and a non-logician. Thus the part of the Adults-circle not overlapping the Logicians-circle has an asterisk in it, representing the fact that that part of the Adults-circle is not empty.

10.4 Existential Commitment

1. If we were to assume that there is always at least one member of every class, positive and negative existentials could together be diagrammed thus:
[Venn diagram]
Part of the Americans-circle overlaps the Doctors-circle, thus representing the fact that some of the members of the class of Americans are members of the class of doctors; and part of the Americans-circle does not overlap the Doctors-circle, thus representing the fact that some of the members of the class of Americans are not members of the class of doctors, and similarly for doctors who are not Americans. In our system, however, the mere fact that an area is open tells us nothing and an asterisk is required to indicate the existence of a member of a class. Consider the following diagram:
[Venn diagram]
The Students-Failing-The-Final-circle (F) and the Students-Failing-The-Class-circle (C) overlap, but in order to represent the fact that there is at least one student who fails both, an asterisk in the overlap is required. In other words, our system assumes that classes are empty unless an asterisk explicitly states otherwise, whereas the alternative operates on the assumption that there are members in every area of the diagram which is not shaded out. The issue here is existential commitment, that is, whether a proposition, and universal propositions in


particular, imply that there are entities of which the proposition is true, or, alternatively, whether the proposition is true even though nothing satisfies it. 2. Consider (e) and (f) from the list of propositions in section 1 above. (e) is the proposition "Some Americans are doctors." and (f) is the proposition "Some adults are not logicians.". It is clear that in asserting (e), one is committed to the existence of Americans, in that one is saying in part that the class of Americans has at least one member. Moreover, it is equally clear that in asserting (f), one is committed to the existence of adults: one is saying in part that the class of adults is not empty, that it has at least one member. (e) and (f), thus, involve existential commitment. This is true generally: all positive existentials and all negative existentials involve existential commitment. In saying that some Ss are Ps one is committed to the existence of Ss, and the same for saying that some Ss are not Ps. 3. But what about positive universals and negative universals? In saying that all Ss are Ps, is one saying in part that the class of Ss has at least one member? In saying that no Ss are Ps, is one saying that Ss exist? Consider the propositions "Any student who fails the final fails the class." and "All breakages must be paid for." (which we might re-write as "All persons breaking items are persons who pay for those items."). Intuitively, such propositions do not have existential commitment. The worry arises because of inferences such as "All roses have thorns. So, some thorny things are roses.". This inference can land us in trouble if the class is empty, as for example when we move from "All people contracting SARS will be quarantined." to "Some people who are quarantined are people who have SARS." The universal proposition remains true even if no one contracts SARS, which would make the existential proposition false, since it has existential commitment. The inference "All Ss are Ps.
So, some Ps are Ss." might thus be invalid, in that it is possible to move from a true premise to a false conclusion. The same is true for the argument "All Ms are Ps. All Ms are Ss. So, some Ss are Ps." (e.g. "Anyone who breaks something must pay for it. Anyone who breaks something will be asked to leave. So, some people who will be asked to leave are people who pay for what they broke."). Obviously, there are two options. The first is to say that positive and negative universals involve existential commitment. The sentence "All males are humans." comes to "The class of males has members, and each such member is a human.", and "No dogs are cats." comes to "The class of dogs has members, and no such member is a cat.". The

second option is to say that universals carry no such commitment: "All males are humans." comes to "If something is a male, then it is a human.", and "No dogs are cats." comes to "If something is a dog, then it is not a cat.". So, whereas on the first option universals are conjunctions involving the claim that the class of Ss has members, on the second option they are conditionals involving no such claim. The modern logician opts for the second option. He argues, first, that everyday linguistic practice is inconclusive on whether we should go with the first option or the second. As we have just seen, usually when we give positive and negative universals we take there to be Ss and, thus, the fact that a person gives a positive or negative universal makes it highly likely that he takes there to be Ss, but sometimes when we give positive and negative universals we are non-committal on the existence of Ss. When a teacher tells his students on the first day of classes that all students failing the final will fail the class, he is not thereby committed either way to the existence of students who will fail the final (unless, of course, he is super-pessimistic). The modern logician argues, second, that treating universals as conditionals involving no existential commitment makes logic simpler and more powerful. The lesson, thus, is that whereas asserting a positive or negative existential involves asserting the existence of Ss, this is not the case with asserting a positive or negative universal. The proposition "All students who fail the final are students who fail the class." does not commit itself to the existence of students-who-fail-the-final.

10.5 Immediate Inferences, Categorical Syllogisms & The Venn Diagram Method

1. Consider the following arguments:

(1) No dogs are cats.
----------------------
(2) No cats are dogs.

(1) All males are humans.
(2) All humans are animals.
-------------------------------
(3) All males are animals.
(Note: It is no longer necessary to add "J" or "E" to the analysis, since Part 3 deals only with arguments.) The first argument is an immediate inference, for it is made up exclusively of categorical generalizations, and it has exactly one premise. The second argument, in contrast, is a categorical syllogism: it is made up exclusively of categorical

generalizations, just like the first argument, but it has exactly two premises, as opposed to just one. 2. For immediate inferences, the Venn diagram method works like this. The first step is to make a translation key for the S-term and the P-term, where the S-term is the subject term in the conclusion and the P-term is the predicate term in the conclusion:

C: cats
D: dogs

The second step is to put the argument in standard form relative to the key:

(1) No Ds are Cs.
-----------------
(2) No Cs are Ds.

The third step is to diagram the premise. We put the circle for the subject of the conclusion on the left and the circle for the predicate on the right:
[Venn diagram]
The part of the D-circle overlapping the C-circle is shaded, thus representing the fact that that part of the D-circle is empty. The fourth, and final, step is to determine whether the conclusion is necessitated by the diagram. If the conclusion were represented in the diagram, the part of the C-circle overlapping the D-circle would be shaded. Given this, and given that that part of the C-circle is in fact already shaded, the argument is valid. 3. For categorical syllogisms, the method works just like it does for immediate inferences except that there are three terms and three circles instead of just two. We construct the diagram as follows: we look at the conclusion and put the subject term of the conclusion first, then draw an overlapping circle to the right for the predicate term of the conclusion, and we draw the third circle below and in the middle of those two and label it with the other term which appears in the argument. The translation key for the argument above about males, thus, looks like this:

M: males
A: animals
H: humans


In standard form and relative to the key, it looks like this:

(1) All Ms are Hs.
(2) All Hs are As.
------------------
(3) All Ms are As.

The information in the premises gets diagrammed thus:
[Venn diagram]
The shading in the part of the M-circle not overlapping the H-circle represents the information in the first premise, and the shading in the part of the H-circle not overlapping the A-circle represents the information in the second premise. The conclusion is necessitated by the diagram: the part of the M-circle not overlapping the A-circle is already shaded; it must be the case that all Ms are As. So, the argument is valid. Note that if the left part of the overlap between the M-circle and the H-circle were not shaded in, the argument would not be valid, because it would be possible for there to be some Ms which are not As. Here is another example: Some philosophers are males, and some males are billionaires. Some philosophers, therefore, are billionaires. With "P" standing for "philosophers", "B" standing for "billionaires", and "M" standing for "males", the argument looks like this in standard form:

(1) Some Ps are Ms.
(2) Some Ms are Bs.
---------------------
(3) Some Ps are Bs.

The information in the premises gets diagrammed thus:
[Venn diagram]

Representing the information in the first premise requires putting an asterisk in the part of the P-circle overlapping the M-circle. But since that part of the P-circle itself has two parts, representing the information in the first premise requires putting an asterisk on the line separating the two parts. (You might want to make the asterisk larger than shown here, so that it clearly overlaps both sub-sections of the intersection.) The same goes for representing the information in the second premise, thus the asterisk on the line in the part of the M-circle overlapping the B-circle. And notice, the conclusion is not necessitated by the diagram. For if it were, then there would be an asterisk in the part of the P-circle overlapping the B-circle, but since the two asterisks in the diagram could be in the parts of the P and B circles which do not overlap, we cannot be sure that there is an asterisk in the overlap between the P-circle and the B-circle. The conclusion is not necessarily true, and thus, the argument is not valid.
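The Venn diagram test can be mimicked mechanically: treat each of the eight regions of a three-circle diagram as either empty or inhabited, and check that every way of marking the regions that makes the premises true also makes the conclusion true. The following Python sketch is an illustration of mine, not part of the text, and the function names and encoding are invented. It confirms both verdicts above, the validity of the immediate inference from 10.5, and the point of 10.4 that "All Ss are Ps." does not entail "Some Ps are Ss.".

```python
from itertools import product

# Each region of a 3-circle Venn diagram is a triple saying whether it lies
# inside circle 0, circle 1, circle 2.  A "world" marks each region as
# inhabited (True) or empty (False).
REGIONS = list(product([False, True], repeat=3))

def all_are(i, j):
    # "All i are j.": every region inside circle i but outside circle j is empty.
    return lambda world: all(not world[r] for r, reg in enumerate(REGIONS)
                             if reg[i] and not reg[j])

def no_are(i, j):
    # "No i are j.": every region inside both circles is empty.
    return lambda world: all(not world[r] for r, reg in enumerate(REGIONS)
                             if reg[i] and reg[j])

def some_are(i, j):
    # "Some i are j.": some region inside both circles is inhabited.
    return lambda world: any(world[r] for r, reg in enumerate(REGIONS)
                             if reg[i] and reg[j])

def valid(premises, conclusion):
    # Valid iff no marking of the regions makes all premises true and the
    # conclusion false.
    return all(conclusion(world)
               for world in product([False, True], repeat=len(REGIONS))
               if all(p(world) for p in premises))

# All Ms are Hs. All Hs are As. So, all Ms are As.  (0 = M, 1 = H, 2 = A)
print(valid([all_are(0, 1), all_are(1, 2)], all_are(0, 2)))      # True
# Some Ps are Ms. Some Ms are Bs. So, some Ps are Bs.  (0 = P, 1 = M, 2 = B)
print(valid([some_are(0, 1), some_are(1, 2)], some_are(0, 2)))   # False
# No Ds are Cs. So, no Cs are Ds.  (0 = D, 1 = C; circle 2 is unused)
print(valid([no_are(0, 1)], no_are(1, 0)))                       # True
# All Ss are Ps. So, some Ps are Ss. -- invalid: the S class may be empty.
print(valid([all_are(0, 1)], some_are(1, 0)))                    # False
```

The `all_are` and `no_are` constraints are exactly the diagram's "shading out" (the listed regions are forced empty), and `some_are` is the asterisk (some listed region is forced inhabited).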


Chapter 11 The Big 8 Method

11.1 Introduction

1. Arguments can be constructed by employing premises at least one of which is a compound proposition, that is, a proposition composed of two or more propositions in a conjunction, disjunction or material implication (also called a conditional). Another kind of complex proposition, negation, also frequently occurs in such arguments. The methods used to evaluate such arguments are methods in propositional logic. The Big 8 method is one method for testing such arguments for validity. The Big 8 method is applied to arguments that are in standard form and whose propositions have been translated from English into logically structured English propositions, so that it is clear that they are either negations, conjunctions, disjunctions or conditionals.

11.2 Logically Structured English Propositions

1. Consider the following propositions:

(a) The cat is on the mat.
(b) Jones owns a Ford, or Brown is in Barcelona.

(a) is a simple proposition, in that it does not have any other propositions as parts. (b), in contrast, is complex, for it involves more than a simple proposition. (b) in particular is a compound proposition: it has propositions for two of its parts, "Jones owns a Ford." and "Brown is in Barcelona.". (All compound propositions are complex propositions, but not all complex propositions are compound. Negations, as we are about to see, are complex but not necessarily compound.) 2. Now consider the following complex propositions:

(a) It is not the case that Smith is at school.
(b) Jones owns a Ford, or Brown is in Barcelona.
(c) Willie Mosconi was a pretty good pool player, and Ohio State beat Miami in double overtime.
(d) If Jim Tressel takes birth control pills for the next six months, then he won't get pregnant during the next six months.
(e) Jack believes that the cat is on the mat.

(a) is a negation. The proposition being negated is "Smith is at school.". If we let the lower-case letter "s" stand for "Smith is at school", in our system of logically structured English propositions (or logically structured English, for short) the proposition has the


form "It is not the case that s.". We typically shorten this to "Not s.", even though "Not Smith is at school." is not a grammatically well-formed proposition. The awkwardness is less apparent when using a letter to stand for the proposition. In general, negations have the form "Not S.", where the capital letter "S" stands for any proposition, whether simple or complex. (b) is a disjunction. The first disjunct is "Jones owns a Ford.", and the second is "Brown is in Barcelona.". If we let "o" stand for "Jones owns a Ford." and "b" for "Brown is in Barcelona." we can express the proposition in logically structured English as "o or b". In logically structured English, disjunctions have the form "S or T.". (c) is a conjunction. The first conjunct is "Willie Mosconi was a pretty good pool player.", and the second conjunct is "Ohio State beat Miami in double overtime.". In logically structured English, conjunctions have the form "S and T.". (d) is a conditional or material implication. The antecedent (the "if" part of the conditional) is "Jim Tressel will take birth control pills for the next six months.", and the consequent (the "then" part of the conditional) is "Jim Tressel won't get pregnant during the next six months.". In logically structured English, conditionals have the form "If S, then T.". Note: the sentence about Jim Tressel is a conditional even though the consequent is itself a negation. S and T (and any other capital letters) are dummy letters for propositions, whether simple or complex. In other words "T" in this case would be "It is not the case that Jim Tressel will get pregnant during the next six months.". So, the whole proposition would have the form "If b then not p." where "b" is "Jim Tressel will take birth control pills for the next six months." and "p" is "Jim Tressel will get pregnant during the next six months.". See section 11.3, just below, for how to use parentheses in order to make the structure of a proposition clear.
(e) is a bit tricky. It is a complex proposition: it has the proposition "The cat is on the mat." as a part. But it is neither a negation, nor a disjunction, nor a conjunction, nor a conditional. Because of this, we use a letter to stand for the entire thing.

11.3 Simple Propositions, Compound Propositions, Ambiguous Propositions

1. In general terms, when making a translation key for an argument, assign a letter to every simple proposition, that is, to every proposition in the argument excluding the language of negation, disjunction, conjunction, or conditional.

Consider, for example, the following propositions:

(a) The cat is on the mat.
(b) Jack is nice, and so is Henry.

In making a translation key for an argument in which (a) is a claim, one should use a letter to stand for the entire thing: after all, it is a simple proposition and, thus, is neither a negation, nor a disjunction, nor a conjunction, nor a conditional. In contrast, in making a translation key for an argument in which (b) is a claim, one should not use a letter to stand for the entire thing because it is a conjunction. Instead, one should use a letter to stand for "Jack is nice.", and a different letter to stand for "Henry is nice.". 2. Conjunctions, disjunctions and conditionals are compound propositions, and the propositions that are their parts can themselves be complex. Consider the following conditional proposition:

If Gill comes camping, then either Smith will come camping or Jones will.

In this conditional, the consequent is "Either Smith will come camping or Jones will." and this is itself a disjunction of two propositions, "Smith will come camping." and "Jones will come camping.". So in putting this proposition into logically structured English, we would write "If g then (s or j)." where "g" stands for "Gill comes camping.", "s" for "Smith will come camping." and "j" for "Jones will come camping.". We add parentheses to disambiguate this proposition from the proposition "(If g then s) or j.". 3. Parentheses are used to disambiguate ambiguous propositions. For example, consider the following propositions in logically structured English, where "c" is "Bob is in Cleveland." and "o" is "Bob is in Ohio." and "t" is "Bob is traveling on business.":

(a) If c then o and t.
(b) c and o or t.
(c) Not c and t.

Each of these propositions is ambiguous. In the first case, the speaker might be saying that "If Bob is in Cleveland then he's both in Ohio and traveling on business." or that "If Bob is in Cleveland then he's in Ohio.
And also, he's traveling on business.". The ambiguity can be removed in this and the other propositions by using parentheses, as follows:

(d) (If c then o) and t.  vs.  If c then (o and t).
(e) (c and o) or t.  vs.  c and (o or t).
(f) (not c) and t.  vs.  not (c and t).


4. In general, the logical word "not" is considered to modify what follows it immediately. Consider the following proposition:

Not a and b.

In this proposition, the "not" is understood to modify only "a". To negate "a and b." we must use parentheses, as follows:

Not (a and b).

5. Parentheses can also be used in order to make propositions easier to understand, even where there is, strictly speaking, no ambiguity. Consider the following proposition:

If a and b then c.

Unlike "If a then b and c.", this proposition is not ambiguous, since the "If" and "then" perform the function of parentheses, marking "a and b" as the antecedent and "c" as the consequent. However, since "and" and "or" are a frequent source of ambiguity, we can remove any doubt by using parentheses, as follows:

If (a and b) then c.
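Because the two readings of an ambiguous string are truth-functionally different, a brute-force truth table shows why the parentheses matter. This short Python sketch is my own illustration, not from the text: it lists the assignments on which the two readings of "c and o or t" disagree, and checks that "(not a) and b." differs from "Not (a and b).".

```python
from itertools import product

# The two readings of the ambiguous "c and o or t": "(c and o) or t"
# versus "c and (o or t)".  List the assignments where they disagree.
disagree = [(c, o, t) for c, o, t in product([False, True], repeat=3)
            if ((c and o) or t) != (c and (o or t))]
print(disagree)  # they differ exactly when c is false and t is true

# "Not a and b." read as "(not a) and b." versus "Not (a and b).":
scope_matters = any(((not a) and b) != (not (a and b))
                    for a, b in product([False, True], repeat=2))
print(scope_matters)  # True: the placement of parentheses changes the claim
```

So an unparenthesized string can be true on one reading and false on the other, which is why the Big 8 method requires unambiguous propositions.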

11.4 Some Extras On Negations, Disjunctions, & Conjunctions

1. Some propositions which have the logical structure of a negation and should be translated as "Not S." (or in full "It is not the case that S.") are disguised. Here are three such propositions:

(a) The cat is not on the mat.
(b) No doctors are rich.
(c) Socrates is unmarried.

All three of these are negations. (a) comes to "It is not the case that the cat is on the mat.". (b) comes to "It is not the case that some doctors are rich.". (c) comes to "It is not the case that Socrates is married.". 2. Some propositions which have the logical structure of a disjunction and should be translated as "S or T" are disguised:

(a) Jack is either a philosopher or a bus driver.
(b) The team will win either this week or next.
(c) At least one of the two girls, Gill and Kofi, will get to the top of the mountain.


(a) comes to "Jack is a philosopher or Jack is a bus driver.", and (b) comes to "The team will win this week or the team will win next week.". Note that in the translation of (b) there is a complete proposition to the left of the word "or" and there is a complete proposition to the right of the word "or". Note that (a) does not rule out the possibility that Jack is both a philosopher and a bus driver, so it is unnecessary to translate such propositions as "(S or T) or (S and T)". In English, speakers typically rule out the possibility of both either by doing so explicitly ("Jack is either a philosopher or a bus-driver, but not both.") or by using "(either) ... or else ..." as in "Jack is a philosopher or else he's a bus-driver.". Such propositions should not be translated as "S or T." but as "(S or T) and not (S and T).". In some cases the contents of the proposition are mutually exclusive, and these can be translated as "one or other and not both". For example, "Jack is either on base or in the field." can be translated (using the obvious letters) as "(b or f) and not (b and f).". (c) does not use the word "or" at all, but it comes to "Gill will get to the top of the mountain or Kofi will get to the top of the mountain.". 3. Some propositions which are, or involve, the logical structure of a conjunction and which should be translated as "S and T.", are disguised:

(a) S, but T.
(b) Neither S nor T.
(c) S. Moreover, T.
(d) S. Nonetheless, T.
(e) Although S, T.

"Neither S nor T." is equivalent to "Not S and not T.", whereas the others are equivalent to "S and T.". Consider, for example, the following propositions:

(a) Gill loves country music, but she hates rap.
(b) Although the meeting isn't until tomorrow, the members came yesterday.
(c) Aquinas's First Way is invalid; moreover, it has three false premises.
(d) Jack is neither nice nor good at algebra.

(a) comes to "Gill loves country music and Gill hates rap.". In addition to conjoining the propositions, the word "but" highlights the contrast between, on one hand, Gill's take on country music and, on the other hand, her take on rap. (b) comes to "The meeting isn't until tomorrow and the members came yesterday.". (c) comes to "Aquinas's First Way is invalid and Aquinas's First Way has three false premises.". In addition to conjoining the


propositions, words such as "but" in (a) and "although" in (b) highlight the fact that it is a bit surprising that the second proposition is true given that the first is true. We cannot capture this surprise in logically structured English and it is lost in translation. (d) comes to "Jack is not nice and Jack is not good at algebra." and is of the general form "Not S and not T". This is equivalent to the proposition "It is not the case that (Jack is nice or Jack is good at algebra)." or "Not (S or T).". "Neither ... nor ..." propositions, such as "Jack is neither nice nor good at algebra.", are not equivalent to propositions of the form "not both ... and ..." such as "Jack is not both nice and good at algebra.". Propositions of the latter form should be translated as "Not (S and T)." or "Not S or not T.". Thus, with "n" standing for "Jack is nice." and "g" for "Jack is good at algebra.", "Jack is neither nice nor good at algebra." is translated as "Not n and not g." or "Not (n or g).". "Jack is not both nice and good at algebra", on the other hand, is translated "Not (n and g)." or "Not n or not g.". Consider, for example, the proposition "Gore and Bush were not both elected President.". This proposition does not say that neither Gore nor Bush was elected President. Rather, it is saying that either Gore was not elected President or Bush was not elected President.

11.5 Some Extras On Conditionals

1. Some propositions which have, or involve, the logical structure of a conditional are disguised:

(a) T if S.
(b) S only if T.
(c) Provided that S, T.
(d) S is sufficient for T.
(e) Unless S, T.
(f) S is necessary for T.
(g) S if and only if T.

Each of (a) through (d) is equivalent to "If S then T." which is how a conditional is written in logically structured English. (e) is equivalent to "If not S, then T.". (f) is equivalent to "If not S then not T." (which is further equivalent to "If T then S."). (g) is equivalent to "(If S then T) and (if T then S).".
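Treating "If S then T." truth-functionally (false only when S is true and T is false), these paraphrase claims can be checked by exhausting the four assignments to S and T. The sketch below is my own illustration, not part of the text; it verifies (f) and (g) and confirms that "S only if T." ("If S then T.") is not equivalent to its converse "If T then S.".

```python
from itertools import product

# Truth-functional conditional: "If p then q." is false only when p is
# true and q is false.
implies = lambda p, q: (not p) or q

def equivalent(f, g):
    # Two forms are equivalent iff they agree on every assignment to S, T.
    return all(f(s, t) == g(s, t) for s, t in product([False, True], repeat=2))

# (f) "S is necessary for T." = "If not S, then not T." = "If T, then S."
print(equivalent(lambda s, t: implies(not s, not t),
                 lambda s, t: implies(t, s)))                      # True
# (g) "S if and only if T." = "(If S then T) and (if T then S)."
print(equivalent(lambda s, t: implies(s, t) and implies(t, s),
                 lambda s, t: s == t))                             # True
# (b) "S only if T." ("If S then T.") is NOT equivalent to "If T then S."
print(equivalent(lambda s, t: implies(s, t),
                 lambda s, t: implies(t, s)))                      # False
```

The last check is the Bob-in-Cleveland point made below: the conditional and its converse come apart when S is false and T is true.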


Consider, for example, the following propositions:

(a) Jack got an A if he got every possible point on the homework assignments and exams.
(b) Henry graduated last spring only if he passed his logic class last fall.
(c) You can watch TV provided that you have done your homework.
(d) Jack's drinking a keg of vodka is sufficient for his getting drunk.
(e) Unless you pay up front, you're paying more than you need to.
(f) A necessary condition for Jim's knowing that the cat is on the mat is his believing it.
(g) You can leave your room if and only if you have apologized to your brother.

(a) comes to "If Jack got every possible point on the homework assignments and exams, then Jack got an A.". (b) comes to "If Henry graduated last spring, then Henry passed his logic class last fall.". To translate (c), we must first re-arrange the proposition so that "provided that" appears at the front: "Provided you have done your homework, you can watch TV.". This then comes to "If you have done your homework, then you can watch TV.". (d) comes to "If Jack drinks a keg of vodka, then Jack will get drunk.". (e) comes to "If it is not the case that you pay up front, then you are paying more than you need to.". Note that English allows the "unless" to occur in the middle of the proposition; in such cases, the constituent propositions should be re-ordered before translating. To translate (f), we must re-arrange the proposition so that "is a necessary condition for" is in the middle: "Jim's believing that the cat is on the mat is a necessary condition for Jim's knowing that the cat is on the mat.". This then comes to "If Jim does not believe that the cat is on the mat, then he does not know that the cat is on the mat.". (g) comes to "You can leave your room if you have apologized to your brother and you can leave your room only if you have apologized to your brother.", which in turn comes to "If you have apologized to your brother then you can leave your room and if you can leave your room then you have apologized to your brother.". Concerning propositions such as (b), it might seem that "S only if T." is equivalent to "If T, then S.", rather than "If S, then T.". But consider the following propositions:

(i) Bob is in Cleveland only if Bob is in Ohio.
(j) If Bob is in Ohio, then Bob is in Cleveland.
(k) If Bob is in Cleveland, then Bob is in Ohio.


(i) is in fact true. (j) is false, given that Bob can be in Ohio without being in Cleveland. (k) is true, given that Cleveland is in Ohio. If "S only if T." were equivalent to "If T, then S.", then (i) and (j) would be equivalent. But they cannot be equivalent, given that (i) is true while (j) is false. To quickly turn "S only if T" into an "If ... then ..." proposition of the standard form, replace the words "only if" with "then" and place an "if" at the beginning of the proposition. Concerning propositions such as (g), the phrase "S if and only if T." is a quick way of saying "S if T, and, S only if T.". As we saw above concerning (a), the first conjunct comes to "If T then S.". As we saw above concerning (b), the second conjunct comes to "If S then T.". The whole proposition, therefore, comes to "(If T then S) and (If S then T).". Concerning propositions such as (e), conditionals making use of the word "unless" are tricky. Here is a general procedure for moving from them to conditionals in the desired form. First, ensure that the proposition is of the form "Unless S, T". If the proposition has the form "T unless S.", then move to "Unless S, T.". Second, in a proposition of the form "Unless S, T.", replace "unless" with "if ... not", and put the sentence in the form "If not S, then T.". Translating the sentence "Gill will not go out with Jack unless Jack is a bachelor.", for example, into logically structured English goes like this. Move first to "Unless Jack is a bachelor, Gill will not go out with him.". Replace "unless" with "if not", yielding "If Jack is not a bachelor, then Gill will not go out with him.". As an extra step in this example, both the 'if' and the 'then' part of the sentence are negations, so we move the negations to the front of each part of the proposition, as follows: "If it is not the case that Jack is a bachelor, then it is not the case that Gill will go out with him.". Using "b" for "Jack is a bachelor."
and "g" for "Gill will go out with Jack." we write: "If not b then not g.". Concerning propositions such as (c), in everyday English the phrase "provided that" is typically used to stipulate a condition or conditions under which a certain action or event will take place. If these are satisfied, the event will take place. Thus "You can watch TV provided that you brush your teeth." can first be rendered as "If you brush your teeth, then you can watch TV." and then into logically structured English as "If b then w.", using "b" for "You brush your teeth." and "w" for "You can watch TV.".


Concerning propositions such as (d) and (f), notice that in the English proposition the parts are not, grammatically, propositions. They are often gerunds that point to a certain state of affairs, such as "the sky's being blue" or "Jim's keys being on the table". In logically structured English, however, these phrases are turned into propositions. Concerning propositions such as (f), when we describe one thing as being necessary for another, we are saying that if the condition is not satisfied, then the subsequent state of affairs will not be. For example, "120 credit hours are necessary for a student to graduate." states that if a student does not have 120 credit hours, she cannot graduate. So we can translate "S is necessary for T." into logically structured English as "If not S, then not T.".

11.6 The Big 8 Method; Asserting The Antecedent

1. The Big 8 method requires that the propositions in a given argument be in logically structured English and be unambiguous. The Big 8 method can be used to determine the validity or invalidity of some arguments, in which the key logical players are negations, disjunctions, conjunctions, and/or conditionals. For each such argument, the natures of and/or the relations between the negations, the disjunctions, the conjunctions, and/or the conditionals in it are supposed to help make it valid. The Big 8 method for checking for validity goes like this. First, make a translation key for the argument, using a single, lower-case letter for each simple proposition. Second, translate the propositions of the argument into logically structured English. Third, put the argument in standard form relative to the key. Fourth, compare it with the 8 forms described below (AA, AC, CC, CA, HS, CD, DD, and DS). If it is identical in form to AA, CC, HS, CD, DD, or DS, then it is valid. If, instead, it is identical in form to AC or CA, then it is invalid. 2. Consider the following argument: If Jack is at the office, then the cat is on the mat.
Jack is at the office. So surely, the cat is on the mat. This argument is an instance of an argument form called asserting the antecedent (hereafter "AA"):


(1) If S, then T.
(2) S.
---------------
(3) T.

The easiest way to see this is to make a translation key. Let "o" stand for the proposition "Jack is at the office.", let "m" stand for "The cat is on the mat.", and replace the propositions with the letters: If o, then m. o. So surely, m. (Any letter can be used for any proposition (though a different one for each proposition). For purposes of familiarity, the letter used is usually the letter the verb begins with, except in cases where the verb is a form of "is" or "have", in which case the adjective's or object's first letter is used.) Then put the resulting argument in standard form with propositions in logically structured English:

(1) If o, then m.
(2) o.
----------------
(3) m.

Notice, this is identical in form to AA: one premise is a conditional, the other asserts the antecedent, and the conclusion asserts the consequent. Note that it would not matter if premises (1) and (2) appeared in reverse order. 3. AA is a valid argument form, and thus an argument is valid if it is an instance of AA. The argument about Jack and the cat, therefore, is valid. 4. Note that "S" and "T" stand for any proposition, simple or complex. The following are all instances of AA and are thus all valid:

(1) If a, then (b or c).
(2) a.
---------------------
(3) b or c.

(1) If (a and b), then c.
(2) a and b.
-----------------------
(3) c.

(1) If not a, then b.
(2) Not a.
-------------------
(3) b.


(1) If not a, then not b.
(2) Not a.
------------------------
(3) Not b.

(1) If (if a then b), then c.
(2) If a then b.
---------------------------
(3) c.

In every case, one premise asserts the antecedent of the premise which is a conditional, and the conclusion is the consequent of that conditional, no matter how complex the antecedent and consequent might be. The same holds true for all of the other Big 8 patterns: the propositions involved might be complex, but as long as we can find the pattern that matches, we can identify which type of argument is being employed.

5. Note: AA is also known as affirming the antecedent and modus ponens.

11.7 Asserting The Consequent

1. Now consider an argument having a slightly different form: If it is raining, then the ground outside is wet. And the ground outside is wet. Thus, it's raining. This is an instance of asserting the consequent (hereafter "AC"):

(1) If S, then T.
(2) T.
--------------
(3) S.

With "r" standing for "It is raining.", "w" standing for "The ground outside is wet.", and with the letters in place of the propositions, the argument looks like this: If r, then w. And w. Thus, r. In standard form with propositions in logically structured English, it looks thus:

(1) If r, then w.
(2) w.
--------------
(3) r.

This is identical in form to AC: the first premise is a conditional, the second premise asserts the consequent, and the conclusion asserts the antecedent.


2. In contrast to AA, AC is an invalid argument form. For, some instances of it have true premises and a false conclusion.

3. As with AA, the premises can appear in reverse order. And again, be aware that "S" and "T" stand for any proposition, simple or complex.

11.8 Contradicting The Consequent

1. We turn now to contradicting the consequent (hereafter "CC"):

(1) If S, then T.
(2) Not T.
-------------
(3) Not S.

Instead of asserting the antecedent, as in AA, the second premise contradicts the consequent; and instead of asserting the consequent, as in AC, the conclusion contradicts the antecedent. The following argument, for example, is an instance: If the President is in Ohio, he's talking about jobs being outsourced. He's not talking about jobs being outsourced, and so he's not in Ohio. With "o" standing for "The President is in Ohio." and "t" standing for "The President is talking about jobs being outsourced.", and with the letters in place of the propositions, the argument looks thus: If o, then t. Not t and so not o. This, in turn, looks as follows in standard form using logically structured English:

(1) If o, then t.
(2) Not t.
--------------
(3) Not o.

This is identical in form to CC: the first premise is a conditional, the second premise contradicts the consequent, and the conclusion contradicts the antecedent.

2. CC is a valid argument form, and hence the argument about the President's whereabouts is itself valid.

3. As with AA and AC, the premises might be reversed. And again, be aware that S and T stand for any proposition, simple or complex.

4. Note: CC is also known by the names denying the consequent and modus tollens.
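The contrast between AA (valid) and AC (invalid) can be checked mechanically. The following Python sketch is our own illustration, not part of the text's method: it enumerates every assignment of truth values and reports whether any makes all the premises true while the conclusion is false. The helper names (`is_valid`, `implies`) are our own.

```python
# Illustrative sketch (not part of the text's Big 8 method): a form is
# valid just when no assignment of truth values makes all of its premises
# true while its conclusion is false.
from itertools import product

def implies(a, b):
    # The material conditional: "If a, then b" is false only when a is
    # true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, num_vars):
    # premises and conclusion are functions from a tuple of truth values.
    for row in product([True, False], repeat=num_vars):
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample row
    return True

# AA: If S then T; S; therefore T.  (v[0] is S, v[1] is T.)
aa_valid = is_valid([lambda v: implies(v[0], v[1]), lambda v: v[0]],
                    lambda v: v[1], 2)
# AC: If S then T; T; therefore S.
ac_valid = is_valid([lambda v: implies(v[0], v[1]), lambda v: v[1]],
                    lambda v: v[0], 2)
print(aa_valid, ac_valid)  # True False
```

For AC, the row where S is false and T is true makes both premises true and the conclusion false, which is exactly the counterexample the text alludes to.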


11.9 Contradicting The Antecedent

1. In contradicting the antecedent (CA), instead of contradicting the consequent (as in CC), the second premise contradicts the antecedent; and instead of contradicting the antecedent (as in CC), the conclusion contradicts the consequent. The general form is:

(1) If S, then T.
(2) Not S.
--------------
(3) Not T.

Here is an example: If Smith knows that Jones owns a Ford, then he believes it and has a good reason for believing it. However, he doesn't know that Jones owns a Ford and, thus, he can't both believe it and have a good reason for believing it. With "k" standing for "Smith knows that Jones owns a Ford.", with "b" standing for "Smith believes that Jones owns a Ford.", with "g" standing for "Smith has a good reason for believing that Jones owns a Ford.", and with the letters in place of the propositions, the argument looks like this: If k, then b and g. However, not k and, thus, not (b and g). In standard form using logically structured English, this, in turn, is translated into:

(1) If k, then (b and g).
(2) Not k.
----------------------
(3) Not (b and g).

This, notice, is identical in form to CA: the first premise is a conditional, the second premise contradicts the antecedent, and the conclusion contradicts the consequent.

2. CA, like AC and unlike AA and CC, is an invalid argument form.

3. As with AA, AC and CC, the premises might be reversed. And again, be aware that S and T stand for any proposition, simple or complex.

4. The trick to distinguishing between AA, AC, CC, and CA is to compare the premise that is not a conditional with the premise that is a conditional. If the former asserts the antecedent as it appears in the conditional, then the argument is an instance of AA; if it asserts the consequent, then the argument is an instance of AC; if it contradicts the consequent, then the argument is an instance of CC; and if it contradicts the antecedent, then it is an instance of CA.
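The comparison trick in the last paragraph can be pictured as a simple lookup. A minimal sketch, in which propositions are modelled as strings and tuples (a representation of our own, not the text's notation):

```python
# A sketch of the comparison described above: given the conditional
# premise's antecedent and consequent, classify the argument by how the
# non-conditional premise relates to them.
def classify(antecedent, consequent, other_premise):
    if other_premise == antecedent:
        return "AA"  # asserts the antecedent (valid)
    if other_premise == consequent:
        return "AC"  # asserts the consequent (invalid)
    if other_premise == ("not", consequent):
        return "CC"  # contradicts the consequent (valid)
    if other_premise == ("not", antecedent):
        return "CA"  # contradicts the antecedent (invalid)
    return "none of AA/AC/CC/CA"

# The Smith/Jones example: "If k, then (b and g). Not k." is CA.
print(classify("k", ("and", "b", "g"), ("not", "k")))  # CA
```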


11.10 Hypothetical Syllogism, Constructive Dilemma, Destructive Dilemma & Disjunctive Syllogism

1. In addition to instances of AA, AC, CC, and CA, instances of the following argument forms are common:

(1) If S, then T.
(2) If T, then U.
--------------
(3) If S, then U.

(1) If S, then T.
(2) If U, then V.
(3) S or U.
---------------
(4) T or V.

(1) If S, then T.
(2) If U, then V.
(3) Not T or not V.
-------------------
(4) Not S or not U.

(1) S or T.
(2) Not S.
--------
(3) T.

The first one is hypothetical syllogism (hereafter "HS"), the second one is constructive dilemma (hereafter "CD"), the third is destructive dilemma (hereafter "DD"), and the fourth is disjunctive syllogism (hereafter "DS").

2. HS, CD, DD, and DS are valid argument forms. Note that, of the eight forms of the Big 8, CD and DD are the only ones with three premises.

3. A fallacy particularly associated with disjunctive syllogism (though with the truth of the premises rather than the form of the argument) is the fallacy of false choice or false dilemma. Consider the following: Mom, I would like to borrow the car or stay out later than normal tonight. Since you won't loan me the car, I guess I'll stay out later. This argument presents a false choice. The first sentence states that there are only two options: borrowing the car or staying out later than normal. But the audience (the mother) would be quite justified in refuting this argument by pointing out that this premise is false on the grounds that there are other possibilities.
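As with AA, the validity of these four forms can be confirmed by brute force. The sketch below is our own illustration, not the text's method: it checks that HS, CD, DD, and DS have no counterexample rows.

```python
# Illustrative brute-force check that HS, CD, DD, and DS are valid. The
# tuple v unpacks as (s, t, u) or (s, t, u, w) depending on how many
# letters a form uses; helper names are our own.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion, n):
    return not any(
        all(p(v) for p in premises) and not conclusion(v)
        for v in product([True, False], repeat=n)
    )

hs_ok = is_valid([lambda v: implies(v[0], v[1]),
                  lambda v: implies(v[1], v[2])],
                 lambda v: implies(v[0], v[2]), 3)
cd_ok = is_valid([lambda v: implies(v[0], v[1]),
                  lambda v: implies(v[2], v[3]),
                  lambda v: v[0] or v[2]],
                 lambda v: v[1] or v[3], 4)
dd_ok = is_valid([lambda v: implies(v[0], v[1]),
                  lambda v: implies(v[2], v[3]),
                  lambda v: (not v[1]) or (not v[3])],
                 lambda v: (not v[0]) or (not v[2]), 4)
ds_ok = is_valid([lambda v: v[0] or v[1], lambda v: not v[0]],
                 lambda v: v[1], 2)
print(hs_ok, cd_ok, dd_ok, ds_ok)  # True True True True
```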


11.11 Recap Of The Big 8 Method

1. To repeat: the Big 8 method for checking for validity goes like this. First, make a translation key for the argument, using a single, lower-case letter for each simple proposition. Second, translate the propositions into logically structured English. Third, put the argument in standard form relative to the key. Fourth, compare it with the forms described below (AA, AC, CC, CA, HS, CD, DD, and DS). If it is identical in form to AA, CC, HS, CD, DD, or DS, then it is valid. If it is identical to AC or CA, it is invalid. Consider one final example: If Smith burnt down the house he will be in huge trouble with his wife. Smith did not burn it down. So, he is not in huge trouble with his wife. The Big 8 method gives the right result. Using obvious translation letters, in standard form using logically structured English, this comes to the following:

(1) If b, then t.
(2) Not b.
--------------
(3) Not t.

This is identical in form to CA, and so is an instance of CA and, thus, is invalid.


11.12 A Summary Of Argument Forms: The Big 8

The following six forms are valid:

Asserting the Antecedent (AA)
(1) If S then T.
(2) S.
-------------
(3) T.

Contradicting the Consequent (CC)
(1) If S then T.
(2) not T.
-------------
(3) not S.

Hypothetical Syllogism (HS)
(1) If S then T.
(2) If T then U.
--------------
(3) If S then U.

Disjunctive Syllogism (DS)
(1) S or T.
(2) not S.
--------
(3) T.

Constructive Dilemma (CD)
(1) If S then T.
(2) If U then V.
(3) S or U.
---------------
(4) T or V.

Destructive Dilemma (DD)
(1) If S then T.
(2) If U then V.
(3) not T or not V.
------------------
(4) not S or not U.

The following two forms are invalid:

Asserting the Consequent (AC)
(1) If S then T.
(2) T.
-------------
(3) S.

Contradicting the Antecedent (CA)
(1) If S then T.
(2) not S.
-------------
(3) not T.


Chapter 12 The Method Of Derivation

12.1 Advantages Of The Method Of Derivation

1. The Big 8 method has some shortcomings. Instances of AA, AC, CC, CA, HS, CD, DD, or DS have one and only one conclusion. Yet people often argue in stages, reaching an interim conclusion before going on to reach an ultimate conclusion, at each stage working with only a selection of the available information. The method of derivation handles such cases by treating the valid argument forms as rules of derivation and allowing repeated use of them. Further, an argument is an instance of AA, AC, CC, CA, HS, CD, DD, or DS only if it has no more than three premises and, thus, the Big 8 method is silent concerning arguments having more than three premises. Yet people often give arguments that have only one premise or more than three premises. Consider the following argument: If both Smith and Jones win their matches, the team will finish in third place. So obviously, if Smith wins his match, then the team will finish in third place if Jones wins his match. With "s" standing for "Smith will win his match.", "j" standing for "Jones will win his match.", and "t" standing for "The team will finish in third place.", in standard form using logically structured English the argument looks thus:

(1) If (s and j) then t.
------------------------
(2) If s then (if j then t).

There is, in fact, no imaginable scenario in which the premise is true and the conclusion is false and, thus, the argument is valid. Since it has only one premise and since an argument is an instance of AA, AC, CC, CA, HS, CD, DD, or DS only if it has at least two premises, it is not an instance of AA, AC, CC, CA, HS, CD, DD, or DS. The method of derivation handles this problem by expanding the number of valid argument forms beyond the 8 of the Big 8 method.

12.2 Logically Structured Symbolic Propositions

1.
The Method of Derivation, like the Big 8 method, works with arguments in which the key logical players are negations, disjunctions, conjunctions, and/or conditionals. Before we look at the method of derivation itself, we will take another step
in symbolizing such English propositions by using symbols for the logical words. We call these logically structured symbolic propositions (or symbolic for short). This symbolic language is also commonly known as sentential logic.

2. Writing symbolic propositions requires three kinds of components, two of which will be familiar from logically structured English. First, it has lower-case letters. For example, "b", "c", and "d". Second, it has special symbols called "operators": the tilde (pronounced TIL-de) "~", the wedge (or "vel") "∨", the ampersand "&", and the horseshoe "⊃". Third, it has parentheses "( )". (Note also that we cease using periods at the end of propositions.) In translating an argument into symbolic, use letters to stand for (a) simple propositions and (b) compound propositions that are not negations, disjunctions, conjunctions or conditionals. This is just like in logically structured English. Use the tilde to stand for "it is not the case that" and "not" in negations. Use the wedge to stand for "or" in disjunctions. Use the ampersand to stand for "and" in conjunctions. And use the horseshoe to stand for "if ..., then ..." in conditionals. The horseshoe should be placed where the "then" would go in logically structured English.

3. Some propositions in symbolic are well-formed, and some are not. First, a lower-case letter is a well-formed proposition. "b" and "c", for example, are well-formed. "B" and "C", in contrast, are ill-formed. Second, a lower-case letter with a tilde in front of it is a well-formed proposition. "~b" and "~k", for example, are well-formed, whereas "b~" and "k~" are ill-formed. Third, two lower-case letters with either a wedge, an ampersand, or a horseshoe between them is a well-formed proposition. "b ∨ c", "h & l", and "b ⊃ q", thus, are well-formed. "b &", "∨ b g", and "jk ⊃ r", however, are not well-formed. Fourth, a well-formed proposition with a tilde in front of it is a well-formed proposition. So, "~(b ∨ c)" and "~(p & q)" are well-formed. The parentheses make it clear that the tilde is the main operator, in that they make it clear that the propositions have the form "~S" rather than "~S ∨ T" or "~S & T". Fifth, two well-formed propositions with either a wedge, an ampersand, or a horseshoe between them is a well-formed proposition. "(b ∨ c) & (d ∨ e)" and "b ⊃ ((d ∨ c) & e)", for example, are well-formed. In the first proposition the parentheses make it clear that the ampersand is the main operator, in that they make it clear that the proposition has the form "S & T". In the
second, the parentheses make it clear that the horseshoe is the main operator. Sixth, everything else is ill-formed.

4. As in logically structured English, for propositions with two or more operators, parentheses are often needed to avoid ambiguity. Consider, for example, the proposition "p ∨ q & r". On one reading the wedge is the main operator and, thus, it has the form "S ∨ T", where "T" is "q & r". On another reading the ampersand is the main operator and, thus, it has the form "S & T", where "S" has the form "U ∨ V". Adding parentheses like this "(p ∨ q) & r" would make the ampersand the main operator, whereas adding parentheses like this "p ∨ (q & r)" would make the wedge the main operator. As it stands, "p ∨ q & r" is ill-formed. Note: there are two exceptions. First, a letter with two or more tildes in front of it is a well-formed proposition. The first tilde (i.e., the one furthest to the left) is the main operator. "~~~d", for example, is well-formed, and the tilde furthest from "d" is the main operator. Second, propositions like the proposition "~b & c" are well-formed. The tilde is understood to modify whatever follows it immediately. So in this case, the tilde modifies "b" and the ampersand is the main operator. In the proposition "~(b & c)", since parentheses immediately follow the tilde, the negation applies to "b & c" and not just to "b". Note also that a tilde can follow another symbol. All of the following are well-formed: "b & ~c", "b ∨ ~c", "b ⊃ ~c". All other combinations of two or more symbols are ill-formed.

5. Consider, for example, the following propositions:

(a) The cat is on the mat.
(b) The cat is on the mat, or my grandma is in Barcelona.
(c) The cat isn't on the mat.
(d) Jack is slightly amused if either the cat is on the mat or the dog is chasing the frog.
(e) Nixon served as the President of the United States before Reagan served as the President of the United States.
(f) It is well known that the earth is not flat.

(a) is simple and, thus, gets represented as "m", where "m" stands for "The cat is on the mat.". (b) is a disjunction and, thus, gets represented as "m ∨ b", where "b" stands for "My grandma is in Barcelona.". (c) is a negation and, therefore, gets represented as "~m". (d) is a conditional and, so, gets represented as "(m ∨ c) ⊃ a", where "c" stands for
"The dog is chasing the frog." and "a" stands for "Jack is slightly amused.". Although (e) and (f) are compound propositions, they get represented by letters ("s" and "k", for example) for neither is a negation, a disjunction, a conjunction, nor a conditional.

12.3 Rules Of Derivation

1. A rule of derivation is a rule describing a valid derivation (or: inference), from a set of one or more premises to a conclusion. The argument forms of the Big 8 method can be interpreted as rules of derivation. For example, asserting the antecedent (AA) can be understood as a rule which says it is valid to derive "T" from the propositions "S ⊃ T" and "S" if they appear anywhere in the argument. (As before, capital "S" and "T" are variables standing for any proposition, either simple or complex.) In giving a derivation, we are interested only in the valid rules; we do not use AC or CA. Here are the Big 8's valid rules, in symbolic: asserting the antecedent, contradicting the consequent, hypothetical syllogism, constructive dilemma, destructive dilemma, and disjunctive syllogism.

Asserting the Antecedent (AA)
(k) S ⊃ T
 .
(l) S
 .
(m) T          k, l AA

Contradicting the Consequent (CC)
(k) S ⊃ T
 .
(l) ~T
 .
(m) ~S         k, l CC

Hypothetical Syllogism (HS)
(k) S ⊃ T
 .
(l) T ⊃ U
 .
(m) S ⊃ U      k, l HS

Disjunctive Syllogism (DS)
(k) S ∨ T
 .
(l) ~S
 .
(m) T          k, l DS

Constructive Dilemma (CD)
(k) S ⊃ T
 .
(l) U ⊃ V
 .
(m) S ∨ U
 .
(n) T ∨ V      k, l, m CD

Destructive Dilemma (DD)
(k) S ⊃ T
 .
(l) U ⊃ V
 .
(m) ~T ∨ ~V
 .
(n) ~S ∨ ~U    k, l, m DD


'(k)', '(l)', '(m)' and '(n)' are any numbered lines in a derivation. The dots between lines indicate that the propositions which are used as premises can appear anywhere in the argument (and indeed, they can appear in any order). Using AA as an example, "T" is a conclusion derived from the propositions "S ⊃ T" and "S" which appear above it in the derivation. To the right of the conclusion we write 'k, l AA', which means that we derived "T" on line (m) from the information on lines (k) and (l) using AA. Each of "S ⊃ T" and "S" (on lines (k) and (l)) might be either a premise from the original argument or an interim conclusion previously derived in the argument which we can use in a new sub-argument. If a proposition on a line is a premise given in the original argument it will have the word "Premise" written to its right; if a proposition on a line is an interim conclusion, to its right will be written the numbers of the lines of the propositions used and the rule used to infer it, just like the ultimate conclusion. (See the next section, just below, for more.) Notice that (as with the Big 8 method) DS involves negating the left-hand disjunct ("~S" on line (l)) and deriving the right-hand disjunct ("T" on line (m)). As part of the method of derivation we will later introduce a rule of equivalence called commutation which states that "S ∨ T" is equivalent to "T ∨ S". This will allow us to derive "S" when we have "~T" on line (l).

12.4 The Method Of Derivation

1. We can attempt to derive (or, prove) the conclusion of an argument from the premises using the method of derivation, that is, by using the rules of derivation successively as many times as are needed, using a sub-set of the premises at a time. If a derivation (or, proof) can be found, the argument is valid.

2. Consider the following argument, in standard form:

(1) If logic is beneficial, then mathematics is beneficial.
(2) Logic is beneficial.
-------------------------------------------------------------------
(3) Mathematics is beneficial.

This argument is an instance of AA: (3) follows validly from (1) and (2) by use of the rule AA. Using the obvious proposition letters, the complete derivation is written as follows:


(1) l ⊃ m     Premise
(2) l         Premise     Conclusion: m
(3) m         1, 2 AA

We begin by setting out the premises and conclusion(s) of the argument. We write down and number the premises, and to the right of each premise we write down the word "Premise" to indicate that the proposition on that line is a premise. The word "Conclusion" and the conclusion are written down off to the far right, on the same line as the final premise. On the lines below the premises are written the result of any applications of the rules of derivation. In this case there is only one more line: line (3). It is labeled to the right with information about how the proposition on line (3) was derived. We write down (i) the lines of the propositions from which it was derived and (ii) the rule used to derive the proposition. In this case, we are able to derive the conclusion by a single application of AA from premises on lines (1) and (2). On line (3) we write down the result of the derivation "m" and indicate that it was derived from premises (1) and (2) using AA. Notice that there is no horizontal line dividing the premises from the results of applying the rules of derivation as there is when we write arguments in standard form. There is no need for a dividing line between premises and conclusion (as in standard form and the Big 8 method) because each line is labeled as either a premise or the result of an application of a rule of derivation.

3. Consider the following argument: Either I failed History 101 or I passed it. I know I didn't fail it, so I passed. And if I passed 101 I can register for History 201. So, I can register for History 201. Here, the speaker reaches an interim conclusion ("I passed.") before going on to derive another, ultimate, conclusion. The argument uses first DS and then AA to arrive at the conclusion. To show this in the method of derivation, we begin by setting out the premises and conclusions of the argument, as follows:

(1) f ∨ p     Premise
(2) ~f        Premise     Interim Conclusion: p
(3) p ⊃ r     Premise     Conclusion: r

We write down all of the premises in the initial set-up, even though in the passage the final premise appears after the interim conclusion. Any conclusions appear
to the right. In this case, there is an interim conclusion ("p", "I passed.") in addition to the ultimate conclusion. We want to check whether the speaker reasoned well at each step, and not just with respect to the ultimate conclusion, so we will check whether or not the speaker validly inferred the interim conclusion by deriving it from the premises. So we write it off to the right, above the ultimate conclusion. After we have set out the premises and conclusion(s), we write the conclusion of each successive application of the rules of derivation on the next line below the final premise and number the lines successively. The result of the final application, written on the last line of the complete derivation, should be the conclusion we are seeking. In this case, it is clear that the interim conclusion ("I passed.") was reached validly, using the rule of disjunctive syllogism (DS): The speaker began by presenting a disjunction and then denied one of the disjuncts. Using just these two premises and the DS rule, "p" can be derived. In our derivation, we add this on line (4), as follows:

(1) f ∨ p     Premise
(2) ~f        Premise     Interim Conclusion: p
(3) p ⊃ r     Premise     Conclusion: r
(4) p         1, 2 DS

Next, lines (3) and (4) are combined to give the ultimate conclusion "r" using AA. This is the conclusion we are seeking. The derivation is as follows:

(1) f ∨ p     Premise
(2) ~f        Premise     Interim Conclusion: p
(3) p ⊃ r     Premise     Conclusion: r
(4) p         1, 2 DS
(5) r         3, 4 AA
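The two-step derivation above can be mimicked in code. A toy sketch of our own (not the text's notation): propositions are modelled as strings and tuples, and the rules DS and AA become functions from the lines they cite to the derived line.

```python
# Toy model of the History 101 derivation: DS and AA as functions that
# take the premise lines they cite and return the derived line.
def ds(disjunction, negation):
    # From "S or T" and "not S", derive "T".
    op, left, right = disjunction
    assert op == "or" and negation == ("not", left)
    return right

def aa(conditional, antecedent):
    # From "If S then T" and "S", derive "T".
    op, ant, cons = conditional
    assert op == "if" and antecedent == ant
    return cons

line1 = ("or", "f", "p")   # (1) f or p        Premise
line2 = ("not", "f")       # (2) not f         Premise
line3 = ("if", "p", "r")   # (3) if p then r   Premise
line4 = ds(line1, line2)   # (4) p             1, 2 DS
line5 = aa(line3, line4)   # (5) r             3, 4 AA
print(line4, line5)  # p r
```

As in the written derivation, the interim conclusion "p" on line (4) is then reused as input to AA on line (5).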

4. Here is one more example: If Spurs win this weekend then they will top the league table. If Spurs are on top, attendance figures will rise, and then the club will be able to fix the stadium. So, if they win this weekend, repairs can be made. In standard form, using symbolic with obvious proposition letters to translate, the argument can be written as follows:

(1) w ⊃ t
(2) t ⊃ a
(3) a ⊃ r
---------
(4) w ⊃ r


In the method of derivation, we can use the rules of derivation successively to reach the desired conclusion. The conclusion of one derivation can be used as a premise in another. In this case, we can use HS twice to reach the conclusion, as follows:

(1) w ⊃ t     Premise
(2) t ⊃ a     Premise
(3) a ⊃ r     Premise     Conclusion: w ⊃ r
(4) w ⊃ a     1, 2 HS
(5) w ⊃ r     4, 3 HS

Having derived "w ⊃ a" on line (4) from (1) and (2), we combine it with line (3) in a new instance of HS, which yields the desired conclusion "w ⊃ r". Notice that to the right of line (5) we write "4, 3 HS" rather than "3, 4 HS". This is merely a piece of etiquette which indicates that, if we follow the order of presentation as given by the rule, we must consider line 4 as coming first.

5. Propositions can be used more than once in the course of a complete derivation. Consider the following argument, in standard form using symbolic:

(1) a ⊃ (~c ⊃ b)
(2) ~c
(3) c ∨ a
----------------
(4) b

The derivation of "b" from the premises requires the use of the premise on line (2) twice, as follows:

(1) a ⊃ (~c ⊃ b)     Premise
(2) ~c               Premise
(3) c ∨ a            Premise     Conclusion: b
(4) a                3, 2 DS
(5) ~c ⊃ b           1, 4 AA
(6) b                5, 2 AA

6. Often, when trying to show an argument to be valid using the method of derivation, it is useful to begin with the conclusion and work backwards. This is especially useful when the speaker does not provide interim conclusions which tell us the process of reasoning which she has used. Consider the following argument, whose conclusion is "b". We write the conclusion some distance below the premises, as follows:


(1) c ∨ a     Premise
(2) ~c        Premise
(3) a ⊃ b     Premise     Conclusion: b
 .
 .
(m) b

The 'm' in brackets is the number that this line will have in the completed derivation. We do not yet know what it is. Working from the conclusion backwards, we think as follows: 'Ultimately, we want to conclude "b". Where among the premises do we see "b"?' In this case, we see "b" in the third premise "a ⊃ b". "b" is the consequent, which suggests the rule AA. We think 'If we had "a" by itself, then we could combine "a" and line 3 in order to derive "b".'

(1) c ∨ a     Premise
(2) ~c        Premise
(3) a ⊃ b     Premise     Conclusion: b
 .
 .
(l) a
(m) b         3, l AA

Now we ask the same question again: 'Where among the premises do we see "a"?' We see "a" in line 3, of course, but we want to use line 3 to get the conclusion on line m, and this suggests that line 3 will not be used on line l (though as we just saw in subsection 5, sometimes the same proposition is used more than once in a derivation). We also see "a" in line 1 "c ∨ a". If we had "~c" we could derive "a" from "~c" and line 1 using DS.

(1) c ∨ a     Premise
(2) ~c        Premise
(3) a ⊃ b     Premise     Conclusion: b
 .
(k) ~c
(l) a         1, k DS
(m) b         3, l AA

Now our question is: 'Where among the premises do we see "~c"?' We see it on line 2 and, what's more, we do not need to perform any further derivation: Line (k) is line (2) and we do not in fact need to write '~c' on its own line a second time (line (k)), since it already appears by itself on a line (line (2)). Our complete derivation is written as follows. It can be useful to read it from the bottom up: "b" was derived from 3 and 4 using AA. AA requires the antecedent of the conditional ("a") by itself, and this was derived (on line 4) from 1 and 2, using DS.

(1) c ∨ a     Premise
(2) ~c        Premise
(3) a ⊃ b     Premise     Conclusion: b
(4) a         1, 2 DS
(5) b         3, 4 AA

The general strategy is as follows: Look at the conclusion and find it, or its negation, among the premises. If the conclusion is found among the premises as ...

... a consequent, think of using AA and search for the antecedent.
... a negated antecedent, think of using CC and search for the negation of the consequent.
... a conditional, think of using HS and search for two conditionals, one having the antecedent of the conclusion as an antecedent, and the other having the consequent of the conclusion as a consequent.
... the right-hand disjunct, think of using DS and search for the negation of the left-hand disjunct.
... a disjunction, think of using CD or DD and search for the two conditionals and the appropriate disjunction.

For each proposition sought, repeat this process until the proposition(s) needed are among the original premises.

12.5 Three Additional Rules Of Derivation

1. Additional rules of derivation can be added in order to expand the range of arguments that can be shown to be valid by the method of derivation. Two common rules are:

Addition (Add.)
(k) S
 .
 .
(l) S ∨ T     k, Add.


Conjunction (Conj.)
(k) S
 .
(l) T
 .
(m) S & T     k, l Conj.

2. Do not confuse Add. and Conj.. Their names suggest something quite similar, but one produces a disjunction while the other produces a conjunction.

3. Add. allows us to add anything at all to an existing proposition. This might seem a strange rule at first, as it throws away information, but it does have an important use when a disjunction is involved in a conditional, but only one disjunct appears elsewhere. Consider the following argument:

(1) (s ∨ (b & g)) ⊃ c     Premise
(2) s                     Premise     Concl: c

The conclusion appears amongst the premises as the consequent in line 1, the antecedent being "s ∨ (b & g)". We can use addition of "b & g" to line 2 to get the necessary antecedent, as follows:

(1) (s ∨ (b & g)) ⊃ c     Premise
(2) s                     Premise     Concl: c
(3) s ∨ (b & g)           2 Add.
(4) c                     1, 3 AA

Note that, however it is used, Add. is perfectly valid. If we grant that "S" is true, then it follows necessarily that "S ∨ T" would be true, for the supposed truth of "S" is sufficient to make "S ∨ T" true.

4. Another common rule is Simplification:

Simplification (Simp.)
(k) S & T
 .
 .
(l) S     k, Simp.

In chapter 2, you were instructed to break English propositions which are conjunctions into two parts and so after your analysis you will not have a proposition involving a conjunction of two simple propositions, such as "a & b". However, sometimes a conjunction will itself be involved in a complex proposition, as in the
proposition "d ⊃ (a & b)", from which we might need to derive "a" by itself. In such cases, it will be necessary to use Simp. in order to derive "a". Notice that Simp. only allows us to derive the left hand conjunct. As part of the method of derivation we will introduce in the next section a rule of equivalence called "commutation" which states that "S & T" is equivalent to "T & S" (and also that "S ∨ T" is equivalent to "T ∨ S"). Having applied this rule of equivalence to line (k) we will then be able to use Simp. to derive "T".

5. These new rules suggest certain additional strategies when trying to work backwards from the conclusion. If the conclusion is found among the premises as ...

... a left-hand conjunct, think of using Simp.;
... a conjunction, think of using Conj..

And ...

... if the conclusion is a disjunction whose right-hand disjunct is not among the premises, think of using Addition. (And if the conclusion is a proposition which is not among the premises, think of using Add. followed by DS. (This is equivalent to finding a contradiction among, or deriving a contradiction from, the premises.))

12.6 Rules Of Equivalence

1. Rules of equivalence can be used to replace a proposition or any part of a proposition. The most obvious such equivalence is double negation:

Double Negation (DN)
S <--> ~~S

The double arrow "<-->" indicates an equivalence. Equivalence rules permit us, at any stage of a derivation, to substitute what appears on the left side of the double arrow with what is on the right, and vice versa. This means that we can convert in either direction, as follows:

(k) S
 .
 .
(l) ~~S     k, DN

and

(k) ~~S
 .
 .
(l) S     k, DN

2. Consider the following argument: Since sensory experiences cannot serve as good reasons only if coherentism is true, and coherentism is not true, thus sensory experiences can serve as good reasons.

Using the obvious translation key, we can write the argument in standard form using symbolic as follows:

(1) ~s ⊃ c
(2) ~c
----------
(3) s

The desired conclusion is "s". The premises suggest that the conclusion is reached by CC, but using CC yields "~~s", rather than "s". So, we convert "~~s" to "s" using the equivalence DN. The full derivation is written as follows:

(1) ~s ⊃ c     Premise
(2) ~c         Premise     Conclusion: s
(3) ~~s        1, 2 CC
(4) s          3, DN

3. We can employ equivalences on whole propositions or on partial propositions. In this respect equivalences are different from rules of derivation. So, for example, if an argument contained the line ...

(k) ~~S ⊃ T

... we could employ DN on the antecedent only to convert to ...

(n) S ⊃ T     k, DN

4. Another rule of equivalence is commutation.

Commutation (Comm.)
S ∨ T <--> T ∨ S and S & T <--> T & S

Commutation states, in effect, that the order of appearance of propositions in a disjunction or conjunction makes no logical difference. It is important because we have defined only one ("left-hand") version of each of DS and Simp.. Given these definitions, it is not a valid inference to argue as follows:

(1) s ∨ t
(2) ~t
(3) s        1, 2 DS

Rather we must proceed as follows:

(1) s ∨ t
(2) ~t
(3) t ∨ s    1 Comm.
(4) s        3, 2 DS


Similarly, the following derivation is not valid:

(1) a & b
(2) b        1 Simp.

Rather we must argue as follows:

(1) a & b
(2) b & a    1 Comm.
(3) b        2 Simp.
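The detour through Comm. is required by the left-hand-only statements of DS and Simp.; the inference from "s ∨ t" and "~t" to "s" is nonetheless truth-functionally valid either way. A quick brute-force check in Python (a sketch, not part of the book's method of derivation) confirms that no assignment makes the premises true and the conclusion false:

```python
from itertools import product

# s v t ; ~t ; therefore s  -- search every assignment for a counterexample.
counterexamples = [
    (s, t) for s, t in product([True, False], repeat=2)
    if (s or t) and (not t) and not s
]
print(counterexamples)  # an empty list: no counterexample, so the inference is valid
```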

5. Association is defined as follows:

Association (Ass.)
(S ∨ T) ∨ U <--> S ∨ (T ∨ U)
and
(S & T) & U <--> S & (T & U)

Association states, in effect, that when there are three propositions concatenated by disjunction, or by conjunction, it makes no logical difference whether we treat the first wedge or ampersand as the main operator or the second. Indeed, when combined with commutation, we see that it would make no logical difference if we took the first and third as a pair, as follows:

(1) (a ∨ b) ∨ c
(2) a ∨ (b ∨ c)    1 Ass.
(3) (b ∨ c) ∨ a    2 Comm.
(4) b ∨ (c ∨ a)    3 Ass.

6. Other common rules of equivalence are exportation, transposition, material implication, and De Morgan's rule. These are defined as follows:

Exportation (Exp.)
S ⊃ (T ⊃ U) <--> (S & T) ⊃ U

Transposition (Trans.)
S ⊃ T <--> ~T ⊃ ~S

Material Implication (MI)
S ⊃ T <--> ~S ∨ T
and
S ⊃ T <--> ~(S & ~T)

De Morgan's Rule (DM)
~(S & T) <--> ~S ∨ ~T
and
~(S ∨ T) <--> ~S & ~T
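Each of these equivalences can be confirmed truth-functionally: the two sides agree under every possible assignment of truth values. A minimal Python sketch (the helper name "imp" and the checking code are mine, not the book's):

```python
from itertools import product

def imp(s, t):
    """Material conditional: the horseshoe as a truth function."""
    return (not s) or t

# Each rule of equivalence as a pair of truth functions over S, T, U.
equivalences = {
    "Exp.":   (lambda s, t, u: imp(s, imp(t, u)), lambda s, t, u: imp(s and t, u)),
    "Trans.": (lambda s, t, u: imp(s, t),         lambda s, t, u: imp(not t, not s)),
    "MI":     (lambda s, t, u: imp(s, t),         lambda s, t, u: (not s) or t),
    "DM":     (lambda s, t, u: not (s and t),     lambda s, t, u: (not s) or (not t)),
}

# Two propositions are equivalent when they agree on every row of the table.
results = {
    name: all(lhs(s, t, u) == rhs(s, t, u)
              for s, t, u in product([True, False], repeat=3))
    for name, (lhs, rhs) in equivalences.items()
}
print(results)
```

Every entry comes out True; the truth table method of the next chapter performs the same check by hand.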


(Note: The rule of equivalence called "material implication" should not be confused with the use of the same term to mean a conditional proposition.) These equivalences occur in ordinary English and can be intuitively seen to be logical equivalences. Compare the following pairs of propositions:

If it rains, then, if the field is water-logged, then the game is cancelled.
If it rains and the field is water-logged, then the game is cancelled.

If salt is soluble in water, it (salt) is ionic.
If salt is not ionic, it is not soluble in water.

If salt is soluble in water, it (salt) is ionic.
Either salt is not soluble in water, or it is ionic.

If it's raining, the river will burst its banks.
Either it's not raining or the river will burst its banks.
It can't rain and the river not burst its banks. (Often expressed as: It can't rain without the river bursting its banks.)

It can't both be raining and warm.
It is either not raining or not warm.

It is not either raining or warm.
It is not raining and it is not warm. (Often expressed as: It is neither raining nor warm.)

The propositions in each set are equivalent to one another. The sets demonstrate, in succession, exportation, transposition, both versions of material implication, and both versions of De Morgan's. (The truth table method in the next chapter can be used to verify these equivalences.)

7. These rules of equivalence, especially MI and DM, suggest additional strategies for reaching a conclusion in a derivation. We might try using a rule of equivalence to convert the conclusion into a proposition with a different operator. MI converts conditionals into negated conjunctions or disjunctions, and vice versa, while DM converts negated conjunctions into disjunctions of negations, and vice versa, and negated disjunctions into conjunctions of negations, and vice versa. Consider the following example:

Xena will go out with Jack only if he is a bachelor, and, he will go out with her only if she likes orange juice. But since Jack is not a bachelor and Xena does not like orange juice, it's not the case that she will go out with him or he with her.

Let the argument be rendered as follows:


(1) x ⊃ b    Premise
(2) j ⊃ o    Premise
(3) ~b       Premise
(4) ~o       Premise

Conclusion: ~(x ∨ j)

The conclusion is "~(x ∨ j)". Since the conclusion is a negation, we might hope to derive it by using CC and so look for "x ∨ j" as the antecedent of a conditional. But we do not see this among the premises. We suspect that an equivalence is necessary. We can convert the conclusion into the conjunction "~x & ~j" by using DM. The conjunction in turn suggests a conjunction of "~x" and "~j". "~x" can be derived by CC from lines 1 and 3; "~j" can also be derived by CC, from lines 2 and 4. The derivation can be written as follows:

(1) x ⊃ b       Premise
(2) j ⊃ o       Premise
(3) ~b          Premise
(4) ~o          Premise
(5) ~x          1, 3 CC
(6) ~j          2, 4 CC
(7) ~x & ~j     5, 6 Conj.
(8) ~(x ∨ j)    7 DM

Conclusion: ~(x ∨ j)
12.7 Conditional Derivations & Indirect Derivations

1. Oftentimes, people suppose something for the sake of argument and from this supposition argue for something further. They can then conclude that, if the supposition is (or turns out to be) true, the conclusion follows. Assumptions of this sort are also used simply to help us think through the consequences of a proposition. Consider the following example:

Let's suppose that the El went by at midnight. If the El was going by, it would be really noisy. If there were a lot of noise, then the old man downstairs wouldn't have heard the struggle. So, if the El was going by, the old man wouldn't have heard the struggle. (Based on 12 Angry Men)

Such an argument is called a conditional argument, and the derivation which shows that a particular conditional argument is valid is called a conditional derivation.

2. A conditional derivation takes the following general form:


(k) S
 .
 .
(l) T          Assumption Cond.
 .
 .
(m) U
(n) T ⊃ U      l-m Cond.

Line (k) and the dotted lines immediately beneath it indicate that there might be additional lines prior to the conditional derivation. (The conditional derivation might be only a portion of a complete derivation.) Lines (l) through (m) are indented to indicate their conditional nature. That is, they are based on the assumption in line (l). Line (l) is labeled "Assumption Cond.", which means that it is an assumption for the purpose of a conditional derivation. Line (n) is the conclusion of the conditional derivation. It is not indented because it is not dependent on any assumption. It is labeled with all of the numbers of the lines which are dependent on the assumption, and "Cond." for "conditional derivation". For example, we had above the argument:

Let's suppose that the El went by at midnight. If the El was going by, it would be really noisy. If there were a lot of noise, then the old man downstairs wouldn't have heard the struggle. So, if the El was going by, the old man wouldn't have heard the struggle. (Based on 12 Angry Men)

The words "Let's suppose ..." indicate that the speaker is arguing conditionally, and the conclusion is appropriately a conditional one: if something is the case, some other state of affairs follows. The derivation which reaches this conclusion is written as follows, using obvious proposition letters:

(1) e ⊃ n      Premise
(2) n ⊃ ~h     Premise
(3) e          Assumption Cond.
(4) n          1, 3 AA
(5) ~h         2, 4 AA
(6) e ⊃ ~h     3-5 Cond.

Conclusion: e ⊃ ~h

We write the premises first, number them, and label them as premises. We indent the assumption, number it, and label it as an assumption for the purposes of a conditional derivation. The assumption is the antecedent ('e') of the conditional which appears in


the conclusion ("e ⊃ ~h"). When we reach the consequent of the conclusion ("~h") in the conditional derivation, we write the conditional conclusion on a non-indented line, number it, and label it with the range of lines used to arrive at it, and with "Cond." to indicate that it was arrived at by a conditional derivation.

3. Another common way of arguing is to suppose the negation of the conclusion in order to derive a contradiction. If assuming the negation leads to a contradiction, the (positive) conclusion is validly inferred. An argument of this form is an indirect argument, and the derivation which shows that a particular indirect argument is valid is called an indirect derivation. Consider the following example:

Home ownership will rise. Why so? Well, we know that if rates fall or gas prices fall, then home ownership rises. Now, suppose home ownership won't rise. So neither rates will fall nor gas prices. What's more, either gas prices will fall or inflation will rise. So, inflation will rise. And finally, if inflation rises then rates will fall and confidence will rise. So, rates will fall and confidence will rise. So, based on our assumption, rates will both rise and fall. There you have it: home ownership will rise.

4. An indirect derivation takes the following general form:

 .
 .
(k) ~S         Assumption Ind.
 .
 .
(l) T & ~T
(m) S          k-l Ind.

Conclusion: S

As always, the lines with dots indicate that there might be additional lines in the derivation. Lines (k) through (l) are indented to indicate their conditional nature. That is, they are based on the assumption in line (k). Line (k) is labeled "Assumption Ind.", which means that it is an assumption for the purpose of an indirect derivation. The reason for assuming the negation of the conclusion is to show that it leads to a contradiction, which appears here on line (l). Line (m) is the conclusion of the indirect derivation. It is not indented because it is not dependent on any assumption. It is labeled with all of the numbers of the lines which are dependent on the assumption, and "Ind." for "indirect derivation". For example, we saw above the argument:

Home ownership will rise. Why so? Well, we know that if rates fall or gas prices fall, then home ownership rises. Now, suppose home ownership won't rise. So neither rates will fall nor gas prices. What's more, either gas prices will fall or inflation will rise. So, inflation will rise. And finally, if inflation rises then rates will fall and confidence will rise. So, rates will fall and confidence will rise. So, based on our assumption, rates will both rise and fall. There you have it: home ownership will rise.

The derivation for this argument is written as follows, using obvious proposition letters:

(1) (r ∨ g) ⊃ h     Premise
(2) g ∨ i           Premise
(3) i ⊃ (r & c)     Premise
(4) ~h              Assumption Ind.
(5) ~(r ∨ g)        1, 4 CC
(6) ~r & ~g         5 DM
(7) ~g & ~r         6 Comm.
(8) ~g              7 Simp.
(9) i               2, 8 DS
(10) r & c          3, 9 AA
(11) r              10 Simp.
(12) ~r             6 Simp.
(13) r & ~r         11, 12 Conj.
(14) h              4-13 Ind.

Conclusion: h

As with derivations and conditional derivations, we begin by writing down the premises and only the premises. The assumption "~h" is then entered and labeled as an assumption. The assumption is indented, as are any lines dependent upon the assumption. When a contradiction is reached, the un-negated assumption is entered on a non-indented line.

5. Indirect derivations are also called reductio ad absurdum because the general strategy is to show that something absurd or false or undesirable follows if we grant some proposition. If a proposition can be shown to lead to something impossible or morally unconscionable or strongly undesirable, that is sufficient reason for accepting its denial. This method is used frequently in mathematics as, for example, in various of the proofs of Euclid, in order to show that a certain proposition is a necessity (given the other postulates and theorems of the system).
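The indirect derivation above establishes validity syntactically. As an external sanity check, a few lines of Python can confirm that no assignment of truth values makes the three premises of the home-ownership argument true and its conclusion false (a sketch anticipating the truth table method of the next chapter; the variable names are mine):

```python
from itertools import product

def imp(p, q):
    """Material conditional: the horseshoe as a truth function."""
    return (not p) or q

# Premises of the home-ownership argument, over (r, g, h, i, c).
premises = [
    lambda r, g, h, i, c: imp(r or g, h),    # (r v g) > h
    lambda r, g, h, i, c: g or i,            # g v i
    lambda r, g, h, i, c: imp(i, r and c),   # i > (r & c)
]
conclusion = lambda r, g, h, i, c: h

# The argument is valid iff no row makes every premise true and "h" false.
invalidating = [
    row for row in product([True, False], repeat=5)
    if all(p(*row) for p in premises) and not conclusion(*row)
]
print(invalidating)  # an empty list: no such row, so the argument is valid
```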


12.8 A Summary Of Rules: Method Of Derivation

Rules of derivation:

Asserting the Antecedent (AA)
(k) S ⊃ T
 .
(l) S
 .
(m) T          k, l, AA

Contradicting the Consequent (CC)
(k) S ⊃ T
 .
(l) ~T
 .
(m) ~S         k, l, CC

Hypothetical Syllogism (HS)
(k) S ⊃ T
 .
(l) T ⊃ U
 .
(m) S ⊃ U      k, l, HS

Disjunctive Syllogism (DS)
(k) S ∨ T
 .
(l) ~S
 .
(m) T          k, l DS

Constructive Dilemma (CD)
(k) S ⊃ T
 .
(l) U ⊃ V
 .
(m) S ∨ U
 .
(n) T ∨ V      k, l, m, CD

Destructive Dilemma (DD)
(k) S ⊃ T
 .
(l) U ⊃ V
 .
(m) ~T ∨ ~V
 .
(n) ~S ∨ ~U    k, l, m, DD

Addition (Add.)
(k) S
 .
(l) S ∨ T      k, Add.

Conjunction (Conj.)
(k) S
 .
(l) T
 .
(m) S & T      k, l Conj.

Simplification (Simp.)
(k) S & T
 .
(l) S          k, Simp.

Rules of equivalence:

Double Negation (DN)
S <--> ~~S

Commutation (Comm.)
S ∨ T <--> T ∨ S
S & T <--> T & S

Association (Ass.)
(S ∨ T) ∨ U <--> S ∨ (T ∨ U)
(S & T) & U <--> S & (T & U)

Exportation (Exp.)
S ⊃ (T ⊃ U) <--> (S & T) ⊃ U

Transposition (Trans.)
S ⊃ T <--> ~T ⊃ ~S

Material Implication (MI)
S ⊃ T <--> ~S ∨ T
S ⊃ T <--> ~(S & ~T)

De Morgan's Rule (DM)
~(S & T) <--> ~S ∨ ~T
~(S ∨ T) <--> ~S & ~T


Chapter 13 The Truth Table Method & The Truth Tree Method

13.1 Advantages & Disadvantages Of The Truth Table Method

1. Like the method of derivation, the truth table method and the truth tree method can be applied to arguments of any length; each will work no matter how many (or few) premises an argument has.

2. The method of derivation tells us that, if we find a derivation, the argument is valid, but it remains silent when we cannot find a derivation; perhaps we have failed to find a derivation because there indeed is no derivation, or because of a lack of skill on our part. The truth table method and the truth tree method, by contrast, tell us that an argument is valid if it is valid, and that it is invalid if it is invalid.

3. In addition, the truth table and truth tree methods are purely mechanical methods, which do not rely on any ingenuity on our part. However, the truth table method is artificial in that it does not demonstrate the steps in reasoning by which the conclusion is reached. The truth tree method solves this problem to some extent.

4. Another disadvantage of the truth table method, though less so of the truth tree method, is that truth tables can be quite cumbersome to produce. A shorter version of the truth table method, called the targeted truth table method, is also given below.

5. Both the truth table and truth tree methods are used on arguments whose propositions are in symbolic notation.

13.2 Truth Values & Truth Tables For The Logical Operators

1. A simple proposition can be either true (T) or false (F). Whether a well-formed complex proposition is true hinges solely upon whether the simple propositions in it are true, and in the following ways:

S    ~S
T     F
F     T

S    T    S ∨ T
T    T      T
T    F      T
F    T      T
F    F      F

S    T    S & T
T    T      T
T    F      F
F    T      F
F    F      F

S    T    S ⊃ T
T    T      T
T    F      F
F    T      T
F    F      T

The tables are read like this: Tilde: If "S" (i.e., any well-formed proposition) is true then "~S" is false, and if "S" is false then "~S" is true.

Wedge: If "S" is true and "T" is true then "S ∨ T" is true, if "S" is true and "T" is false then "S ∨ T" is true, if "S" is false and "T" is true then "S ∨ T" is true, and if "S" is false and "T" is false then "S ∨ T" is false.

Ampersand: If "S" is true and "T" is true then "S & T" is true, if "S" is true and "T" is false then "S & T" is false, if "S" is false and "T" is true then "S & T" is false, and if "S" is false and "T" is false then "S & T" is false.

Horseshoe: If "S" is true and "T" is true then "S ⊃ T" is true, if "S" is true and "T" is false then "S ⊃ T" is false, if "S" is false and "T" is true then "S ⊃ T" is true, and if "S" is false and "T" is false then "S ⊃ T" is true.

2. The truth table for tilde (negation) is obvious, given the assumption that there are two and only two truth values, true and false. If the proposition "Salt is an ingredient in ketchup." is true, then the proposition "Salt is not an ingredient in ketchup." is false. And if the proposition "Salt is an ingredient in ketchup." is false, then the proposition "Salt is not an ingredient in ketchup." is true.

The truth table for wedge (disjunction) indicates that a disjunction is false when both disjuncts are false; otherwise it is true. This truth table allows that a disjunction is true when both disjuncts are true. For example, the sign at a charity book sale "Everyone is welcome to donate goods or buy them." would be true if a person did both. Clear cases of what is called "exclusive-or" ("one or the other but not both", "... or else ...") can be rendered as "(S ∨ T) & ~(S & T)". The default, however, should be to the table above, for what is called "inclusive-or".

The truth table for ampersand (conjunction) shows that a conjunction is true only when both conjuncts are true; otherwise it is false. For example, "Jack is in Baghdad and Gill is in Basra." is false if one or more of the conjuncts is false.

The truth table for horseshoe (conditional) shows that a conditional is false only when the antecedent is true and the consequent is false; otherwise it is true. The first two lines perhaps are obvious, but the latter two require some explanation. As examples of the first two lines, a proposition such as "If I take the full course of antibiotics, the infection will clear up." is true if I take the full course and the infection clears, and is false if I take the full course of antibiotics and the infection persists. The last two lines, however, state that the conditional is true if the antecedent is false, regardless of the truth or falsity of the consequent. If I do not take the full course of
antibiotics, is the conditional false? No, but one might think that it is not true, either. Why should we default to "T"? "If ..., then ..." is naturally understood causally, as in the example of the antibiotic and the infection. But "If ..., then ..." has other uses in addition to expressing causal connection, such as to express logical connections ("If "a ⊃ b" and "a", then "b".") or connections based on definitions ("If it's water, it has hydrogen."). Conditionals are also used to speculate about what might be, as in "If I were to win the Lotto, I would quit my job.". (One can even use "if ..., then ..." to strongly assert the falsity of an antecedent, as in "If United loses on Saturday, then I'll eat my hat.".) Common to all these uses is that the conditional is false if the antecedent is true and the consequent false.

It might thus be better to think of "S ⊃ T" not as "If S, then T" but as "not both S and not T", or, in symbolic notation, "~(S & ~T)". The common meaning to all conditionals is that the conditional is false when the antecedent is true and the consequent is false, which is equivalent to saying that it can't simultaneously be that the antecedent is true and the consequent is false. If "S" is true and "T" is false, "~T" is true and "S & ~T" is true, and "~(S & ~T)" is false. Under all other assignments, the proposition is true. Note also that when "S" is false (as it is in the bottom two lines of the truth table for the horseshoe), the internal conjunction is false, and the negation is true. Thus, "S ⊃ T" is equivalent to "~(S & ~T)".

"S ⊃ T" and "~(S & ~T)" are, further, equivalent to "~S ∨ T", as in "Either I do not take the full course of antibiotics or the infection will clear up.". Again, when "S" is true and "T" is false, the proposition is false, and when "S" is false (as it is in the bottom two lines of the truth table for the horseshoe), "~S" is true, and this is enough to make the disjunction true.

Understood in this truth-functional way, the horseshoe is called the material conditional or material implication, since it is agnostic about the type of connection being asserted in the original English proposition and merely expresses the implication between the two propositions.

3. Since the four types of complex proposition are composed of simple propositions and the four logical operators, we can work out the truth value of a proposition based on the truth values of the simple propositions and the four basic truth tables, given above. Consider the following proposition:


((p ⊃ q) & r) ∨ ~q

Let it be that "p" is true, "q" false, and "r" true. What is the truth value of the proposition as a whole? The trick to determining the truth value of a complex proposition which involves parentheses is to work from the inside out.

The main operator is the wedge, for the number of left parentheses to its left is equal to the number of right parentheses to its left: two of each. The horseshoe has two left parentheses to its left but no right parentheses and, thus, it can be ruled out as the main operator. The ampersand has two left parentheses to its left but only one right parenthesis and, thus, it can be ruled out as the main operator. The tilde, like the wedge, has the same number of left parentheses to its left as right parentheses, but in ties the tilde loses out to the other operators.

The first disjunct is a conjunction, and its first conjunct is "p ⊃ q". Since "p" is true and "q" is false, "p ⊃ q" is false, since propositions having the form "S ⊃ T" are false when "S" is true and "T" is false. Given this and given that "r" is true, "(p ⊃ q) & r" is false, since propositions having the form "S & T" are false when "S" is false and "T" is true. The second disjunct is a negation, and since the proposition being negated (i.e., "q") is false, it is true, for propositions having the form "~S" are true when "S" is false. Given this, "((p ⊃ q) & r) ∨ ~q" is true, for propositions having the form "S ∨ T" are true when "S" is false and "T" is true.

13.3 Setting Up Truth Tables

1. The truth table method involves making a single truth table for all of the propositions in an argument. A truth table presents all of the possible assignments of truth or falsity to the simple propositions along with the truth value of each proposition in the argument under each assignment.

2. In this section, we consider how initially to set up a truth table.
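Before turning to tables: the inside-out evaluation of 13.2.3 can be mirrored step by step in Python, with the horseshoe rendered as a small helper function (a sketch; the variable names are mine, not the text's):

```python
def imp(p, q):
    """Material conditional: the horseshoe as a truth function."""
    return (not p) or q

# ((p > q) & r) v ~q, with "p" true, "q" false and "r" true:
p, q, r = True, False, True
inner = imp(p, q)      # p > q        is F
left = inner and r     # (p > q) & r  is F
right = not q          # ~q           is T
whole = left or right  # F v T        is T
print(whole)  # True
```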
Here is a procedure for making rows of Ts and Fs under the letters in a truth table so that (a) each row is a possible combination of truth values for the letters and (b) no possible combination is left out.

First, determine how many rows the table will have. If the propositions of an argument involve only 1 distinct letter, then the truth table has 2 rows. (Multiple appearances of the same letter are not counted more than once.) If they involve 2 distinct letters, then the argument's truth table has 4 rows. If 3 letters, 8 rows. If 4
letters, 16 rows. And so on. In general, if an argument involves n letters, then its truth table has 2^n rows. Suppose that the propositions of an argument involve 3 letters:

p    q    r

Given that it has 3 letters, there are 8 rows in its truth table: 2^3 = 8.

Second, fill in each column, as follows. We begin with the first simple proposition and under it write down 4 (8 / 2) Ts. The table so far looks like this:

p    q    r
T
T
T
T

We then write down 4 Fs. The table now looks like this:

p    q    r
T
T
T
T
F
F
F
F

Below the second proposition letter we alternate between pairs of Ts and Fs. We begin by writing two Ts, as follows:

p    q    r
T    T
T    T
T
T
F
F
F
F

And then two Fs:

p    q    r
T    T
T    T
T    F
T    F
F
F
F
F

Two more Ts and two more Fs:

p    q    r
T    T
T    T
T    F
T    F
F    T
F    T
F    F
F    F

Under the third letter, we alternate between single Ts and Fs. The complete assignment appears as follows:

p    q    r
T    T    T
T    T    F
T    F    T
T    F    F
F    T    T
F    T    F
F    F    T
F    F    F

In general terms, the procedure runs as follows. First, determine the number of rows in the truth table. Let that number be n. Second, give a column of n/2 Ts under the first letter, and follow this with a column of Fs equal in length. At this point, the first column is done. Third, give a column of Ts under the second letter half as long as the column of Ts under the first, give a column of Fs under the Ts equal in length, give a column of Ts under the Fs equal in length, and give a column of Fs under the Ts equal in length. At this point, the second column is done. And so on.
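The row-generating procedure just described can be reproduced in a few lines of Python: itertools.product, given Ts before Fs, yields the rows in exactly the order of the table above (first letter slowest, last letter alternating fastest). A sketch:

```python
from itertools import product

def truth_rows(letters):
    """All 2^n rows of Ts and Fs for n letters, in textbook order."""
    return list(product(["T", "F"], repeat=len(letters)))

rows = truth_rows(["p", "q", "r"])
print(len(rows))  # 8 rows for 3 letters: 2^3 = 8
for row in rows:
    print(" ".join(row))
```

The first printed row is "T T T" and the last is "F F F", matching the completed assignment above.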


3. Once we have made sure that we will be considering every possible assignment of truth and falsity to the simple propositions involved, we can work out what the value of each whole proposition is under each assignment.

13.4 The Truth Table Method

1. The truth table method works like this: First, convert the argument into symbolic notation and put it in standard form. Second, make a truth table for the argument. Third, check the table for rows in which all of the premises are true and the conclusion is false. If there is such a row (or rows), the argument is invalid; if not, it is valid.

2. Consider the following argument:

Jones owns a Ford. So, either Jones owns a Ford, or Brown is in Barcelona.

With "o" standing for the proposition "Jones owns a Ford." and "b" standing for "Brown is in Barcelona.", in standard form, using symbolic notation, the argument is written as follows:

(1) o
---------
(2) o ∨ b

In the truth table method, we make a truth table, as described above, for all of the propositions in the argument at once. We begin by picking out the simple propositions. This argument involves two simple propositions, "o" and "b". On one line, we first write down the simple propositions and then the premises and the conclusion. In this case, there is one premise and the conclusion. Thirdly, we generate all of the possible truth assignments to the simple propositions. Since there are two simple propositions, the table has four rows. The final step is to work out the truth values of the propositions of the argument, according to the basic rules and the procedure in 13.2.3. The final truth table looks like this:

o    b    o    o ∨ b
T    T    T      T
T    F    T      T
F    T    F      T
F    F    F      F


(If you find it helpful, you can fill in the values for each proposition everywhere they appear. In this argument, you could fill them in under "o" and "b" in "o ∨ b". The final truth table would look like this rather than the one just above:

o    b    o    o ∨ b
T    T    T    T T T
T    F    T    T T F
F    T    F    F T T
F    F    F    F F F
)

We put the column of values under each of the argument's propositions in bold (or, when writing, draw a box around it). For complex propositions, this column should be under the main operator. In this argument, it appears under the wedge in the conclusion.

Finally, we inspect the table in search of a row in which the premise is true and the conclusion is false. If we find such a row, the argument is invalid, since we know from our earlier discussion of the concept of 'validity' that no argument with true premises and a false conclusion can be valid. The premise is true in the first and second rows, and so is the conclusion. In the third and fourth rows, the premise is false. So there is no row in which the premise is true and the conclusion is false and, thus, this argument is valid.

2. Here is a truth table for an instance of AA, the argument "p ⊃ q. p. So, q.". The first step is to identify the simple propositions. In this case, there are two: "p" and "q" (even though "q" occurs twice). We write these down in a row. The next (second) step is to write down the propositions of the argument, on the same line as the simple propositions. After the second step, the truth table looks like this:

p    q    (p ⊃ q)    p    q

The third step is to generate the possible combinations of assignments of T and F to the simple propositions. Since there are two simple propositions, there are four possible assignments of truth values. After the third step, the truth table looks like this:

p    q    (p ⊃ q)    p    q
T    T
T    F
F    T
F    F

The fourth step is to work out the truth values of the propositions for each assignment, as described above (in 13.2.3). After this fourth step, the table looks like this:

p    q    (p ⊃ q)    p    q
T    T       T       T    T
T    F       F       T    F
F    T       T       F    T
F    F       T       F    F

There is no row in which the premises are true and the conclusion false. Thus, this argument is valid.

3. In contrast to the two examples so far, this argument is invalid:

(1) b
---------
(2) b & c

Its truth table looks like this:

b    c    b    b & c
T    T    T      T
T    F    T      F
F    T    F      F
F    F    F      F

In the second row, the premise is true while the conclusion is false. We point to any invalidating row with an arrow.

4. Now consider the following argument:

If Henry was not required to work and was able to save up enough money, he will be at the concert tonight. He's not at the concert, so he must not have been able to save up the money.

With "w" standing for "Henry was required to work.", "s" standing for "Henry was able to save up enough money.", and "c" standing for "Henry is at the concert.", with the letters in place of the propositions, the argument looks like this: If not-w and s, then c. Not c. So, not s. In standard form, using symbolic notation, it looks like this:

(1) (~w & s) ⊃ c
(2) ~c
----------------
(3) ~s


This argument involves three distinct simple propositions, and so its truth table has eight rows. The completed table looks thus:

w    s    c    (~w & s) ⊃ c    ~c    ~s
T    T    T     F  F     T      F     F
T    T    F     F  F     T      T     F
T    F    T     F  F     T      F     T
T    F    F     F  F     T      T     T
F    T    T     T  T     T      F     F
F    T    F     T  T     F      T     F
F    F    T     T  F     T      F     T
F    F    F     T  F     T      T     T

In the second row, the premises are true while the conclusion is false. The argument, thus, is invalid.

13.5 Logical Equivalence & Inequivalence, & Logical Contradiction

1. The proposition "If Jones is in Columbus, then he is in Ohio." is logically equivalent to "If Jones is not in Ohio, then he is not in Columbus.". Truth tables bear this out: the two tables have the same column of final values. With "c" standing for "Jones is in Columbus." and with "o" standing for "Jones is in Ohio.", the two become:

c ⊃ o
~o ⊃ ~c

A truth table for both propositions simultaneously looks like this:

c    o    c ⊃ o    ~o ⊃ ~c
T    T      T      F  T  F
T    F      F      T  F  F
F    T      T      F  T  T
F    F      T      T  T  T

Both propositions are false in the second row, and both are true in the others. There is no row, thus, in which they have different truth values. So, they are logically equivalent.

2. In contrast, "b ⊃ c" and "c ⊃ b" are logically inequivalent. Consider the following table:


b    c    b ⊃ c    c ⊃ b
T    T      T        T
T    F      F        T
F    T      T        F
F    F      T        T

Not every row has the same ultimate value; rows 2 and 3 have opposite truth values.

3. If two propositions have opposite values on every line, they are said to be logical contradictories.

13.6 Targeted Truth Tables

1. Truth tables can be cumbersome, especially when there are multiple simple propositions. One way to shorten the process is by using the targeted truth table method. This method relies on the fact that in the truth table method we are interested in lines of the truth table in which the premises are true and the conclusion is false. The targeted truth table method attempts to find an assignment (or assignments) on which the premises are true and the conclusion is false without making a full truth table. If such an assignment (or assignments) can be found, the argument is invalid; if not, the argument is valid.

2. Consider the following argument, in standard form using symbolic notation:

(1) b ⊃ c
(2) a & (c ∨ d)
---------------
(3) d ∨ b

Since there are four distinct simple propositions, the full truth table would have 16 lines. We can shorten the process by using the targeted truth table method. We begin by writing the simple propositions, the premises and conclusion on a line, as follows:

a    b    c    d    b ⊃ c    a & (c ∨ d)    d ∨ b

Since we are targeting the row(s) of the table on which the conclusion is false and the premises true, we think about which assignment(s) would make the conclusion false, or any of the premises true. The conclusion makes a good place to begin, since a disjunction is false only when both disjuncts are false. So we assign F to both "d" and "b" in the conclusion, and everywhere else either "d" or "b" appears in the premises. We fill in these values for "b" and "d" wherever they appear and we write an "F" under the wedge in the conclusion. We cannot write anything yet under the main operators in the premises, because we do not have sufficient information to do so.

a    b    c    d    b ⊃ c    a & (c ∨ d)    d ∨ b
     F         F    F             F         F F F

Using this assignment for "d" and "b", can an assignment be found for the remaining letters ("a" and "c") such that the premises are true? The first premise is a conditional with a false antecedent, so it is true regardless of the value we give to "c". We must thus leave the first premise, and the value of "c", aside for the time being. The second premise is a conjunction. In order for it to be true, both conjuncts must be true. So we assign T to "a"; and since "d" is already F, the disjunction "c ∨ d" can be true only if "c" is T, so we assign T to "c". This then allows us to obtain a value for the first premise, also.

a    b    c    d    b ⊃ c    a & (c ∨ d)    d ∨ b
T    F    T    F    F T T    T T  T T F     F F F

We have thus shown that the argument is invalid, because when "a" is T, "b" is F, "c" is T and "d" is F, the premises are true and the conclusion false.

3. Consider the following, different, argument:

(1) b & c
(2) a ⊃ (c ∨ d)
---------------
(3) d ∨ b

Employing the same procedure as before, we write out the propositions involved, and assign F to "d" and "b" in order to make the conclusion F. This time, however, we can see not only that the conclusion is F (we write an "F" under the wedge in the conclusion) but also that the first premise is F (even without knowing the value of "c"). So, we write an "F" under the ampersand.

a    b    c    d    b & c    a ⊃ (c ∨ d)    d ∨ b
     F         F    F F            F        F F F

Since the first premise cannot be true when we make the conclusion false by assigning F to both "b" and "d" (because for a conjunction to be true, both conjuncts must be true), this argument is valid; it is impossible for the premises to be true and the conclusion false. Whatever values "a" and "c" are given, there is no assignment on which the premises are all true and the conclusion false.
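The targeted truth table method is, in effect, a search for a single invalidating assignment. For the first argument of this section (the one whose second premise is "a & (c ∨ d)"), that search can be sketched by brute force in Python (the helper and variable names are mine):

```python
from itertools import product

def imp(p, q):
    """Material conditional: the horseshoe as a truth function."""
    return (not p) or q

# b > c ; a & (c v d) ; therefore d v b, over (a, b, c, d).
premises = [
    lambda a, b, c, d: imp(b, c),
    lambda a, b, c, d: a and (c or d),
]
conclusion = lambda a, b, c, d: d or b

# Stop at the first assignment with true premises and a false conclusion.
counterexample = next(
    (row for row in product([True, False], repeat=4)
     if all(p(*row) for p in premises) and not conclusion(*row)),
    None,
)
print(counterexample)  # (True, False, True, False): a is T, b is F, c is T, d is F
```

A result of None would have meant the argument is valid; here the search recovers the same invalidating assignment found by hand above.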


4. In general, we start by looking for assignments that we must make in order to make either a premise true or the conclusion false; these will give us fixed pieces of information. We can immediately assign values to any propositions that are not complex. If a premise is a single proposition letter, assign T to it; if a conclusion is a single proposition letter, assign F to it. After this, it is a good idea to begin with the conclusion, since there is only ever one conclusion and it must be made false. Moreover, propositions which are negations, disjunctions or conditionals are false under only one assignment. A negation is false when what is negated is true; a disjunction is false when both disjuncts are false; and a conditional is false only when the antecedent is true and the consequent is false.

5. However, if the conclusion is a complex statement using parentheses, the best strategy might not be to begin with the conclusion. For example, if the conclusion is "~(a ∨ b)", we know that this is false when "a ∨ b" is true. But there are multiple assignments under which "a ∨ b" is true. Conjunctions, moreover, are false when either, or both, of the conjuncts are false. In these cases, we should look to the premises. For example, consider the following argument:

(1) b & c
(2) b ∨ d
(3) d → a
---------
(4) ~(a ∨ b)

When we write out the propositions on a line, we see that there is no single assignment under which the conclusion is false. When we look at the premises, we are confronted with a conjunction, a disjunction and a conditional. Conjunctions, which are false under multiple assignments, are true only when both conjuncts are true, so the premise to begin with is the first one: we assign T to both "b" and "c" and fill in what else we can:

    a  b  c  d      b & c      b ∨ d      d → a      ~(a ∨ b)
       T  T         T T T      T T                   F    T T

The second premise is already true, so we leave the value of "d" in the second premise aside for the moment. We are not forced into making an assignment to "d" based on the third premise either, since if "a" were true, it wouldn't matter what the


value of "d" was. What about "a"? Again, we are not forced into a value for "a". Based on the third premise, it could be either, since "d" could be either; based on the conclusion, it could be either, since "b" is T. So there are a number of assignments which would show this argument to be invalid. If we assign T to "a", "d" can be either T or F, as follows:

    a  b  c  d      b & c      b ∨ d      d → a      ~(a ∨ b)
    T  T  T         T T T      T T          T T      F  T T T

Alternatively, if we assign F to "d", "a" can be either T or F, as follows:

    a  b  c  d      b & c      b ∨ d      d → a      ~(a ∨ b)
       T  T  F      T T T      T T F      F T        F    T T

Any one of the following assignments of values is sufficient to show that the argument is invalid:

    a  b  c  d
    T  T  T  T
    T  T  T  F
    F  T  T  F
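These assignments can be confirmed mechanically. The sketch below (an illustration, not part of the text's method) walks through all sixteen assignments for this argument and collects exactly those that make the premises true and the conclusion false.

```python
# Enumerating every assignment that invalidates the example argument:
# premises b & c, b v d, d -> a; conclusion ~(a v b).
from itertools import product

rows = []
for a, b, c, d in product([True, False], repeat=4):
    premises_true = (b and c) and (b or d) and ((not d) or a)
    conclusion_false = a or b  # ~(a v b) is false exactly when a v b is true
    if premises_true and conclusion_false:
        rows.append((a, b, c, d))

for row in rows:
    print(row)
# Exactly three assignments invalidate the argument,
# matching the assignments listed above.
```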

6. Sometimes there is no proposition which forces us into assigning truth values to any of the proposition letters involved. In such cases we must try out different assignments. Consider the following such argument:

(1) b ∨ d
(2) d → a
---------
(3) b & a

In this argument we have a conclusion which is a conjunction together with premises which are a disjunction and a conditional. We must simply make some assignments. Let's try assigning F to both "a" and "b", which makes the conclusion false.

    a  b  d      b ∨ d      d → a      b & a
    F  F         F              F      F F F

Using this initial assignment, we see that "d" must be F if the second premise is to be true, but when we do that, we see that the first premise is false. We get:

    a  b  d      b ∨ d      d → a      b & a
    F  F  F      F F F      F T F      F F F

Note that the failure of this assignment does not show that the argument is valid. Validity means that no assignment results in true premises and a false conclusion; all we have shown so far is that we do not get true premises and a false conclusion when we assign F to "a" and "b". Since this was only one of the assignments that would make the conclusion false, we must try the others.

We know that in order to make the conclusion false, either "b" or "a" must be false, so let us try again (on a new line) by assigning F to "b" but T to "a":

    a  b  d      b ∨ d      d → a      b & a
    F  F  F      F F F      F T F      F F F
    T  F         F              T      F F T

In order to make the first premise true, "d" must be T, and now we have found an assignment which shows that the argument is invalid, one which makes the premises true and the conclusion false:

    a  b  d      b ∨ d      d → a      b & a
    F  F  F      F F F      F T F      F F F
    T  F  T      F T T      T T T      F F T
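The assignment found by trial and error can be checked directly. A minimal sketch (my own illustration, not from the text):

```python
# Confirming the invalidating assignment for the argument
# with premises b v d and d -> a and conclusion b & a.
a, b, d = True, False, True

premise1 = b or d           # b v d
premise2 = (not d) or a     # d -> a
conclusion = b and a        # b & a

print(premise1, premise2, conclusion)  # True True False: invalid
```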

Note: there might be additional assignments which show that the argument is invalid, but one is sufficient to show that it is invalid. On the other hand, to repeat, the fact that the first assignment we tried did not show the argument to be invalid was not sufficient to show that it was valid. To show that an argument is valid we would need to try all of the possible assignments that will either make the conclusion false or make one of the premises true.

13.7 The Truth Tree Method

1. Truth trees are an alternative to truth tables and have the advantage of dealing more effectively with arguments involving a number of simple propositions. Their graphical nature also makes them more intuitive for some people. Truth trees are like targeted truth tables in that we seek to show that the argument is invalid, and if we find that this attempt leads to contradiction, we conclude that the argument is valid.

2. We begin by writing the premises and the negation of the conclusion in a single vertical column. (That is, we assume that the given conclusion is false.) We then "decompose" each complex proposition listed, except singly-negated propositions. To decompose a proposition, we strike it off and extend each branch of the tree downwards in accordance with the following rules, which are either non-branching or branching. The non-branching rules are as follows, with "S" and "T" standing for any proposition, whether simple or complex:

    S & T      ~(S ∨ T)      ~(S → T)      ~~S
      S           ~S             S           S
      T           ~T            ~T

The branching rules are as follows:

    ~(S & T)       S ∨ T        S → T
     /    \        /   \        /   \
    ~S    ~T      S     T     ~S     T
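Each decomposition rule preserves truth conditions: a non-branching rule's parent proposition is true exactly when all of its results are true, and a branching rule's parent is true exactly when at least one branch is true. This can be verified exhaustively; the sketch below (an illustration of my own, not part of the text) checks every rule against all four assignments to "S" and "T".

```python
# Checking that each truth tree decomposition rule preserves truth conditions.
from itertools import product

def implies(p, q):
    # Material conditional.
    return (not p) or q

for s, t in product([True, False], repeat=2):
    # Non-branching rules: parent true iff all results true.
    assert (not (s or t)) == ((not s) and (not t))      # ~(S v T)  =>  ~S, ~T
    assert (not implies(s, t)) == (s and (not t))       # ~(S -> T)  =>  S, ~T
    assert (not (not s)) == s                           # ~~S  =>  S
    # Branching rules: parent true iff at least one branch true.
    assert (not (s and t)) == ((not s) or (not t))      # ~(S & T)  =>  ~S | ~T
    assert implies(s, t) == ((not s) or t)              # S -> T  =>  ~S | T

print("all decomposition rules preserve truth conditions")
```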

Some of these rules will be familiar from the truth tables above. "S & T" decomposes into a non-branching "S" and "T" because "S & T" is true only if both conjuncts are true, while "S ∨ T" branches because it is true if either "S" or "T" is true. "S → T" branches because it is true when the antecedent is false (or its negation is true) or its consequent is true. The rules for "~~S" can be understood by comparison with double negation (see 12.6.1) and "~(S & T)" and "~(S ∨ T)" can be understood by comparison with De Morgan's rule. (See 12.6.6.)

If a branch (including the main stem) contains a contradiction between any two lines, we can close that branch off by placing an "X" below it. We continue this process until every proposition has been decomposed. If compound propositions have a branch beneath them, the decomposition must be applied to all branches below. Whenever possible, decompose non-branching propositions before branching ones in order to avoid duplication. At the end of the process, either every branch will be closed, or not every branch will be closed. If every branch is closed, we have shown that the assumption of invalidity was contradicted, and so the argument is valid. Each branch that remains open, however, illustrates a counter-example to the argument, equivalent to the row of a truth table (the assignment of truth values) which shows the argument to be invalid.

3. Here is a demonstration of the validity of AA. Letting "S" and "T" be "a" and "b", the propositions involved are "a → b", "a", and "~b", and so we write down:

    a → b
    a
    ~b

There is only one proposition requiring decomposition ("a → b"), which branches into "~a" and "b". We add these to the trunk on the next line below and strike out the proposition being decomposed:

    a → b
    a
    ~b
    /   \
  ~a     b


After the decomposition of each proposition, we check each branch for contradictions. Moving up from the left-hand branch, we see an "a" on the second line which contradicts the bottom "~a", and in the right-hand branch the "~b" on the third line contradicts the "b" on the fourth. We place an "X" beneath each branch to mark each of the contradictions, as follows:

    a → b
    a
    ~b
    /   \
  ~a     b
   X     X

Since all of the statements which we have not crossed out are simple propositions or their negations, we have finished the decomposition process. We now look to see whether all branches have been closed. Since in this case there are two branches and both of them are closed, the argument is declared valid.

4. Consider the following argument, in standard form:

1. a ∨ ~b
2. ~a & c
---------
3. ~b

We set up the truth tree as follows, writing "~~b" on the third line as the negation of the conclusion.

    a ∨ ~b
    ~a & c
    ~~b

Although they are not on the first line, we decompose "~a & c" and "~~b" first, because they are non-branching. After decomposing the second line, we get:

    a ∨ ~b
    ~a & c
    ~~b
    ~a
    c

We check for contradictions, but seeing none, proceed to decompose the third line ("~~b"). We cross out "~~b" on the third line and add "b" to the bottom of the tree, to get:


    a ∨ ~b
    ~a & c
    ~~b
    ~a
    c
    b

Again, we check for contradictions, but seeing none, proceed to decompose the first line. We get:

    a ∨ ~b
    ~a & c
    ~~b
    ~a
    c
    b
    /   \
   a     ~b

Again, we check for contradictions, on each branch. The left-hand side contains a contradiction, between the "a" on the final line and the "~a" on the fourth line. So, we place an "X" beneath the left-hand branch. The right-hand side contains a contradiction, between the "~b" on the final line and the "b" on the sixth line. So, we place an "X" beneath the right-hand branch.

    a ∨ ~b
    ~a & c
    ~~b
    ~a
    c
    b
    /   \
   a     ~b
   X     X

Since we have finished decomposing, and there are only these two branches, both of which have been closed off, the argument is shown to be valid.

5. In the following example, which is already in truth tree format, the first two lines are the premises, while the third is the negation of the conclusion. The third line ("~~b") has been decomposed without branching, on the fourth line. Both the first and second lines will cause the tree to branch. Taking the first line first, we get:


    a → f
    (a & c) ∨ ~b
    ~~b
    b
    /   \
  ~a     f

We look for contradictions, and find none. The second line is now decomposed. The results of its decomposition must be placed at the bottom of each branch below it. We look for contradictions and find that the second and fourth branches show a contradiction, between "~b" and "b" on the fourth line.

    a → f
    (a & c) ∨ ~b
    ~~b
    b
       /        \
     ~a          f
    /    \      /    \
  a & c   ~b  a & c   ~b
           X           X

Finally, we decompose "a & c":

    a → f
    (a & c) ∨ ~b
    ~~b
    b
       /        \
     ~a          f
    /    \      /    \
  a & c   ~b  a & c   ~b
    a      X    a      X
    c           c

The first branch closes, since the "a" on the seventh line contradicts the "~a" of the fifth line:

    a → f
    (a & c) ∨ ~b
    ~~b
    b
       /        \
     ~a          f
    /    \      /    \
  a & c   ~b  a & c   ~b
    a      X    a      X
    c           c
    X

However, the third branch does not close, and so we have shown that the argument is invalid. We have shown that the premises can be true and the conclusion false, when "a" is true, "c" is true, "b" is true and "f" is true.
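An open branch reads off a concrete counter-example, which can be checked directly. A minimal sketch (my own illustration, not from the text):

```python
# The open branch above corresponds to a counter-example to the argument
# with premises a -> f and (a & c) v ~b and conclusion ~b.
a, b, c, f = True, True, True, True

premise1 = (not a) or f          # a -> f
premise2 = (a and c) or (not b)  # (a & c) v ~b
conclusion = not b               # ~b

print(premise1, premise2, conclusion)  # True True False: invalid
```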

13.8 A Summary Of Truth Conditions: Truth Tables & Truth Trees

Truth Tables

    S | ~S      S  T | S ∨ T      S  T | S & T      S  T | S → T
    T |  F      T  T |   T        T  T |   T        T  T |   T
    F |  T      T  F |   T        T  F |   F        T  F |   F
                F  T |   T        F  T |   F        F  T |   T
                F  F |   F        F  F |   F        F  F |   T

Truth Trees

Non-branching:

    S & T      ~(S ∨ T)      ~(S → T)      ~~S
      S           ~S             S           S
      T           ~T            ~T

Branching:

    ~(S & T)       S ∨ T        S → T
     /    \        /   \        /   \
    ~S    ~T      S     T     ~S     T


APPENDICES

Appendix to Chapter 6 Problems in Inductive Logic

6A.1 The Lottery Paradox

1. Consider the following propositions:

(1) There are 1,000 tickets in the Ohio Lottery: Ticket 1, Ticket 2, . . ., Ticket 1000.
(2) Exactly one ticket in the Ohio Lottery will win.
(3) The Ohio Lottery is fair.
(4) If there is good reason for thinking that 99.9% of Fs are Gs, that a is an F, and that a is normal, then there is good reason for thinking that a is a G.
(5) If there is good reason for thinking that p, and if p entails q, then there is good reason for thinking that q.

It is impossible, it is argued, for these five propositions to be true simultaneously.

2. (1) and (2) together entail that 99.9% of the tickets in the Ohio Lottery will lose:

(1) There are 1,000 tickets in the Ohio Lottery: Ticket 1, Ticket 2, . . ., Ticket 1000.
(2) Exactly one ticket in the Ohio Lottery will win.
------------------------------------------------------------------------------------------
(6) 99.9% of the tickets in the Ohio Lottery will lose.

Hence by (5) there is good reason for thinking that (6) is true. (There is good reason, suppose, for thinking that (1) and (2) are true.)

3. Ticket 1 is a ticket in the Ohio Lottery (see (1)), and it is normal (see (3)). Therefore by (4) there is good reason for thinking that Ticket 1 will lose. Let (7) stand for the proposition "Ticket 1 will lose.". (There is good reason, suppose, for thinking that (3) is true.)

4. That there is good reason for thinking that Ticket 2 will lose follows by the same reasoning. Let (8) stand for the proposition "Ticket 2 will lose.".

5. The same goes for the other tickets, meaning that there is good reason for thinking that (9) is true, good reason for thinking that (10) is true, . . ., and good reason for thinking that (1006) is true.

6. (7), (8), (9), (10), . . ., and (1006) together entail that (7) & (8) & (9) & (10) & . . . & (1006) is true:


(7) Ticket 1 will lose.
(8) Ticket 2 will lose.
(9) Ticket 3 will lose.
(10) Ticket 4 will lose.
. . .
(1006) Ticket 1000 will lose.
--------------------------------
(1007) Ticket 1 will lose, and Ticket 2 will lose, and Ticket 3 will lose, and Ticket 4 will lose, . . ., and Ticket 1000 will lose.

Thus by (5) there is good reason for thinking that (1007) is true.

7. (1007) and (1) together entail that no ticket in the Ohio Lottery will win:

(1007) Ticket 1 will lose, and Ticket 2 will lose, and Ticket 3 will lose, and Ticket 4 will lose, . . ., and Ticket 1000 will lose.
(1) There are 1,000 tickets in the Ohio Lottery: Ticket 1, Ticket 2, . . ., Ticket 1000.
------------------------------------------------------------------------------------------
(1008) No ticket in the Ohio Lottery will win.

So by (5) there is good reason for thinking that (1008) is true.

8. (1008) and (2) together entail that (1008) & (2) is true:

(1008) No ticket in the Ohio Lottery will win.
(2) Exactly one ticket in the Ohio Lottery will win.
-------------------------------------------------------------
(1009) No ticket in the Ohio Lottery will win, and exactly one ticket in the Ohio Lottery will win.

Therefore by (5) there is good reason for thinking that (1009) is true.

9. But, of course, there is not good reason for thinking that (1009) is true. After all, (1009) is necessarily false.

10. It follows, then, that either (4) is false, or (5) is false. For (4) and (5) together with there being good reason for thinking that (1), (2), and (3) are true entail a falsehood (i.e., that there is good reason for thinking that (1009) is true), and by hypothesis there is good reason for thinking that (1), (2), and (3) are true.

11. (5) seems unimpeachable, and so (4) must be the culprit.

12. (Note: this kind of argument works for all generalizing arguments. Let p1, p2, p3, . . ., p1000 be the conclusions in different instances of a generalizing argument form φ1, and suppose that there is good reason for thinking that the premises in each such argument are true. Then it follows, assuming that φ1 is sound, that there is good reason for thinking that p1 is true, good reason for thinking that p2 is true, . . ., and good reason

for thinking that p1000 is true. p1, p2, . . ., and p1000 together entail that p1 & p2 & . . . & p1000 is true, and so there is good reason (again, assuming that the argument form φ1 is sound) for thinking that p1 & p2 & . . . & p1000 is true. But since φ1 is generalizing, there is not good reason for thinking that p1 & p2 & . . . & p1000 is true: the odds are that at least one such proposition is false. Hence the assumption that φ1 is sound must be false.)

6A.2 The Problem Of Induction

1. Consider, for starters, inductive generalization:

(1) In case1 . . . casen, F is instantiated.
(2) In X% of case1 . . . casen, G is also instantiated.
----------------------------------------------------------
(3) In roughly X% of cases of F, G is also instantiated.

Obviously, the premises are not a conclusive reason for thinking that the conclusion is true, in that for all the premises say, it is possible (logically speaking) that the percentage is nowhere near X%. But equally obviously, the premises are a probabilistic reason for thinking that the conclusion is true: that the premises are true makes it highly likely, though not certain, that the conclusion is true. So, although it is not perfectly reliable, inductive generalization is nonetheless highly reliable. This, at any rate, is the standard view.

2. But what reason is there for thinking that inductive generalization is highly reliable? That is, what reason is there for thinking that the premises in an instance of inductive generalization confer high probability on the conclusion? The only thing that comes to mind is that inductive generalization has had a nice track-record so far, in that the ratio of instances with a true conclusion to total instances is high. This, it might be argued, is a good reason for thinking that inductive generalization is reliable, that we can count on it to take us from sample percentage to population percentage.

3.
But notice, this is itself an instance of inductive generalization, and so is circular, or question-begging:

(1) There have been many previous cases of IG.
(2) In almost all these cases, the conclusion was true.
-----------------------------------------------------------------
(3) Almost all instances of inductive generalization have a true conclusion.


Since this is itself an instance of inductive generalization, to argue in this way assumes that inductive generalization is reliable. Thus, for someone wondering about the reliability of inductive generalization, it would be of no help: it simply begs the very question at issue.

4. This is a problem not only for inductive generalization, but for the other inductive argument forms too. Let φ be an inductive argument form distinct from inductive generalization. The only thing that comes to mind as a reason for thinking that φ is reliable is that it has had a nice track record so far:

(1) Almost all observed instances of φ have a true conclusion.
----------------------------------------------------------------------------
(2) Almost all instances of φ have a true conclusion.

This is an instance of inductive generalization, and so to give it is to assume that inductive generalization is reliable. It would thus be of no help to someone wondering about the reliability of φ, since to be helped he would already need, but would not have (because he could not have), good reason for trusting inductive generalization.

5. The startling conclusion, then, is that it is impossible to justify induction. To justify an inductive argument form other than inductive generalization requires first justifying inductive generalization, but this cannot be done.

6A.3 The New Riddle Of Induction

1. Consider the following scenario:

It is 2004-02-24, and up to now Smith has looked at lots and lots of emeralds (from lots and lots of places). Each such emerald is green, and so Smith infers by inductive generalization that all emeralds are green.

Intuitively, Smith's inference is cogent. For the sample is big enough, the sample is unbiased, and all the emeralds in the sample are green.

2. Consider the predicate "grue": X is grue if and only if: either (1) X is observed on or before 2004-02-24, and X is green, or (2) X is not observed on or before 2004-02-24, and X is blue. Four examples.
First, the lawn at the White House on 2001-06-24 was grue, since it was observed before 2004-02-24 and was green. Second, the sky in Palm Springs on 1998-07-23 was not grue, because it was observed before 2004-02-24 but was not green: it was blue. Third, the lawn at the White House on 2006-06-24 will not be grue, since it is being observed after 2004-02-24 but will not be blue: it will be green. Fourth, the sky in Palm Springs on 2014-07-23 will be grue, for it will be observed after 2004-02-24 and will be blue.

3. Now contrast the scenario above with a slightly different one:

It is 2004-02-24, and up to now Jones has looked at lots and lots of emeralds (from lots and lots of places). Each such emerald is grue, and so Jones infers by inductive generalization that all emeralds are grue.

Intuitively, Jones' inference is incogent. For the conclusion says, in part, that the emeralds first observed on 2004-02-25 will be blue, but intuitively there is not good reason for thinking that the emeralds first observed on 2004-02-25 will be blue. (Note: the emeralds in the sample are grue since they are observed on or before 2004-02-24 and are green.)

4. The new riddle of induction, then, is that although there seems to be a difference in cogency between Smith's inference and Jones' inference, there do not seem to be any relevant respects in which they differ. First, Smith's inference is an instance of inductive generalization, and so is Jones'. Second, the premise in Smith's inference is true, and so is the premise in Jones'. Third, the sample in Smith's inference is identical to the sample in Jones'.

5. Consider the following scenario:

Over a span of ten years or so, Jack visits lots of different track teams in lots of different parts of North America, and times each such team in the mile. As it turns out, 65% of them (i.e., the runners, not the teams) run it in less than five minutes.
The things in the sample are members of the class of people, are members of the class of people living in North America, are members of the class of people running track, are members of the class of people running track and living in North America, and so on. Relative to the class of people living in North America, the sample is highly biased. But relative to the class of people running track and living in North America, the very same sample is unbiased. Thus whereas the sample percentage gives us good reason for thinking that roughly 65% of all people running track and living in North America can run the mile in less than five minutes, it does not give us good reason for thinking that


roughly 65% of all people living in North America can run the mile in less than five minutes. The lesson, then, is that whether a sample is unbiased hinges on more than just what is in it: the attribute class or property, Q, is key. For some classes or properties, the sample is just fine; but for others, it is highly biased.

6. The sample that Smith and Jones appeal to is biased relative to grueness. The property of not being observed on or before 2004-02-24 is relevant to being grue, since one of two ways for something to be grue is for it to be both (a) not observed on or before 2004-02-24 and (b) blue. The property of not being observed on or before 2004-02-24, however, is severely underrepresented in the sample, in that none of the emeralds in the sample have it. So although Smith and Jones appeal to the same sample of emeralds, relative to Jones' inference the sample is biased. The result, then, is intuitive. The sample is unbiased relative to green, and so Smith's inference is cogent. But it is biased relative to grueness, and thus Jones' inference is incogent.
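The closing note of 6A.1 above turns on a general fact about probability: a conjunction of many individually probable conclusions can itself be improbable. A quick calculation illustrates this, treating "Ticket i will lose" as 1,000 independent claims, each with probability 0.999 (independence is an idealization used only to make the point vivid):

```python
# Each conjunct is highly probable; the conjunction is not.
p_single = 0.999          # probability that any one ticket loses
p_all = p_single ** 1000  # probability that all 1,000 conjuncts are true

print(p_single)         # 0.999: good reason to believe each conjunct
print(round(p_all, 3))  # 0.368: the conjunction is more likely false than true
```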


Appendix To Chapter 7 Mill's Methods

7A.1 Introduction

1. Mill's methods are an older version of the process described in chapters 6 and 7: when we have an explainee and a number of possible particular causes, we must gather and examine the data to determine which is the actual cause. Mill's methods are named for John Stuart Mill, who proposed them in 1843 in his book A System Of Logic. Mill described his methods as ways of identifying a cause from amongst the various possibilities. There is controversy as to whether Mill's methods allow us to discover entirely new causes and connecting propositions or whether they can only be used when we already have a generic-level (rather than species-level) general connecting principle in mind. For example, Mill's methods only allow us to identify the species of food which caused the illness, given that we already have in mind a generic explanation that it was the food they ate. See (e.g.) Cohen & Nagel (1934). The dispute is perhaps related to the issue of non-dogs in 7.3.2 and the total evidence rule. That is, all reasoning is done against an already-existing conceptual and theoretical background.

7A.2 The Method Of Agreement

1. Mill thought that there were two basic approaches to identifying (specific) causes. The first was to look at cases where the effect is present and look for what is common in all cases. Mill called this kind of reasoning the method of agreement. This method is successful in scenarios where all but one of the possible causes is ruled out by being absent in at least one case where the effect is present, while one of the possible causes is present in all of the cases in which the effect is present. Here is an example which is suited to the method of agreement:

Smith, Jones, Jack and Gill had lunch together at Bob's Diner on Friday, and then later got food poisoning.
They ordered a variety of items from the menu, including lentil soup, Portobello sandwiches, veggie burgers, egg salad, baked potatoes, water and tea. The only item that everyone ordered was the egg salad. Note that this description of the events already suggests a line of investigation. That is, the effect is described as "food poisoning" (rather than, say, a list of symptoms such as vomiting, sweating and queasiness, and leaving out the word "food") and the suspected


causes are the different items of food that were eaten by the four diners. Each of Mill's methods works in this way: we must generate a list of possible causes before we begin applying the methods. These "possible causes" are a subset of the many states that are present prior to the effect. In the case of the diner, we have omitted the facts that they are all breathing in the same odors in the diner, using silverware that was washed in the same machine, touching the same table-cloth, and so on. We can put the information from the scenario in a causation table, as follows. A star (*) indicates the presence of a possible cause or the effect, while a dash (-) indicates its absence:

                       Possible Causes                Effect
    Cases        LS   PS   VB   ES   BP   W    T        FP
    1 (Smith)    *    *    -    *    -    *    *        *
    2 (Jones)    *    -    *    *    -    *    -        *
    3 (Jack)     *    *    -    *    *    *    -        *
    4 (Gill)     -    -    *    *    -    -    -        *

The cases are listed on the left, the effect is on the right, and in between we list the possible causes and whether they were present or absent in each case. In this scenario, the effect is present in all four cases. When we look at the cases, we see that, apart from the egg salad, each suspected cause was absent in at least one case. The four cases agree only with respect to the egg salad. Hence the name "method of agreement".

3. The method of agreement in general looks like this:

(1) There are numerous cases (case1 to casen) in which G is present.
(2) These cases agree only with respect to possible cause F.
-----------------------------------------------------------------------------------
(3) F caused G.

"G" is the explainee; "F" is the reason or cause isolated.

4. In terms of relationship and correlation, what do the data tell us? First, the data tell us that everyone who ate the egg salad became sick. Second, the data suggest that none of the other possible causes were sufficient to make people ill. This suggests that, in this scenario at least, if someone did not eat the egg salad, he did not become ill. However, we do not have any cases which strengthen the inference in this way; there are no cases in which the effect is absent and so there are no cases in which both the suspected cause and the effect are absent, and it would help strengthen our confidence if


we could survey other diners who did not get sick and find that they did not eat the egg salad. Let us add such cases.

7A.3 The Method of Double Agreement

1. Here we expand the information from the first scenario by adding two more diners, Henry and Bill:

Smith, Jack, Gill, Jones, Henry, and Bill had lunch together at Bob's Diner on Friday, and then later Smith, Jack, Gill, and Jones, but not Henry or Bill, got food poisoning. Smith, Jack, Gill, and Jones had different things, with one exception: they all had the egg salad. But neither Henry nor Bill did.

And let us suppose that the causation table for their meal is as follows:

                       Possible Causes                Effect
    Cases        LS   PS   VB   ES   BP   W    T        Ill
    1 (Smith)    *    *    -    *    -    *    *        *
    2 (Jones)    *    *    *    *    -    *    -        *
    3 (Jack)     *    -    -    *    *    *    -        *
    4 (Gill)     -    -    *    *    -    -    -        *
    5 (Henry)    -    *    -    -    *    *    *        -
    6 (Bill)     -    -    -    -    *    -    -        -

In this scenario we see four cases of food poisoning and two without food poisoning, and we see that the only food which matches the pattern of presence or absence of the effect is egg salad. Hence, we can conclude that anyone who ate the egg salad became ill and that anyone who became ill ate the egg salad. This method is called the method of double agreement, as it involves, simultaneously, two types of agreement: only with respect to the egg salad do the cases mirror the presence and absence of the effect. The additional cases, in which the effect is absent, strengthen our confidence in the claim that the egg salad is the cause. These cases make it more likely that the egg salad, and not some other cause that we omitted from the table, is the cause. If the egg salad really is the cause, we should expect to find it absent when the effect is absent. If the egg salad were present in either Bill or Henry's case, the egg salad would not be the cause. (Note that in the table above neither Henry nor Bill ate the soup. There are thus two possible causes (the lentil soup and the egg salad) which have two absences and so mirror the absent effect. The double method, however, applies agreement in both present and


absent cases simultaneously. We are looking for a cause which mirrors the effect in all (six, in this scenario) cases. If one of Henry or Bill had eaten the lentil soup, we could look only at these two cases and use what we might call "negative agreement" (where "the method of agreement" could be called "the method of positive agreement"), and in that case, just like positive agreement (but inverted), it will be necessary that each of the other suspected causes be present in at least one case. When the cases in which the effect is present are analyzed using positive agreement, and the cases in which the effect is absent are (separately) analyzed using negative agreement, this is the "joint method of agreement", or the "indirect method of difference".)

2. In general terms, the method of double agreement looks like this:

(1) There are numerous cases (case1 to casen) in which G is present.
(2) There are numerous cases (casen+1 to casem) in which G is absent.
(3) The only possible cause which has the same pattern of presence and absence as the presence and absence of G is F.
------------------------------------------------------------------------------------------------------
(4) F caused G.

Instead of looking just at cases in which G is present, as in the method of (positive) agreement (or absent, as in a method of negative agreement), in this method the arguer looks at cases of both kinds. Scenarios which provide data of two kinds (present-present and absent-absent) suggest that the two are (universally) correlated. The other possible causes are ruled out by being either absent when the effect is present or present when it is absent.

3. Note: This method is often called the "method of agreement and difference" and described as involving the method of agreement and the method of difference. But, as Mill himself pointed out (p.
259), the method of difference requires that the cases be identical in every respect but one (as we shall discuss in the next section) and, since this is not true in scenarios to which the method of double agreement is applied, the method of difference is not involved in the method of double agreement.

7A.4 The Method Of Difference

1. Mill's second general approach was the method of difference. The method of difference attempts to show that one thing is a cause of another by looking at two cases


which are very much alike, except that the effect is present in one and not the other and the suspected cause is present in one and not the other. There are many obvious scenarios in which we make use of this method. When things are largely the same from one moment to the next but undergo a change, we immediately think of what other changes have happened. For example, if a person is hungry at noon but satisfied at one, and is finishing lunch at one, we think that eating lunch is the cause of the relief from hunger, since this is the only thing that has changed, along with the effect. This kind of thinking can also be employed to isolate a cause from possible causes occurring together. Consider the following scenario:

Jack and Gill had lunch together at Bob's Diner on Friday, and then later Jack, but not Gill, was sick. Neither had lentil soup. They both had a Portobello sandwich and French fries. Jack also had tiramisu for dessert, whereas Gill didn't.

The causation table for this scenario looks thus:

             Possible Causes         Effect
Cases        LS    PS    Fr    T     Ill
1 (Jack)     -     *     *     *     *
2 (Gill)     -     *     *     -     -

In this scenario, we can conclude that the tiramisu is the cause of the food poisoning because the cases are the same in every respect (LS, PS, Fr) except one (T), where they differ. Hence the name method of difference.

2. In general terms, the method of difference looks thus:

(1) There are two cases, one (case1) in which G is present and one (case2) in which G is absent.
(2) The only relevant respect in which the cases differ is that whereas F is present in case1, F is absent in case2.
------------------------------------------------------------------------------------------------------
(3) F caused G.

The presence or absence of F is the only difference between the two cases, and so there is good reason for thinking that F caused G. In particular, the method of difference strengthens the claim that F is necessary for G, that is, that if F were not present, G would not be present either. Based on the above table, it is possible that the tiramisu combined with the Portobello sandwich and fries is the (joint) cause. However, the table provides strong support for the claim that, in this scenario at least, the absence of the tiramisu would mean the absence of the illness, since in the other case (Gill) the other possible causes (the sandwich, the fries) were present but the illness was not.
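The table-reading here is mechanical enough to sketch in code. The following Python fragment is my own minimal illustration, not part of the text: each case is a mapping from possible causes to presence or absence, and a cause is reported only when it is the unique factor that differs between the two cases and its pattern matches the effect's.

```python
# A sketch of the method of difference (illustration only, not from the text):
# two cases, each a dict mapping possible causes to presence (True/False),
# plus the effect's presence in each case. The suspected cause is returned
# only if exactly one factor differs across the cases and that factor is
# present exactly when the effect is present.

def method_of_difference(case1, case2, effect1, effect2):
    if effect1 == effect2:
        return None  # the effect must be present in one case and absent in the other
    differing = [f for f in case1 if case1[f] != case2[f]]
    if len(differing) != 1:
        return None  # more than one relevant difference: the method does not apply
    f = differing[0]
    if case1[f] == effect1 and case2[f] == effect2:
        return f
    return None

# Jack and Gill's lunch (LS = lentil soup, PS = Portobello sandwich,
# Fr = fries, T = tiramisu); Jack (case 1) was ill, Gill (case 2) was not.
jack = {"LS": False, "PS": True, "Fr": True, "T": True}
gill = {"LS": False, "PS": True, "Fr": True, "T": False}
print(method_of_difference(jack, gill, True, False))  # -> T
```

On the second table below, where Jack also has the soup, the same function returns nothing, since two factors differ between the cases.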

To see the power of the method of difference, compare the table above with the following table:

             Possible Causes         Effect
Cases        LS    PS    Fr    T     Ill
1 (Jack)     *     *     *     *     *
2 (Gill)     -     *     *     -     -

In this scenario, Gill did not have the soup, while Jack did, and their meals now differ with respect to two possible causes, the soup and the tiramisu. We are not as confident that skipping the tiramisu would avoid the illness. (We might also consider this table, where Gill has the soup while Jack does not:

             Possible Causes         Effect
Cases        LS    PS    Fr    T     Ill
1 (Jack)     -     *     *     *     *
2 (Gill)     *     *     *     -     -

There is still some support for the claim that the tiramisu was correlated with the illness, if we assume that the lentil soup would not counteract the tiramisu. But the support is not as strong as in a scenario where the diners eat exactly the same items except the tiramisu, since the soup is then taken out of consideration.)

3. The method of difference does not require that the two cases are exactly alike, since no two cases can be exactly alike. An ideal use of the method of difference is when the two cases are two trials made, in a short period of time, on the very same item, by adding or removing the possible cause. (See chapter 9 for more on such trials, also called "controlled experiments".)

7A.5 The Method Of Concomitant Variation

1. The methods described so far involve possible causes and effects which are either present or absent in a given scenario. Mill also recognized that many phenomena vary over time by degree, rather than being either present or not. When one possible cause but not others varies along with the effect, the method of concomitant variation suggests that there is a connection. Consider the following example:

Jack notices that Jim's coat is sometimes thicker than at other times. He thinks it might be due to how much he walks him, or how much food Jim is given, or the temperature. He keeps track of Jim's coat through two cycles of thickening and thinning. Only the variations in the temperature mirror the variations in Jim's coat.


In terms of a causation table, the premises look thus:

             Possible Causes          Effect
Cases        Walk    Food    Temp     Coat
1            *H      *H      *H       *H
2            *L      *H      *L       *L
3            *H      *L      *H       *H
4            *H      *H      *L       *L

As before, an asterisk indicates the presence of a possible cause or the effect, while a dash indicates its absence. *H and *L indicate whether the item is strongly (High) or weakly (Low) present. (H and L can be doubled to differentiate further degrees of strength or weakness. E.g. *HH would indicate the presence of a condition or effect to a greater degree than *H.)

In standard form, the method of concomitant variation looks like this:

(1) G is present to a high degree in some cases and to a low degree in others.
(2) Of the possible causes, only F varies whenever G varies.
-----------------------------------------------------------------------------------------------
(3) F and G are causally connected.

The fact that we have no cases where the effect is absent means that it is possible that the effect is brought into being by something else, while the variation in the "cause" only causes variation of the effect. For these reasons, the conclusion of the argument is that the two are causally connected, rather than that one is the cause of the other.

2. Note that the second premise says only that F and G vary at the same time. The variation might match (that is, when F is present to a greater extent than previously, so is G, and when F is present to a lesser extent, so is G) but the variation can also be inverse (that is, when F is present to a greater extent than previously, G is present to a lesser extent, and when F is present to a lesser extent, G is present to a greater extent). Inverse relationships between states can be found in scenarios which are analyzed by Mill's other methods. That is, some effects are brought on by a negative state of affairs (an absence). For example, a person who has not been vaccinated gets a certain disease. This scenario could alternatively (and perhaps better) be described by saying that having received the vaccine was the cause of not getting the disease. Let "d" in the table below stand for "vaccinated", while "e" is the disease.


             Possible Causes         Effect
Cases        a     b     c     d     e
1            *     *     *     *     -
2            *     *     *     -     *

3. A final wrinkle worth mentioning is that some scenarios involve both states which are either present or absent and states that vary by degree. Such scenarios show that there is not really a distinct method of concomitant variation, but rather that the double method and the method of difference can be used in scenarios involving variation. (The method of agreement cannot, at least when the effect varies, because it is used in scenarios where the effect is present in all cases.) For example, eating a certain food or taking a certain drug might cause quicker recovery from injury or bruising. In these cases, we use *H and *L (for "present to a high degree" and "present to a low degree") in the effect column. In the following table, "d" might stand for "took the medication" while "e" is "speed of recovery".

             Possible Causes         Effect
Cases        a     b     c     d     e
1            *     *     *     *     *H
2            *     *     *     -     *L

The inference that d is the cause of variation in e is obtained by using the method of difference, rather than the method of concomitant variation.

7A.6 The Methods & Cogency

1. As we stated in the introduction, Mill's methods require us to generate a list of possible (specific) causes before beginning. For an instance of one of these methods to be cogent, therefore, there is one key additional premise, which is typically assumed and not expressed:

There are no other possible causes which have the same relation (of presence, absence or variation) to the effect that the selected cause does.*

An argument using any of the methods is sound just in case this premise is true (assuming that the other premises are true as well). For example, for an instance of the method of agreement, the question is whether there is something besides F that the cases of G have in common. Is it true that the only relevant respect in which the cases are alike is F? If, in fact, there is another relevant respect in which they agree, then the argument is unsound. In a bar, for example, one

patron had Jack Daniel's (whiskey) and Coke, while another had Bacardi (rum) and Coke. Both are tipsy. According to the method of agreement, of the three possible causes Jack Daniel's, Bacardi and Coke, the cause is Coke.

             Possible Causes         Effect
Cases        JD    B     C           T
1            *     -     *           *
2            -     *     *           *

This case seems foolish in light of our other experience with soft drinks, which allows us to supply cases where Coke was present but the drinker did not become tipsy, but it is indicative of the limitation of the methods: we should try to seek ever larger samples in search of cases where F and G come apart; this will force us to consider smaller possible causes that larger states have as parts (such as the alcohol that is a part of both the Jack Daniel's and the Bacardi). For more on controlled experiments and how we identify and refine our knowledge of causes, see chapter 8.

Where the method of agreement might remind you of IG (from 6.2), the method of difference is the basis for the scientific method's core idea of the controlled experiment (see 8.5) and the method of double agreement is the core of randomized experimental studies (see 8.4). Samples involving large numbers of cases certainly employ the method of double agreement or the method of concomitant variation rather than the method of difference. You might be tempted to think that the method of difference is being employed because the experiment divides the cases into two groups, the control group and the experimental group, and one group receives the possible cause and the other does not (or receives a placebo). Drug trials, for example, will involve many different rats or other animals, divided into a group which receives the medication and a group which does not. Although there are only two groups, however, each group contains many cases, and there is a lot of variability within each group. Indeed, it is because of this variability that experimenters survey a large number of cases, so that the variability in each group is approximately equal.
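The method of concomitant variation can be given a similar mechanical sketch. The Python below is my own illustration, not part of the text: each case records a degree ("H" or "L") for every possible cause and for the effect, and a cause is reported only when it is the unique factor whose highs and lows line up, directly or inversely, with the effect's.

```python
# A sketch of the method of concomitant variation (illustration only):
# cases give the degree ("H" or "L") of each possible cause; the effect's
# degrees are given separately. A cause is reported only if it is the
# unique factor whose pattern of variation matches the effect's, either
# directly or inversely.

def concomitant_variation(cases, effect_degrees):
    matches = []
    for f in cases[0]:
        pattern = [case[f] for case in cases]
        direct = pattern == effect_degrees
        inverse = ["H" if d == "L" else "L" for d in pattern] == effect_degrees
        if direct or inverse:
            matches.append(f)
    return matches[0] if len(matches) == 1 else None

# Jim's coat example: only temperature varies along with coat thickness.
cases = [
    {"Walk": "H", "Food": "H", "Temp": "H"},
    {"Walk": "L", "Food": "H", "Temp": "L"},
    {"Walk": "H", "Food": "L", "Temp": "H"},
    {"Walk": "H", "Food": "H", "Temp": "L"},
]
print(concomitant_variation(cases, ["H", "L", "H", "L"]))  # -> Temp
```

Because inverse variation counts too, the same function would pick out the temperature even if Jim's coat thinned when the temperature rose.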


7A.7 Summary Of Forms: Mill's Methods

Method of Agreement
(1) There are numerous cases of G.
(2) The only relevant respect in which the cases agree is F.
------------------------------------------------------------------------
(3) F caused G.

Method of Double Agreement
(1) There are numerous cases (case1 to casen) in which G is present.
(2) There are numerous cases (casen+1 . . .) in which G is absent.
(3) The only possible cause which has the same pattern of presence and absence as the presence and absence of G is F.
------------------------------------------------------------------------------------------------------
(4) F caused G.

Method of Difference
(1) There are two cases, one (case1) in which G is present and one (case2) in which G is absent.
(2) The only relevant respect in which the cases differ is that whereas F is present in case1, F is absent in case2.
------------------------------------------------------------------------------------------------------
(3) F caused G.

Method of Concomitant Variation
(1) There are numerous cases in which G is present to a high degree, and there are numerous cases in which G is present to a low degree.
(2) The only relevant difference between the first group and the second group is the degree to which F is present: F is present to a high degree in the first group, and to a low degree in the second.
------------------------------------------------------------------------------------------------------
(3) F and G are causally connected.
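Since these forms are purely pattern-based, they can be checked by machine. As a final illustration (again mine, not the text's; the sample data are hypothetical), here is the method of double agreement in Python: the suspected cause must be present in every case where the effect is present and absent in every case where it is absent, and it is reported only if it is the unique factor with that pattern.

```python
# A sketch of the method of double agreement (illustration only): the
# cause must be present in all effect-present cases and absent in all
# effect-absent cases, and must be the only factor with that pattern.

def double_agreement(present_cases, absent_cases):
    matching = [
        f for f in present_cases[0]
        if all(case[f] for case in present_cases)
        and not any(case[f] for case in absent_cases)
    ]
    return matching[0] if len(matching) == 1 else None

# A hypothetical lentil-soup scenario: only the soup (LS) tracks the
# illness across both the ill and the well diners.
ill = [
    {"LS": True, "PS": True,  "Fr": False},
    {"LS": True, "PS": False, "Fr": True},
]
well = [
    {"LS": False, "PS": True, "Fr": True},
]
print(double_agreement(ill, well))  # -> LS
```

If two factors shared the effect's pattern of presence and absence, the function would report nothing, mirroring the requirement in premise (3) that F be the only possible cause with that pattern.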


Notes For Teachers

Preface

1. The talk of reasoning as an art is intended seriously and is worth constantly reinforcing throughout a course. In my course I constantly use language such as "master of the art" and "flexible response to <the given> passage" in order to convey that reasoning is not (yet) merely machine-code translation and symbol manipulation. The book aims to give students a toolbox of techniques that they can bring to bear when confronted with instances of reasoning, so long as they do not panic. On the other hand, while teachers must often allow that the tools being practiced are sometimes outstripped by the passage under scrutiny, they should take care to insist upon, and highlight successes in, improved skill and insight.

In addition to the items mentioned in sub-section 1 of the preface, this text differs from others in that it includes an emphasis on explanation (contrary to the advice of Walton (1996) p. 58) and induction, in addition to argument. Explanation is included as a segue to correlation and scientific reasoning. Understanding correlation and scientific reasoning is at least as important as (if not more important than) methods for testing for validity. The sections on induction, correlation and scientific methods are very basic; were there a "part 4" to this text, parallel to part 3's discussion of various deductive methods, it would cover Bayes' Rule and statistical analyses as formal methods for calculating conditional probability and updating hypotheses in the light of new evidence.

2. As is noted at the end of this sub-section, there is more material here than can be covered in a 15 week, 3 hour-a-week, term. Each teacher will thus find it necessary to make her own choices. In my own case, I speed through chapter 2 and the early part of chapter 4 and 4.7, and omit 12.7 and 13.5.
Other sections that might be skipped: 4.6, or 4.6 and 5.5, on adding warrants; 6.4 might also be skipped, but it is included in the exercises for chapter 6; 10 (categorical logic) can also be skipped.

Chapter 1

Chapter 1 is beneficial for students to read (and to re-read at the end of the course, as a summary), but in my own case I introduce various of the ideas in the introductory class, before the students have done any reading, and proceed immediately to chapter 2.

1.1 rationality quotient. This idea is from Robin Hanson at overcomingbias.com (see also lesswrong.com). By coincidence, my critical reasoning course (prior to the course on reasoning and logic) also goes informally by the name "Overcoming Bias", as well as "Mental Self-Defense", which is a theme in Johnson & Blair (1993). In both cases, the idea is that the desire to reason in the brain is often weak in competition with the other desires. Logic textbooks assume that students are interested in logic. But it was my experience that they weren't. In an attempt at motivation, my critical reasoning course now includes material (books by Gilovich or Cialdini or both, and various other materials, a few of which are linked to in chapter 1) which show students the dangers that sloppy thinking and bad cognitive habits and attitudes can get them into. I continue to maintain, and believe ever more strongly, that for most human beings, developing a critical attitude and intellectual virtues is more important than mastering deductive logic or even induction. Should you wish to add such features to your critical reasoning or logic course, more resources are available at the web page for my critical reasoning course, http://facultystaff.vwc.edu/~rwoods/criticalreasoning.htm .

1.6 As mentioned in the Preface and here in chapter 1, this book does not cover problem-solving (or decision-making). This is so even though all of the reasoning we do is directed ultimately towards solving some problem and most generally the problem of how to live. A better book in reasoning and critical reasoning might be able to begin from this point and somehow make its readers feel the need for the methods of reasoning covered here, as well as others.
In defense of this book, however, it can be said that it relegates formal logic to part 3 and emphasizes a view of reasoning as an attempt on the part of human beings to generate principles from our experience of the world so that we might understand it and make predictions about it. In the tradition of Toulmin and Johnson & Blair, it attempts, not as successfully as I would like, to keep the practice of argument and explanation to the forefront.

Part 1

Chapter 2

The big departure from other texts in this chapter is the equal footing given to explanation, alongside argument. This is due to the fact that this book (or at least, part 2) takes how we generate generalizations to be just as important as, if not more important than,


how we deploy them. Explanation will in particular be used to drive the concept of correlation, in chapter 7. For some reason, textbooks focus on argument and take pains to exclude explanation. Textbooks describe the differences between argument and explanation by whether or not the conclusion/explainee is already accepted. This difference is the result of a more fundamental difference, of explaining rather than justifying.

2.1.2 To argue is to . No rigorous distinction is made here between trying to argue or explain and succeeding in doing so, but the parentheses around "attempt to" indicate that there is a difference. This leads to a problem: students must be forced to recognize that bad arguments and explanations are still arguments or explanations. This is explicitly stated in 2.6.

Justify belief. This is often shortened to "justify" elsewhere in the book (e.g. the table in 2.3.1), but nowhere does it mean (e.g.) "exculpate".

2.1.3 explanation. Philosophers often distinguish scientific reasoning from other types. Scriven (1962, section 3) has doubts. As the main text says, the examples to be used will focus on scientific explanation (why/how some thing is, to which I would also add what some thing is) but this is only because they are easier. (On the topic of what some thing is: Aristotle in his Metaphysics provides examples of varieties of explanations of the being of some thing. Some things are defined by how the parts are put together, such as mixtures and things made of fastened parts, but there are other ways too: some are defined by some or all of their qualities and the extent to which they have those qualities, some are defined by their position in a larger whole, some are defined by being at the same or different times as something else and some are defined by direction of motion (H.2, 1042b15 ff.).)

"explainer" and "explainee". They're a little barbaric, but these days so too are the Latin terms.
For "explainee" in particular, note that no distinction is made between the explainee as the proposition which expresses the phenomenon and the phenomenon itself. Students at this level have no problem moving between the state and the statement. "Explanation" is reserved for the set of propositions, both explainer(s) and explainee. English is ambiguous in this regard; sometimes "explanation" (also) means just the explainer(s).

already believes the explainee. See 2.2.4.


the reservoir is low. This example comes from David Kaplan, via Salmon (1984, chapter 5).

2.2 repeats 2.1 (as does 2.3.1).

2.2.3 flag words. Lots of books emphasize flag words, but much more often, the language of the passage (especially the tense or the mood), or the context, are more revealing.

2.2.4 axons and dendrite example. Could the axons and dendrites example be construed as an argument which gains support from the claimed existence or presentation of an explanation? We might call this "appeal to explanation": "there is an explanation for "p"; therefore p.". I favor (but only very weakly) the interpretation that such cases begin with an implicit appeal to authority, followed by an explanation. To be further convinced of the truth is not to become (initially) convinced. (For discussion see Walton (1996) section 2.9 and Mayes on exargation (web page, doc).)

2.2.5 begins the active work for the students, as they begin to mark up passages.

2.2.6 obviously bad reasoning. See the remarks on 2.1.2, just above.

2.3 not all arguments are based on an explanatory connection. The difference between why one takes "p" to be true and what makes "p" true is a crucial one for philosophy students to grasp and comes up in passing in every philosophy course (at least at the beginning of a student's career). But it doesn't appear to be given particular attention in reasoning courses (because explanation is discarded) or to have standard tropes for its teaching. In courses on informal logic, this lesson is/can be taught when covering 'appeal to authority'. In this book, I do not discuss it there (when sources are discussed, in 4.4) but here in 2.3 and again at the beginning of chapter 7 (7.2) in the context of adding principles to explanations. The fact that it gets its own section here and comes up again later is an indication both of its importance and that I don't feel that I have really found the best single way to make this point. Feedback is requested.
2.4.7-8 make sense on its own and more than one proposition. The injunctions to make each proposition complete and to split up conjunctions are suggested mainly as an aid when considering the truth of the reasons (chapter 4). As a result of the latter injunction, however, a rule of conjunction (Conj.) is required in chapter 12.

2.4.8 is included because students otherwise go on to break up (incorrectly) disjunctions and conditionals. The time spent making this point is good preparation for chapter 12, even though it is far off.

2.4.9 disordered and confusing passages. Underlying message to students: When things don't look exactly like you expected, don't panic; be confident.

Chapter 3

3.2.2 the split-tailed arrow. In other texts on argument, a distinction is made between dependent and independent premises. This distinction is not used here. The first reason for this is that "independent" premises are typically diagrammed with multiple arrows, one for each premise. This, however, makes diagramming disputes which contain rebuttals to objections difficult, for which see the commentary on 3.6. Further, multiple arrows suggest multiple arguments or explanations. But it's often not clear that this is what the speaker intends; perhaps the support from multiple reasons is being combined, in order to reach some tipping point. The second reason is that the distinction is difficult to maintain. When a piece of reasoning is fully articulate, it is possible to see how the reasons are related, but in many passages the reasoning is not articulate and in others the speaker has only the vaguest idea about the relationship between the reasons. Instead, the text makes a very basic distinction between when we (the audience) cannot tell that the reasons should be combined in some way and when we can tell. Under "combined in some way" one might distinguish different modes of combination, such as the combined support from unrelated reasons and the articulation of a single line of thought in multiple premises (as in, e.g., an instance of modus ponens). No classification of combinations is attempted here because (again) there are too many cases in which it is impossible to distinguish.

3.6 This section (and the next two) include the distinction between evaluating the reasons and the reasoning, which is not explicitly covered until chapter 4, but students do not seem to have a problem grasping the distinction quickly here.

the up-arrow.
To my knowledge, only Austhink (austhink.com/reason), Kelley (Art of Reasoning) and Epstein (Critical Thinking) make an attempt at diagramming arguments involving counterarguments, and only Epstein explicitly extends this to arguments with counterarguments to counterarguments. (The treatments of Johnson & Blair (Logical Self-Defense) and Moore & Parker (Critical Thinking) are rudimentary or overly schematic.) All of Epstein, Kelley and Moore & Parker use arrows with horizontal marks through them (which I'll call "hash-arrows") to represent a

counterargument, while Johnson & Blair use a dashed arrow. The premises which support the counterargument are given regular arrows, and the counterarguments to those get hash-arrows. Two possible confusions result (and these might be two versions of the same thing): First, the supports for the objections have regular arrows. (This is noted by Austhink: "Note that the reason here helps the objection, not the main contention. It provides evidence that the objection is a good one." http://austhink.com/reason/tutorials/Tutorial_4/2_reason_objection/reason_objection.htm) Second, objections to objections look identical to objections, even though the former are indirectly supporting the main conclusion. For these reasons I get rid of the hash-arrow/dashed-arrow and use up-arrows for the objections.

Further, allowing objection arrows to point at either the number representing the reason or the arrow between reason and target is found, to my knowledge, only in Thomas (1986). Objection arrows can point either at the number representing the claim (if its truth is being challenged) or at the inference arrow (made possible in multiple reasons arguments by eliminating the multiple arrows; see 2.5, above) representing the inference being denied. (Additionally, an objection can point to a specific tail, if an argument (perhaps in an incomplete objection) is being objected to piece-meal.)

Alternatively, Kelley (p. 155) and Austhink diagram challenges to inferences by inserting a connecting premise, which must make the sub-argument at least cogent, and then attacking the additional premise. But "piling on reasons" arguments (arguments with multiple reasons) pose a problem for them, since they use multiple arrows but it's sometimes not clear what the connecting premise should encompass.
(It might be possible to teach a two-stage method, adding a first stage, in which there can be multiple reasons diagrammed with multiple arrows and in the second, as just above, there is a single, sufficient, arrow and an added premise which collects the premises.) Further, I use up-arrows for propositions which support objections. In class I make use of the metaphors of a "tug-of-war" and of "clashing tides".

3.6.5 This section contains an example of a split-headed arrow.


3.7 These two points get their own section because they are (especially the first) difficult for students. Even after having drawn attention to these in open class, I find myself repeating them to students when working on exercises.

3.8 I tell my students that editorials are useful because they're so sloppy. But it's really a cherry on a turd: it's hard to keep reminding oneself what great practice one is getting as opposed to succumbing to anger at how poorly they're constructed. Although we have been building up the length and complexity of passages, editorials are a big step up in the level of difficulty, and many students doubt themselves. It is crucial to emphasize that they not panic.

Chapter 4

4.1.3 slogan. With respect to arguments, specifically, the following also makes a good slogan: Are the premises true? (pause) Does the conclusion follow?

properly (convincing/satisfying). Again the difference between trying and succeeding pokes through. It might be helpful to emphasize at this juncture that when we evaluate, we evaluate by comparing with correct reasoning, with standards of reasoning.

explanatory. There doesn't seem to be a parallel to "sound" other than "explanatory". The idea is the same as with arguments: there are standards of reasoning that should be observed when forming explanations.

satisfyingly. While arguments are said to be, pragmatically, convincing, explanations are satisfying.

4.1.4 A related phenomenon is that of source amnesia. See also 4.3.1 on reason substitutes.

4.2 evaluating the various argument structures. This is perhaps unnecessary or redundant, but it provides an opportunity for a set of exercises on the reasons vs. the reasoning. This distinction cannot be repeated enough.

4.3-5 These sections are (to some extent) equivalent to the section(s) on "informal fallacies" found in other texts. No detailed typology is attempted.
Students get the ideas in these sections readily and so it can be approached by working through the exercise(s), with a summary at the end for completeness and perhaps some explicit attention to a few particular items, such as ambiguity (4.4.9).


4.3.2 hasn't really offered. Note the "really". It's not clear that this is a reasoning (specifically, argumentative) context at all. We expect a reason after "That's the dumbest thing I've ever heard." but perhaps Jack is self-consciously not arguing? How would Jack's non-objection be diagrammed? With an arrow but no number?

4.3.5 you do it too. A.k.a. tu quoque.

4.6 adding warrants. First, why is this topic included here, in a chapter on evaluation, and not, as it would be in most text-books, in a chapter on analyzing passages? When adding "missing/hidden/assumed premises" (as they're usually called) the reader is typically instructed to add a proposition which makes the argument valid (or cogent). Thus, it requires this (these) notion(s), which suggests (though does not dictate) that it should be placed after a/the discussion of evaluation. More generally, when adding a missing premise one is thinking broadly about the reasoning, the connection between the particular information and the target. I put this section in chapter 4, then, as a method for getting students to think critically about the reasoning.

Second, what is a warrant? Toulmin talks about the warrant, smartly described as the move from "What have you got to go on?" (i.e. the grounds) to "How are you going to get there?" (i.e. the warrant) (Toulmin, Rieke & Janik p. 44). Less metaphorically, he describes a warrant as a generalized, legitimating principle. I borrow the word from Toulmin, but construe its meaning more broadly. That is, if an audience asks himself "What (else) would I have to believe to grant the conclusion?" the answer might not be a legitimating generalization. To begin with, there are cases where arguers give only a general proposition ("Dogs have tails.") and omit the specific information, which is taken as obvious (e.g. "Jim is a dog."). Similarly, to an argument about three cities, "A is west of B. So, A is west of C."
one might add a piece of particular information, that B is west of (or not east of) C. (The logic of spatial and temporal and prepositional relations is not substantial enough to include as a separate chapter or section. There is a useful table on p. 470 of Goodwin & Johnson-Laird (2005) summarizing the transitivity and symmetry of some prepositional relations.)

Further, there's an issue here (alluded to in the main text's discussion) as to the status of the conditional in (e.g.) an instance of Asserting the Antecedent (a.k.a. modus ponens). This is a warrant. But it is not general. (The general principle is the schema for AA: If "If p then q" and "p", then "q".) Similarly, the disjunction used in disjunctive syllogism is not general, at least insofar as it concerns two or more specific options. (I think Salmon in her Introduction to Logic & Critical Thinking notices this in her treatment of "unstated" premises, as she gives an example where the unstated premise is a disjunction.) On the one hand, this unfortunately means that there's no simple definition of warrant. In the text I use the rather cumbersome "generalization or background proposition (or: information)". On the other hand, the definition of warrant as whatever is needed to allow that the target follows or is explained by the specific information allows for all kinds of warrant, whether they are conditionals or categorical propositions or information about how a sample was collected or the typicality of a case. (It would even allow an audience to entangle herself in Dodgson's What The Tortoise Said To Achilles worry: "Now, if only I had another premise which says "If I have a premise which says "If one thing, then another, and the one, then the other." and I have "If one thing then the other" and "the one", then I could conclude "the other".".)

4.6.3 Each type of reasoning has its own warrants. The types of reasoning and the warrant(s) required are, in brief,

(i) IG/IP: Large size
(ii) IG/IP: Representativeness
(iii) IS: (Near) Universal Relation (All/Most Fs are Gs)
(iv) IS: Typicality (total evidence)
(v) Expl: Correlation
(vi) Expl: Prioricity
(vii) Expl: No common explanation (total evidence)
(viii) IE: Typicality/No interfering factors (total evidence)
(ix) ML: No more likely explainer (total evidence)
(x) AAn: No relevant disanalogies (total evidence)
(xi) Categorical: All Fs are Gs
(xii) AA, CC, HS: If p then q
(xiii) DS, CD, DD: p or q

Section 5.5 (adding warrants to arguments) gives examples of many of these types, though without mentioning by name the different types of reasoning. Sub-section

5.5.3 contains a discussion of conditional propositions. See the remarks on that sub-section, below.

4.7 sincerity and charity. Walton (1996) p. 213 has a useful list of five different principles which might be at work when people share and interpret arguments. By title they are loyalty, clarity, neutrality, charity, and principled preference.

4.7.2 As a general rule, information should not be discarded, but, contrary to what is written here, there is often a need to throw out information in order to make a set of propositions consistent, though the sets involved are usually much larger than a single argument or explanation. Aristotle advises (Metaphysics 2.1) that when information is discarded we should be able to "save the phenomena", say why the speaker thought as she did, why things might have appeared thus-and-so to her.

4.7.3 This sub-section is included because it is a skill in philosophy to attempt to reconstruct an argument as the author might have meant it. This is particularly true, as the first line says, when the author is no longer present (or alive). The alternative would be to move straightaway to taking the argument for oneself and forgetting that it comes from a person.

4.7.4 straw man. The discussion of the straw man fallacy is related to the evasions towards the beginning of 4.3.

4.7.5 left with. Why should one tackle this "found" argument at all? And more generally, what passages are worth spending time on? Whose opinions should we listen to? The ancients? The moderns? The many? The wise?

Chapter 5

5.1-2 validity. Toulmin and Johnson & Blair dispense with the distinction between valid and cogent support, on the grounds that most reasoning is cogent (at best) because arguments are (almost always) about material objects. But to my mind the set of artificial contexts (and natural laws), in which validity is possible, is large enough to warrant introduction of the concept and term validity.
5.2.1 In general, the actual truth or falsity of the premises does not tell you whether the argument is valid. There is, however, the case of true premises and a false conclusion, which rules out a valid argument. And there are some contexts in which this is useful, such as paradoxical arguments, where the conclusion contradicts sensation or is in some other way obviously false. However, I have found that introducing this early confuses students and that it is

better to keep the two tasks (the truth of the premises, the strength of the reasoning) separate.

5.3.2 bank is open on Saturday. From DeRose (1992).

5.4 can be skipped if students have clearly mastered the concepts, but often they have not, and another opportunity for a pair of exercises on validity vs. cogency is provided.

5.5 See the remarks on 4.6, above. In 5.5.3 some of my worries about the status of the material conditional come out. The worry is not, precisely, that the premise to be added is not general (as 5.5.4 implies), but that it is at the same level of generality as the other premise and the conclusion. The "general" principle of 5.5.4 is more precisely a "more general" principle. But, as is stated at the end of the sub-section, such premises are useful in articulating the connection to the conclusion even if they are already implied by the fact of presenting an argument itself.

5.5.4 conditional proposition types of item. Strictly speaking, these are categorical propositions (All Fs are Gs), rather than conditionals.

Part 2

Chapters 6, 7, 8 (and 9) present induction and scientific reasoning. 6 and 7 concern relation and correlation (Mill's methods can be found in appendix 7A), while 8 repeats the idea of correlation in the guise of necessary and sufficient conditions. 9 presents arguments using generalizations (and analogy).

Chapter 6 Induction

6.1.1 Induction. There are two main issues in the background: the distinction between induction and deduction, out of which arises (secondly) the definition of induction. An argument is typically (or at least sometimes) said to be deductive if its conclusion follows necessarily from the premises, or is intended to follow necessarily from the premises, while an argument is inductive if the conclusion follows probabilistically from the premises, or is intended to follow probabilistically from the premises. Now, anyone can intend anything.
Does inserting the word "necessarily" into a probabilistic syllogism make it deductive? Surely not. (And 5.4.3 accordingly instructed students to ignore what speakers think of their arguments.) Perhaps then we should remove the "intended" and stick only with successful arguments? But then the deductive-inductive distinction is the same as the distinction between valid and cogent arguments.

The distinction between deduction and induction is sometimes expressed alternatively in terms of moving from the general to the specific (deductive) vs. from the specific to the general (inductive), but this is thought not to hold water, since argument from analogy and instantiation/statistical syllogism each has a specific conclusion but is "inductive" in the sense of "cogent (at best)". As a third alternative, Johnson-Laird (2006, p. 4) maintains that we would do better to talk about the informativeness of the propositions, by talking about the alternatives that are ruled out by them. Conclusions in inductive arguments close off more possibilities than the premises, while deductions only make explicit what is implicit in the premises. I think he is right about deduction (I take deduction to describe or rely on a definite or constrained environment with well-defined entities) but I disagree with his use of "induction" to cover "everything else". This overextends the meaning of "induction", and so I narrow it. If we distinguish between inferences which generate generalizations and those which employ them (or, to put it another way, if we distinguish ascending from descending inferences, the way up from the way down), the line of thinking which distinguishes deductive from inductive arguments on the grounds that one moves from general to specific while the other moves from specific to general is correct about induction: induction generates generalized propositions from specific cases. And this is how I use the term. I use "induction" to mean arguments which move from observed cases to unobserved cases on the basis of the number and frequency (of co-occurrence of features) alone.
6.1.2 The phrase "universal or near-universal" is important to stress.

6.1.4 employed in arguments. Instantiation syllogisms with probabilistic "major" premises are not inductions, even though they are treated in this chapter. (Chapter 9's Argument By Analogy is thus not inductive either, since the co-occurring features mentioned in the argument must additionally be elements within a structure.)

6.2 Inductive Generalization. There are other names, but this one seems to be most accessible and memorable for students.

6.2.1 Important Note. This point has been made before, in 2.4.9 concerning clean vs. messy passages and 3.9 on editorials, and is worth emphasizing.

general form. In this chapter, and beyond, (quasi-)formalizations are given for various forms of argument. The terms "formal" and "informal" are typically used to describe whether or not the strength of the inference can be evaluated without regard to the content of the premises. Formal logic is the evaluation of inference by examining only the form of the argument, in order to see whether it matches forms which are thought to indicate a strong connection, or whether its conclusion can be derived from the premises by use of such rules. Formal logic is typically symbolic, since symbols are used to keep us from being distracted by the specific content and focused on the structure of the argument. Evaluation of inferences which criticizes them in terms of the specific inference between the specific propositions, appealing to the specific content and without appealing to general forms (or, e.g., to parallel arguments), is informal. To my mind, however, the distinction between formal and informal is one of degree rather than type. Every argument can be given some kind of formal treatment, and no formal treatment is immune from having to look at the specifics of the argument in question.

Take as an example the argument "The (U.S.) President says it will rain this afternoon at the White House. So, it will rain this afternoon at the White House." In evaluating the inference informally, we think about whether the premise ("The President says it will rain this afternoon at the White House.") could be imagined to be true and yet the conclusion ("It will rain this afternoon at the White House.") be false. The basic problem with this argument is that we don't know whether or not we should trust the President when he predicts the weather.
We might be convinced by this argument if we added to it propositions such as "The President is an expert on weather." (and "The President is unbiased." and perhaps various other things). This is our informal evaluation of the argument: we are inclined not to believe the conclusion because of worries about the President's expertise. We are then led to think about the meaning and truth of this additional proposition (and any others). If we can be assured of its truth, we accept the conclusion. To give a formal treatment of this argument, we must abstract the concrete information from the propositions given and compare what remains against, or derive it from, the argument forms we have pre-determined as being strong. If we abstract

away as much as possible we get (perhaps) "Asserter-A asserts "Proposition-p".". This form is not one we will find in our store of strong inference forms. We could, however, add generalized forms of the additional propositions ("The President is an expert on weather." and so on) as additional premises, to give a fuller form of what is commonly known as an argument from authority: "Asserter-A asserts "p". Asserter-A is an expert on the subject matter of p. Asserter-A is unbiased with respect to p. p is not controversial amongst experts. So, p." Even this form, however, might not be thought sufficiently abstract, as it contains unanalyzed terms such as "expert", "unbiased" and others. Abstraction might also be inadvisable, because the particulars of the argument sometimes make a difference. Philosophers debate as to how much abstraction must take place in order for an argument to be formalized. This is the problem of logical constants. The hope is that there are some forms which are universally valid. But instances of even the most basic forms of propositional logic can be invalid, due to a connection between the items mentioned in the propositions. For example, the argument form commonly known as disjunctive syllogism ("<proposition 1> or <proposition 2>. It is not the case that <proposition 1>. So, <proposition 2>.") generally gives us true conclusions when the premises are true, but if proposition 1 is "The table is colored." and proposition 2 is "The table is brown." then the conclusion "The table is brown." is false, even though the premises are true. (This is because, if the table is not colored, it cannot be brown: there is a very tight connection between being brown and being colored, one which over-rides the connection between the propositions in the argument.)
The logic of determinate-determinable, and other relations between entities and properties, would require too much articulation of particular relations to be useful when compared with ordinary grammatical ability. (For discussion of formalization, see MacFarlane's (2005) entry on Logical Constants in the SEP, Johnson-Laird (2006) Ch. 12, and Massey (1981).)

6.2.1 pattern of absences. This is the first mention of the four possibilities, treated in more detail in chapters 7 and 8 (and also in appendix 7A on Mill's methods). In this chapter, it is also mentioned in 6.3.2. It's a good idea to drop this into the course here, in advance of chapter 7's discussion of correlation.

6.2.3 add the generic premises. Students are thus required to think of the qualities of good samples every time.


6.2.8 the total evidence rule. This bears emphasis. We'll see it repeatedly in a variety of contexts. In this chapter, it is also mentioned in 6.3.5, the typicality requirement for IS.

6.3 Instantiation Syllogism. As with IG, a slightly different name than the usual one (it is most commonly called statistical syllogism), but one that seems to work well.

6.3.4 To repeat from the note on formalization just above: that the number needed to make the argument cogent cannot be determined in advance shows that this argument "form" is not truly formalized. That is, what F and G in fact are can be important to the cogency of the argument.

6.4 Induction To A Particular. I'm inclined to think there's always a generalization step. But IP is included in any case. It serves as an opportunity to repeat IG and IS.

Chapter 7 Evaluating Explanations

7.2.5 non-dogs. There is a problem here (not discussed) about how to define the class of non-dogs. Do we mean: other pets? Other animals? Other entities?

visually represent. These diagrams are a version of the diagrams in Giere (1997). His book, Understanding Scientific Reasoning, in any edition, is highly recommended as background for the issues in chapter 7.

7.3.4 instruction in statistics. Giere's book (mentioned just above) continues beyond this point with minimal mathematics.

7.5.1 G does not occur prior to F. This applies both to case1 and to the correlation between F and G generally.

Chapter 8

8.2.1 In this section, the understanding of explanations is made more complex in the light of defeaters. Toulmin (1958) puts the defeater after the reasoning (he calls it a qualifier), as a possible exception about which the speaker is ignorant in the particular case. Qualifiers weaken the strength of the explanation; if they do so too much, they must be built into the explanation and accounted for in the particular case. If they are highly unlikely, they can be left unchecked.

8.6 In the name "Inference To the Best Explanation" the word "to" is important.
In other words, IBE is here understood as a creative abduction: we argue from the data to a novel explainer or novel explanation.

There is much confusion between this form of reasoning and inference to the best available explanation (whether explainer or connecting generalization), which I call inference to the most likely explanation (ML). Abduction of the sort that Holmes and House do is to the likeliest explanation; it is only in rare circumstances that we have two explanations that are equally likely and must resort to the features discussed in 8.6, features which have nonetheless attracted much attention in the philosophy of science.

Chapter 9

9.4 What can be said about analogy except that it confuses everyone? Here, I present it as involving a relation more complex than simple co-occurrence; these relations can be very simple (as in the example of the tail as a part of the dog). It is presented in the main text as an argument, but even as an argument it is used in a problem-solving way. It is often incogent, but suggestive.

Part 3

Part 3 is a standard coverage of deductive logic and there are fewer comments about this part than about previous parts. A few innovations are noted at the start of each chapter's notes, below.

Deduction. I do not know what deduction is. It might reduce to validity. That is how it is treated here. But I think IS and IE might be deductions. (And so Sherlock Holmes might be correct when he claims to make deductions.) The term is used here only because it is traditional. One way I think of part 3 is as concerning inferences in which the propositions are all at the same level of generality, whereas inductions move from particular to general (which I think of as moving up), and instantiations move from general (plus particular) to particular (moving down).

Chapter 10

Venn Diagram Method. A brief and rudimentary treatment of the Venn diagram method for categorical logic is included because some students benefit from the pictorial representation.
It is also hoped that a chapter will be added, perhaps as rudimentary as the chapter on categorical logic, on predicate logic, at the end of the book, in order to complete a (brief) narrative of categorical logic + propositional logic => predicate logic. In general, though, categorical logic is over-rated and over-treated in the

books that treat it. The propositional calculus provides the same kind of logical training but with a greater chance of being used in other studies and in real life.

10.4 The problem of the existential commitment of universal propositions is analyzed and dissolved by Toulmin (1958) pp. 113-118. Time can be saved by simply pointing out that an asterisk must be used and omitting the discussion of existential commitment.

The modern logician. Aristotle of course assumed commitment in his logic, since logic was a tool of (physical) science.

Chapter 11

The Big 8 method as a gradual introduction to propositional logic was an idea of Bill's and much of this chapter bears his imprint. Chapters 11 and 12 are largely independent. (The important exceptions are 11.4 and 11.5, which indicate how various English constructions can be rendered in terms of the four connectives.) If you would like to have your students move immediately to symbolic propositions and to derivations, provide a translation key and proceed directly to chapter 12.

11.2/11.4 Students might raise the difference between inclusive- and exclusive-or in connection with these sections. It is explicitly mentioned in 13.2.2.

11.6-9 I change the names of the basic forms from their more traditional "affirming" and "denying" to "asserting" and "contradicting" because students get confused by the fact that one can affirm a negative proposition and deny a negative proposition by affirming its content. People for whom logic codes their thought and who are already simply pattern-matching have no problem with such an idea, but many of my students are not those people.

11.10 constructive dilemma; destructive dilemma. In some textbooks, the conditionals are conjoined.

Chapter 12

I try to keep the emphasis in this chapter on treating actual arguments. Too quickly, derivation gets cut loose from real life. Thus, for example, interim conclusions are maintained in 12.4 and in 12.5.


12.5 Johnson-Laird (rightly) criticizes addition as being a loss of information (2006, p. 12), but it does have a function when an antecedent is a disjunction, and that is how it is introduced here. Similarly, simplification in 12.5.4 does have a purpose, even though chapter 2 instructed students to remove conjunctions involving simple propositions at the analysis stage.

12.6 any part of a proposition. This cannot be stressed and repeated enough.

Chapter 13

Truth trees are the most elegant method, but I find that truth tables are required for an understanding of the rules, and so one might as well do the truth table method. Also, the targeted truth table method makes a good segue to truth trees.

13.2.2 exclusive-or, inclusive-or. The English "... or else ..." is a better indicator of exclusive-or, but many uses of "or" are intended as exclusive. If we are worried about informativeness (and we are), we should give a full translation of exclusive-or. It can then be simplified if necessary for the derivation.

conditional. I try not to deviate from what I have written here, as anything else is likely to be false and/or confusing. Many students do not understand the explanation here the first time around and need to consider the text slowly by themselves. Some never grasp it.

13.6 targeted truth tables. You can hold out the promise of targeted truth tables to smarter students who get the basic truth table method quickly.

13.7.3 after the decomposition of each proposition. This needs to be emphasized.

Appendix 6A

This appendix of philosophical issues in induction can be used for a separate discussion section or as reading for advanced students, or pieces of it can be included as they arise in covering part 2.

Appendix 7A

Mill's methods have been superseded, but for some reason they are included in a majority of logic texts. This is strange for the further reason that there is controversy over what precisely Mill meant by the double and joint methods. In this appendix I

separate the double and joint methods. See the (long) parenthetical notes at 7A.3.1 (along with 7A.3.3) and 7A.4.2.

Bibliography

Allan, Keith & Burridge, Kate (1991) Euphemism And Dysphemism, Oxford University Press
Borgida, E. & Nisbett, R. E. (1977) 'The differential impact of abstract and concrete information on decisions', Journal of Applied Social Psychology 7, 258-271
Cohen, Morris & Nagel, Ernest (1934) An Introduction to Logic and Scientific Method, Harcourt Brace
Darwin, Charles (1962) The Origin of Species, Collier
Dodgson, Charles (1895) 'What The Tortoise Said To Achilles', Mind n. s. 4
Engel, S. Morris (1994) Fallacies and Pitfalls of Language, Courier Dover
Epstein, Richard L. (2002) Five Ways Of Saying "Therefore", Wadsworth
Feldman, Richard (2002) Epistemology, Prentice Hall
Fogelin, Robert and Sinnott-Armstrong, Walter (2000) Understanding Arguments (6th ed.), Thomson Learning
Fumerton, Richard (1980) 'Induction and Inference To The Best Explanation', Philosophy of Science 47
Goldman, Alvin (1979) 'What Is Justified Belief?' in George Pappas (ed.) Justification and Knowledge, D. Reidel
Giere, Ronald (1997) Understanding Scientific Reasoning, Harcourt Brace
Gilovich, Thomas (1991) How We Know What Isn't So, The Free Press
Goodwin, Geoffrey P. & Johnson-Laird, Philip (2005) "Reasoning About Relations", Psychological Review 112.2
Gopnik, Alison (1998) "Explanation As Orgasm", Minds And Machines 8. Updated as Gopnik, Alison (2000) "Explanation as Orgasm and the Drive for Causal Knowledge: The Function, Evolution, and Phenomenology of the Theory Formation System", in Keil, Frank & Wilson, Robert (eds.) (2000) Explanation and Cognition, MIT Press


Gould, Stephen Jay (1980) "The Panda's Thumb", in Robert T. Pennock (ed.) (2001) Intelligent Design Creationism And Its Critics: Philosophical, Theological, And Scientific Perspectives, MIT Press
Hitchcock, Christopher 'Probabilistic Causation' in the Stanford Encyclopedia of Philosophy, Edward Zalta (ed.) http://plato.stanford.edu/entries/causation-probabilistic/
Johnson, Ralph H. and Blair, J. Anthony (1993) Logical Self-Defense, McGraw-Hill
Johnson-Laird, Philip (2006) How We Reason, Oxford University Press
Juthe, A. (2005) 'Argument By Analogy', Argumentation 19
Kahneman, Daniel, Slovic, Paul & Tversky, Amos (eds.) (1982) Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press
Kahneman, Daniel & Tversky, Amos (eds.) (2000) Choices, Values and Frames, Cambridge University Press
Kitcher, Philip (1982) Abusing Science: The Case Against Creationism, MIT Press
MacFarlane, John (2005) 'Logical Constants' in the Stanford Encyclopedia of Philosophy, Edward Zalta (ed.) http://plato.stanford.edu/entries/logical-constants/
Mackie, J. L. (1965) 'Causes And Conditions', American Philosophical Quarterly 2.4
Massey, Gerald (1981) 'The Fallacy Behind Fallacies', Midwest Studies In Philosophy 6.1
Mill, John Stuart (1843) A System Of Logic, Longman, Green & Co. (Page references to the eighth edition, 1906.)
Nisbett, Borgida, Crandall, and Reed (1982) 'Popular Induction', in Kahneman, Slovic & Tversky (eds.)
Salmon, Wesley (1984) Scientific Explanation and the Causal Structure of the World, Princeton University Press
Scriven, Michael (1962) 'Explanations, Predictions and Laws', Minnesota Studies in the Philosophy of Science, Vol. III
Scriven, Michael (1964) 'The Structure of Science' (review of Ernest Nagel's The Structure of Science), Review of Metaphysics 17.3
Smedslund, Jan (1963) 'The Concept of Correlation in Adults', Scandinavian Journal of Psychology 4.1
Sosa, Ernest & Kim, Jaegwon (2000) Epistemology: An Anthology, Blackwell

Toulmin, Stephen (1958) The Uses of Argument, Cambridge University Press
Toulmin, Stephen, Rieke, Richard & Janik, Allan (1979) An Introduction To Reasoning, Macmillan
Tversky, Amos & Kahneman, Daniel (1981) 'The Framing of Decisions and the Psychology of Choice', Science Vol. 211, No. 4481 (Jan. 30, 1981)
van Heuveln, Bram (2000) 'A Preferred Treatment of Mill's Methods', Informal Logic 20.1
Walton, Douglas (1996) Argument Structure: A Pragmatic Theory, U. of Toronto Press
Walton, Douglas (2005) Fundamentals of Critical Argumentation, Cambridge University Press
Woodward, James (2009) 'Scientific Explanation' in the Stanford Encyclopedia of Philosophy, Zalta, Edward (ed.) http://plato.stanford.edu/entries/scientific-explanation/

Summary Of Forms, Terminology, Etc.

Part 2 Induction

% is a proportion
"F" etc. are types of thing
"F1" etc. are particular instances of types
"a" etc. are properties
"x" and "y" are states of affairs

Inductive Generalization (IG)
(1) In case1 ... casen, F is present.
(2) In % of case1 ... casen, G is also present.
(3) The sample is large.*
(4) The sample is unbiased.*
J ---------------------------------------------------
(5) In roughly % of cases of F, G is also present.

Instantiation Syllogism (IS)
(1) In casei, F is present.
(2) In roughly % of cases which are instances of F, G is also present.
(3) Casei is believed to be a typical instance of F with respect to G.*
J -------------------------------------------------------------------------------------
(4) In casei, G is present.

Induction to a Particular (IP)
(1) In case1 ... casen, F is present.
(2) In % of case1 ... casen, G is present.
(3) In casen+1, F is present.
(4) The sample is large enough.*
(5) The sample is unbiased.*
J ---------------------------------------------
(6) In casen+1, G is present.

Correlation (Corr.)
(1) In % of cases where F is present, G is also present.
(2) In % of cases in which F is absent, G is present.
(3) The two percentages are significantly different.
J ------------------------------------------------------------------
(4) F and G are correlated.

Explanation/Cause (Expl.)
(1) In case1, F is (also) present.
(2) F and G are correlated.*
(3) G does not occur prior to F.*
(4) There is no common explanation of F and G.*
E ----------------------------------------------------------
(5) In case1, G is present.
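The Correlation form can be computed directly from a list of cases: compare the percentage of cases showing G among the F-cases with the percentage among the non-F cases. A minimal Python sketch (the helper name and the significance threshold are illustrative assumptions, not the text's; it assumes both groups are non-empty):

```python
def correlated(cases, threshold=0.2):
    """cases: list of (f_present, g_present) booleans.
    F and G count as correlated when P(G | F) and P(G | not-F)
    differ by more than the (illustrative) threshold."""
    g_with_f = [g for f, g in cases if f]
    g_without_f = [g for f, g in cases if not f]
    p1 = sum(g_with_f) / len(g_with_f)        # proportion of G among F-cases
    p2 = sum(g_without_f) / len(g_without_f)  # proportion of G among non-F cases
    return abs(p1 - p2) > threshold, p1, p2

# 8 cases: G is present in 3/4 of the F-cases but only 1/4 of the non-F cases.
cases = [(True, True)] * 3 + [(True, False)] + \
        [(False, True)] + [(False, False)] * 3
print(correlated(cases))  # (True, 0.75, 0.25)
```

Note that this only delivers premise (3) of the Corr. form; whether the difference is "significant" in the statistical sense requires the sample-size considerations discussed in chapter 7.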

Sufficient Condition, Necessary Condition and INUS Condition
F is sufficient for G: if F is, G is
F is necessary for G: if F is not, G is not
F is an INUS condition for G: F is an Insufficient but Necessary part of a joint condition which is Unnecessary but Sufficient for G

Inference To The Best Explanation (IBE)
(1) G1.
(2) G would be explained by F.
(3) Of the available explanations for G, F is the best.(*)
J ----------------------------------------------------------------
(4) F.

Inference To An Explainee (IE)
(1) In case1, F is instantiated.
(2) In roughly % of cases which are instances of F, G is also instantiated.
(3) G is not instantiated prior to F.
(4) Case1 is believed to be a typical instance of F with respect to G.*
J ------------------------------------------------------------------------------------------
(5) In case1, G is (will be) also instantiated.

Inference To The Most Likely Explainer (ML)
(1) G1.
(2) G is explained by F.
(3) Of available explanations for G, F is the most likely.(*)
J --------------------------------------------------------------------
(4) F1.

Argument By Analogy (AAn)
(1) Explanation X (involving elements of type F, G, (H, I, J, ...)).
(2) In case1, F, G, (H, I, J, ...) are instantiated.
(3) In case2, F', (H', I', J', ...) is instantiated.
(4) There are no disanalogies between the analogues that are relevant to G.*
J ----------------------------------------------------------------------------------------------
(5) In case2, G' is also instantiated.

Or

(1) Case1 instantiates F, (H, ...).
(2) Case2 instantiates F', (H', ...).
(3) Case1 instantiates G.
(4) F, (H, ...) is inter-related with G.(*)
(5) There are no relevant disanalogies between case1 and case2.*
J ---------------------------------------------------------------------------
Case2 instantiates G'.
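The sufficiency and necessity definitions can be applied mechanically to presence/absence data, which also makes the INUS idea concrete. A Python sketch (the short-circuit/oxygen example is my own illustration in Mackie's spirit, not an example from the text):

```python
def sufficient(cases, f, g):
    """f is sufficient for g iff in every case where f holds, g holds."""
    return all(case[g] for case in cases if case[f])

def necessary(cases, f, g):
    """f is necessary for g iff in every case where f fails, g fails."""
    return all(not case[g] for case in cases if not case[f])

# Illustrative cases: a short circuit (S) with oxygen (O) yields fire (G);
# fire also occurs once without a short circuit (lightning, say).
cases = [
    {'S': True,  'O': True,  'G': True},
    {'S': True,  'O': False, 'G': False},
    {'S': False, 'O': True,  'G': True},   # fire from some other joint condition
    {'S': False, 'O': True,  'G': False},
]
print(sufficient(cases, 'S', 'G'))  # False: S alone is not sufficient
print(necessary(cases, 'S', 'G'))   # False: S is not necessary
# But the joint condition S & O is sufficient (though unnecessary),
# and S is a necessary part of it: S is an INUS condition for G.
for c in cases:
    c['S&O'] = c['S'] and c['O']
print(sufficient(cases, 'S&O', 'G'))  # True
```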

Part 3 Deduction

Logically Structured English

Disguised Negations
(a) The cat is not on the mat.
(b) No doctors are rich.
(c) Socrates is unmarried.

Disguised Disjunctions
(a) Jack is either a philosopher or a bus driver.
(b) The team will win either this week or next.
(c) At least one of the two girls, Gill and Kofi, will get to the top of the mountain.

Disguised Conjunctions
(a) S, but T.
(b) Neither S nor T.
(c) S. Moreover, T.
(d) S. Nonetheless, T.
(e) Although S, T.

Disguised Conditionals (a)-(d) / Involving Conditionals (e)-(g)
(a) T if S.
(b) S only if T.
(c) Provided that S, T.
(d) S is sufficient for T.
(e) Unless S, T. > If not S, then T.
(f) S is necessary for T. > If not S then not T.
(g) S if and only if T. > (If S then T) and (if T then S)

The Big 8 Method

The following six forms are valid:

Asserting the Antecedent (AA)
(1) If S then T.
(2) S.
-------------
(3) T.

Contradicting the Consequent (CC)
(1) If S then T.
(2) not T.
-------------
(3) not S.

Hypothetical Syllogism (HS)
(1) If S then T.
(2) If T then U.
--------------
(3) If S then U.

Disjunctive Syllogism (DS)
(1) S or T.
(2) not S.
--------
(3) T.

Constructive Dilemma (CD)
(1) If S then T.
(2) If U then V.
(3) S or U.
---------------
(4) T or V.

Destructive Dilemma (DD)
(1) If S then T.
(2) If U then V.
(3) not T or not V.
------------------
(4) not S or not U.

The following two forms are invalid:

Asserting the Consequent (AC)
(1) If S then T.
(2) T.
-------------
(3) S.

Contradicting the Antecedent (CA)
(1) If S then T.
(2) not S.
-------------
(3) not T.
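These validity and invalidity claims can be checked by brute force over truth assignments. A minimal Python sketch (the function names are my own, not the text's):

```python
from itertools import product

def valid(premises, conclusion, letters):
    """A form is valid iff no assignment of truth values to its letters
    makes all the premises true and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        a = dict(zip(letters, values))
        if all(p(a) for p in premises) and not conclusion(a):
            return False  # found a counterexample assignment
    return True

implies = lambda p, q: (not p) or q  # truth-functional "if p then q"

# Asserting the Antecedent (AA): If S then T. S. So, T.
aa = valid([lambda a: implies(a['S'], a['T']), lambda a: a['S']],
           lambda a: a['T'], ['S', 'T'])

# Asserting the Consequent (AC): If S then T. T. So, S.
ac = valid([lambda a: implies(a['S'], a['T']), lambda a: a['T']],
           lambda a: a['S'], ['S', 'T'])

print(aa)  # True: AA is valid
print(ac)  # False: AC is invalid (S false, T true is a counterexample)
```

As the note on formalization in chapter 6 warns, this check is only as good as the truth-functional reading of the connectives; it says nothing about content-driven failures like the colored/brown table.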

The Method Of Derivation

Asserting the Antecedent (AA)
(k) S → T
(l) S
(m) T          k, l, AA

Contradicting the Consequent (CC)
(k) S → T
(l) ~T
(m) ~S         k, l, CC

Hypothetical Syllogism (HS)
(k) S → T
(l) T → U
(m) S → U      k, l, HS

Disjunctive Syllogism (DS)
(k) S ∨ T
(l) ~S
(m) T          k, l, DS

Constructive Dilemma (CD)
(k) S → T
(l) U → V
(m) S ∨ U
(n) T ∨ V      k, l, m, CD

Destructive Dilemma (DD)
(k) S → T
(l) U → V
(m) ~T ∨ ~V
(n) ~S ∨ ~U    k, l, m, DD

Addition (Add.)
(k) S
(l) S ∨ T      k, Add.

Conjunction (Conj.)
(k) S
(l) T
(m) S & T      k, l, Conj.

Simplification (Simp.)
(k) S & T
(l) S          k, Simp.

Exportation (Exp.)
S → (T → U) <--> (S & T) → U

Transposition (Trans.)
S → T <--> ~T → ~S

Material Implication (MI)
S → T <--> ~S ∨ T
S → T <--> ~(S & ~T)

De Morgan's Rule (DM)
~(S & T) <--> ~S ∨ ~T
~(S ∨ T) <--> ~S & ~T

Double Negation (DN)
S <--> ~~S

Commutation (Comm.)
S ∨ T <--> T ∨ S
S & T <--> T & S

Association (Ass.)
(S ∨ T) ∨ U <--> S ∨ (T ∨ U)
(S & T) & U <--> S & (T & U)
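As a small illustration of how these rules combine in a derivation (the argument is my own, not one of the book's exercises):

```
(1) S → T        Premise
(2) S ∨ U        Premise
(3) ~T           Premise
(4) ~S           1, 3, CC
(5) U            2, 4, DS
```

Each derived line cites the earlier lines and the rule used, as in chapter 12.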

Truth Tables

S | ~S
T |  F
F |  T

S T | S ∨ T
T T |   T
T F |   T
F T |   T
F F |   F

S T | S & T
T T |   T
T F |   F
F T |   F
F F |   F

S T | S → T
T T |   T
T F |   F
F T |   T
F F |   T

Truth Trees

Non-branching:
S & T       decomposes to  S, T
~(S ∨ T)    decomposes to  ~S, ~T
~(S → T)    decomposes to  S, ~T
~~S         decomposes to  S

Branching:
S ∨ T       branches to  S | T
S → T       branches to  ~S | T
~(S & T)    branches to  ~S | ~T
