Phil210 Notes PDF
Having reason
• The most basic kind of communication for each of these purposes is the practice of
presenting statements as true
• To present some claim as if it were true is to assert it
• A great deal of our communicative exchanges consist of assertions, as we go about
telling one another the facts as we see them
• To assert is to undertake a kind of obligation: the obligation to defend or retract the
assertion in the face of questioning or when confronted with evidence to the contrary
• For this reason the fundamental units of rational exchange are not assertions, but
arguments
• Argument = the presentation of reasons
• An argument is a set of statements that are presented as true and that have a very
important internal relation:
o Some statements are premises – intended to provide rational support for a
further statement, the conclusion. An argument is premises given in
support of a conclusion
Property of an argument that succeeds in supporting its conclusion is
soundness
• This property can be broken down into two sub-properties:
that it is valid, and that it has all true premises
• This approach stresses the fact that arguing is a process, one that occurs in a
communicative context
• Argumentation is a practice by which we aim to show the reasonableness of an
assertion, up to whatever standard of reasonableness is called for in that context
• Presenting an argument is a way of making good on the obligation to support an
assertion
• Argument can be a means of education or of explanation
• A good argument is the presentation of a collection of premises that jointly are rationally
persuasive of a conclusion. Taken together, the premises make it reasonable to believe
the conclusion
o Intuitionism also sets the bar higher for certain kinds of proof, since, without
excluded middle, you can't just take a disproof of not-P as a proof of P
o You need a direct proof in intuitionistic logic
• Dialetheic logic – keeps the law of excluded middle, but gives up (or restricts) the law of
non-contradiction
o The latter tells you that if a collection of propositions contains a contradiction, the
collection is incoherent. But in some situations, this produces such unwelcome
consequences that we should limit the application of non-contradiction
• Explanation is a form of reasoning that is broadly distinct from argument while often
overlapping with it
• Arguments aim at showing some statement to be worth believing, while explanations aim
to make better sense of something already believed
• Sometimes an explanation is causal, describing (in part) the prior conditions that caused
some event
• Other times an explanation aims to rationalize, to order reasons and definitions, or to
sort priorities according to principles of reasoning
• A specific key virtue of good arguments is shared with explanations: both arguments and
explanations are supposed to teach us something
• Explanations are open to an analogous problem of pseudo-explanation, primarily a matter
of providing a triviality or a mere label when an explanation is called for
• An argument that would not support its conclusion even if its premises were true
can be rejected without checking out the truth of its premises
• If it’s invalid, then it’s unsound irrespective of whether the premises are true
• A sound argument is a valid argument with all true premises
• Validity is a structural property of arguments
• We are not concerned with whether the particular premises are actually true when
evaluating the validity of some argument, but only with whether the conclusion
would be true if the premises were true
• One valid form of argument is called Modus Ponens. Its structure is as follows:
1. If P then Q
2. P
3. Therefore, Q
• Having true premises does not necessarily mean that P and Q are themselves true
• If one premise in an argument is not-P, then only a false sentence P will make the
premise not-P true
• As long as we pick P and Q in such a way that its true both that P, and if P then Q, then
Q will also be true
• That’s a structural property of the argument – a property that remains even if P and Q
don’t happen to make the premises true
• Another important valid argument form is called modus tollens:
1. If P then Q
2. Not Q
3. Therefore, not P
• Another valid form of argument is called disjunctive Syllogism
1. P or Q
2. Not Q
3. Therefore, P
• A quick and useful way of testing for invalidity is called the method of counter-example
• By the definition of validity, there is no way for the conclusion of a valid argument to be false if all the
premises are true
• This means we can tell that an argument is invalid if we can think of ways for the
premises all to be true while the conclusion is false
• Example:
o The club president appoints the treasurer
o The chair of the club’s board of governors appoints the vice-president
o Therefore, the treasurer and the vice-president are appointed by different people
What if a single person serves as both president and chair of board? In
this situation the conclusion would be false
• Because validity concerns an argument’s form alone, a valid argument should remain
valid (though not necessarily sound) if we uniformly substitute one predicate or name for
another throughout the argument
• The existence of even one counter-example – either a scenario in which the actual
premises are true and the conclusion false, or a scenario on which a structurally identical
argument has true premises and a false conclusion – shows that an argument is invalid
• So when we test for invalidity, we have to actively search for counter-examples
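For propositional argument forms, the search for counter-examples can be mechanized: there are only finitely many assignments of truth values, so we can check them all. A minimal sketch (the function name `counterexamples` is illustrative, not from the text):

```python
from itertools import product

def counterexamples(premises, conclusion, n_vars=2):
    """Return every truth-value assignment on which all premises
    are true but the conclusion is false (i.e., counter-examples)."""
    found = []
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            found.append(values)
    return found

# "If P then Q" is true unless P is true and Q is false
implies = lambda p, q: (not p) or q

# Modus ponens: If P then Q; P; therefore Q — valid, so no counter-examples
assert counterexamples([implies, lambda p, q: p], lambda p, q: q) == []

# The invalid form "If P then Q; Q; therefore P" (affirming the consequent)
print(counterexamples([implies, lambda p, q: q], lambda p, q: p))
# → [(False, True)]: the row P=False, Q=True is a counter-example
```

Finding even one row in the output is enough to show the form invalid.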
Simplification
1. P and Q
2. Therefore, P
Conjunction
1. P
2. Q
3. Therefore, P and Q
Example: Eric is a doctor. Ellen is a doctor. Therefore, Eric and Ellen are both doctors
Addition
1. P
2. Therefore, P or Q
Example:
Foxes are mammals. Therefore, either foxes are mammals or cows are mammals
• Here it does not matter whether the statement added using “or” is true or false
• The resulting statement is true as long as at least one of the sub-statements is true
• If we already know P is true, this guarantees that P or Q is true – no matter what Q is
Hypothetical syllogism
1. If P then Q
2. If Q then R
Therefore,
3. If P then R
Example:
If the dollar is devalued, exports will rise. If exports rise, then unemployment will fall.
Therefore, if the dollar is devalued, unemployment will fall
Constructive dilemma
1. P or Q
2. If P then R
3. If Q then S
Therefore,
4. R or S
Example:
Either it will snow tomorrow or there will be a quiz in class. If it snows tomorrow, classes
will be cancelled. If there’s a quiz in class tomorrow, I’ll fail it. So either classes will be cancelled
tomorrow or I’ll fail a quiz tomorrow.
Destructive dilemma
1. If P then R
2. If Q then S
3. Not R or not S
Therefore,
4. Not P or not Q
• Example:
If Zainab called her mother, the answering machine took a message. And if her
brother called her mother, the line was busy. But the machine did not take a message, or
the line wasn’t busy. So Zainab didn’t call her mother, or her brother didn’t.
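Each of the argument forms above can be verified exhaustively by truth table: a form is valid just in case no assignment makes all its premises true and its conclusion false. A sketch, with an illustrative helper `is_valid`:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """A form is valid iff every assignment that makes all the
    premises true also makes the conclusion true."""
    return all(
        conclusion(*v)
        for v in product([True, False], repeat=n_vars)
        if all(p(*v) for p in premises)
    )

imp = lambda a, b: (not a) or b  # truth condition of "if A then B"

# Modus tollens: If P then Q; not Q; therefore not P
assert is_valid([lambda p, q: imp(p, q), lambda p, q: not q],
                lambda p, q: not p, 2)

# Disjunctive syllogism: P or Q; not Q; therefore P
assert is_valid([lambda p, q: p or q, lambda p, q: not q],
                lambda p, q: p, 2)

# Constructive dilemma: P or Q; if P then R; if Q then S; therefore R or S
assert is_valid([lambda p, q, r, s: p or q,
                 lambda p, q, r, s: imp(p, r),
                 lambda p, q, r, s: imp(q, s)],
                lambda p, q, r, s: r or s, 4)
```

Because validity is a structural property, these checks never mention what P, Q, R, or S actually say.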
Truth conditions
• The term reasonable will be used to apply to individual statements rather than
arguments
• A reasonable statement is one with sufficient evidence, all things considered, to render it
acceptable in a given state of information
• Truth is often formally understood to be a discrete concept: either a statement is true or it
is false, with no intermediate cases
• Bivalent – having only two possible truth values
• An important class of statements is those that must be true, either because they
are truths of logic and mathematics, or because they are true by definition in a broader
sense
• Contingent truth – things might have turned out differently
• Necessary truths: they would be true no matter how things might have turned out
• Premises can be treated as true by definition only if the acceptability of the definition
itself is not contested
• Necessary and sufficient conditions
• Simple (or atomic) statement – a sentence that does not contain another sentence as
one of its parts.
o Example: “my dog has fleas” and “continents drift”
• Conjunctive statement, or conjunction – a compound statement containing two or more
sub-statements (conjuncts), usually joined with the words “and” or “but”. A conjunction is
true if and only if both of its conjuncts are true
• Disjunctive statement, or disjunction – a statement of the form “P or Q” is true just in
case at least one of P and Q is true
o A compound statement containing two sub-statements (disjuncts), joined with the
word “or” or near equivalents like “alternatively.”
o A disjunction is true if and only if at least one of its disjuncts is true
o “or” can be understood inclusively or exclusively
Inclusive “or” is to say that at least one of the listed disjuncts is true
Exclusive “or” applies when one and only one of the disjuncts is true
Best to treat “or” inclusively, and regard the exclusive “or” as an artifact
of implicature in some contexts: a further interpretation that goes beyond
the strict and literal meaning of the words
o Particularly easy for disjunctive statements to be true
• Conditional statements – a statement of the form “if P then Q” is true unless P is true but
Q is false
o Conditionals are sentences with an if-then form
o P = antecedent (the “if” part); Q = the consequent (the “then” part)
o When we use a conditional statement, we intend to convey some sort of
explanatory relation between the antecedent and the consequent – for instance,
that P is what made it the case that Q
o Basic indicative conditional if P then Q
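The truth conditions above translate directly into Boolean functions; a minimal sketch (function names are illustrative):

```python
def conj(p, q):
    # Conjunction: true iff both conjuncts are true
    return p and q

def disj(p, q):
    # Inclusive "or": true iff at least one disjunct is true
    return p or q

def xdisj(p, q):
    # Exclusive "or": true iff exactly one disjunct is true
    return p != q

def cond(p, q):
    # Indicative conditional "if P then Q": true unless P is true and Q is false
    return (not p) or q

# A conditional with a false antecedent comes out true
assert cond(False, True) and cond(False, False)
# Inclusive and exclusive "or" differ only when both disjuncts are true
assert disj(True, True) and not xdisj(True, True)
```

This makes vivid why it is particularly easy for (inclusive) disjunctions to be true: only one row of the truth table makes them false.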
Complex statements
• Value-theoretic statements – statements involving moral concepts like right and wrong,
good and evil, and statements involving aesthetic notions like beauty and ugliness
• Many people believe that moral statements have no real truth conditions
• Tracing the important differences between value theoretic discourse like aesthetics or
morality and other discourses is a subtle matter; in the absence of such subtlety it is
unwarranted to dismiss moral and ethical claims as meaningless, or as lacking truth
values
• An argument is cogent just in case it makes its conclusion rationally credible – that is,
rationally believable
• A strongly cogent argument provides a high degree of justification for its conclusion,
while a weakly cogent argument might provide only a tentative or easily overturned
justification for its conclusion
• A deductively sound argument is fully cogent by these definitions: with true premises and
valid structure, it demonstrates the truth of its conclusion
• Arguments that are invalid simply don’t work; they are logical fallacies
• These are arguments presented as valid, whose success would require their validity,
but which in fact have invalid forms
• Enthymemes – arguments that are technically invalid in the sense that they have premises
that are left unstated, which the audience is supposed to understand from the context
Varieties of ampliativity
Inductive reasoning
• In general, the larger and more representative the inductive base, the stronger the
argument for the conclusion
• Cogency – an argument is cogent when it provides sufficient grounds for the rational
belief of its conclusion
• Key differences between deductive and inductive arguments:
o Deductive arguments
Satisfy, or aim to satisfy, the definition of validity
Do not strictly become more valid or more sound by degrees
If sound, remain sound no matter what other premises might be added
o Inductive arguments
Are strictly deductively invalid, being ampliative
Lend only a degree of support to their conclusion; the degree can vary
Are sensitive to subsequent information that may be added
• As new information comes in, ampliative arguments can be weakened or overturned
altogether
• Ampliative reasoning is defeasible: no matter how confident we may be in the cogency
of an inductive argument, in principle it remains possible that some new information will
weaken or overturn it
Abductive reasoning
• Context of discovery – might include any number of arational, accidental, and sheer
dumb luck explanations for someone’s having that “aha!” judgement
• Context of justification – in which we adduce the evidence that makes it
reasonable to regard the abductive judgement as one of the successes
• Two kinds of related mistakes in reasoning associated with this distinction are to
undervalue and to overvalue a claim on the basis of its context of discovery, overlooking
the role of the justification provided for the claim
Analogical arguments
Causal reasoning
• Perhaps the most important sort of empirical reasoning is that relating to causes and
effects
• Many of our inductive inferences aim at identifying a cause for oft-observed events;
many of our abductive inferences identify some hypothetical cause as the best
explanation for previously unexplained events
• Mill’s methods are useful for identifying causes in complex circumstances. Useful for
beginning to distinguish between intuitive causes and mere correlations, or patterns of
co-occurrence among various factors
• There are five basic methods:
o Method of agreement hinges on the idea of factors common to a range of
circumstances
Suppose some effect E is produced in two situations S1 and S2. If there is only
one factor F common to both, then F is the cause of E
o Method of difference
If S1 and S2 share every factor except that S1 contains F and S2 does
not, and E occurs only in S1, then F is the cause of E
This is the rationale behind having a control group in experimental studies
o Joint method of agreement and difference – when comparing a range of
complex circumstances, we look for a pattern that has some factor
common to all the circumstances in which the effect occurred and absent
from all the circumstances in which the effect didn’t occur
o Method of concomitant variations tells us to look for co-variation, or
coordinated changes, in the degree to which some factor is present and the
degree to which an effect is present in various circumstances
Intended to apply when fate or research budget doesn’t permit us to find
or construct distinct circumstances in which properties or effects are
entirely absent
o Method of residues
If we know that a particular range of factors causes a particular range of
effects, and we notice that all those factors minus F cause all those
effects minus E, then F is the cause of E
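The methods of agreement and difference can be sketched as set operations, modeling each situation as a set of factors (the data below are hypothetical):

```python
def method_of_agreement(situations_with_effect):
    """Candidate causes: factors common to every situation
    in which the effect occurred."""
    return set.intersection(*situations_with_effect)

def method_of_difference(with_effect, without_effect):
    """Candidate causes: factors present when the effect occurs
    and absent when it does not, all else being equal."""
    return with_effect - without_effect

# Agreement: two situations producing effect E share only factor F
s1 = {"F", "A", "B"}
s2 = {"F", "C", "D"}
print(method_of_agreement([s1, s2]))  # → {'F'}

# Difference: treated group shows E, control group (same but for F) does not
treated = {"F", "A", "B"}
control = {"A", "B"}
print(method_of_difference(treated, control))  # → {'F'}
```

The sketch also makes the limits plain: the code cannot tell us how to individuate situations or which factors to list in the first place.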
• What do Mill’s methods tell us? They codify intuitions about cause and effect that are,
or ought to be, obvious in principle but are surprisingly easy to forget in practice
• They don’t tell us how to individuate situations, nor do they tell us which factors in a
situation are potentially relevant
• Sometimes important to distinguish proximate causes from remote causes
• If we imagine a chain of causes over time leading up to event E, the first items in the
chain are the remote causes of E, while the events just prior to E are proximate causes
• Efficient causes – the direct event leading to some outcome
• Structuring causes – the framework of factors that enables a chain of efficient causes to
occur at all
States of information
• State of information – the total evidence at our disposal when we consider the
proposition or some course of action
• Common for people to talk and think as if the rational believability (credibility) of some
proposition is an all or nothing matter
• As our state of information improves gradually, the rational credibility of some belief can
increase gradually
Defeasibility
• Our beliefs can be held on the basis of a current state of information without greatly
constraining what it would be rational to believe under some other (more complete) state
of information
• Defeasibility is a key feature of empirical beliefs
• In the empirical case, a belief that was first justified by the available evidence can later
be overturned without this meaning that the apparent evidence wasn’t really evidence in
the first place
• It might just mean that as more evidence accumulated, it came to point in a different
direction
• Another key element of evidential reasoning is the ability to recognize when we are in a
neutral state of information, and to condition our judgements appropriately
• Fallacy of equivocation – in which one illicitly uses a single term in two different ways
• Sometimes a state of information is neutral with respect to some proposition, not
because we have no information, but because the information we have seems to divide
roughly equally between supporting and undermining the proposition
• The upshot is that thinking critically from imperfect evidence requires monitoring our
tendencies to misread our own states of information – that is, it requires reflecting not
just on what evidence we possess, but on how to weigh that evidence
• A first step here is to recognize and employ forms of speech that express the modest
limits of what we may know in a situation
• When the evidence with respect to some statement P is impoverished or seems equally
balanced, and when no action that assumes either P or not-P is absolutely necessary,
the reasonable thing to do is to suspend judgement
Proving a negative
• As long as you stick to one conception of proof or the other, there is no principled reason
to say that one cannot prove a negative claim
• Broadly ampliative standards of proof are in principle defeasible
• Within the standards for inductive proof, one can often prove negatives with much the
same confidence as one can prove positives; most defeating conditions for evidence in the
negative case are defeating conditions in the positive case as well
• The real argumentative issues usually turn out not to be whether one can always or ever
prove a negative, but what one should conclude in the absence of evidence, and who
has the burden of proof when a negative claim is offered in an argument
• The specifics of such claims, their relations to our particular and theoretical knowledge,
and the context of communication in which they are uttered are all relevant to whether
we can prove or even need to judge them as sufficiently probable to accept
• What is asserted often goes well beyond the content of the sentences uttered
• Typically a reasonable audience will consider not merely what was said, but the point of
saying it
• Good critical reasoning requires some reflection on the range of linguistic and extra-
linguistic devices implicated in the communication of arguments and, occasionally, in the
commission of reasoning errors
• Many purposes of language are performative – that is, they result in the accomplishment
of some act rather than just describing it
• Our uses of language extend to include issuing commands, asking questions, and
making assertions
• Commanding, questioning and asserting are different kinds of speech-act
• Performing these linguistic acts is a matter of employing the appropriate kind of
sentence, since sentences have various grammatical moods that typically correspond to
different kinds of speech acts
• Imperative sentences are used to give orders
• Interrogative sentences are used to ask questions
• Indicative or declarative sentences are used to assert
• Assertions can be made without employing indicative sentences (most common example
is that of rhetorical questions)
• Rhetorical questions may be framed rhetorically for a reason: the effect of putting the
premise in the form of a question is to oblige the audience to look for evidence against
the claim, rather than the speaker providing evidence in its favor
• This tactic of shifting the burden of proof often indicates that there is little or no good
evidence to be given in support of the premise
• The premises and even conclusions can be implicit – that is, not written out in any form
at all, but intended to be obvious from the context
• Rhetorical questions and the like are part of a general way of indirectly setting out a
premise or a conclusion, known as conversational implicature
• This is the practice of using an utterance to convey a meaning beyond its literal meaning
• A careful reconstruction of an argument containing apparent uses of implicature should
explicitly note them, and either choose the most plausible interpretation given the
context, or analyze the argument twice – once with each interpretation
• Presupposition – a proposition that may not be explicit in some statement, but which
must be granted if the statement is to be meaningful or felicitous
Rhetorical effects
• Concept of rhetoric is sometimes defined broadly as the study and use of effective
communication, including cogent argumentation
• Often rhetoric is distinguished from strict considerations of truth, accuracy, validity, and
soundness
Vagueness
• When vagueness is understood as mere imprecision, the critical thinking issues it raises
are essentially those raised by weasel words
• Calling a concept vague can be a way of making a technical observation about the
puzzling logic that characterizes statements employing the vague term
• In this sense, vague terms are those subject to Sorites reasoning
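The puzzling logic of Sorites reasoning is, in effect, iterated modus ponens on a tolerance premise. A sketch with illustrative numbers, assuming 10,000 grains count as a heap and that removing one grain never destroys a heap:

```python
# Premise 1: 10,000 grains make a heap.
# Premise 2 (tolerance): if n grains make a heap, so do n - 1 grains.
heaps = set()
n = 10_000
while n >= 1:
    heaps.add(n)  # each pass applies the tolerance premise once more
    n -= 1

# Iterating the tolerance premise all the way down yields the
# paradoxical conclusion that a single grain is a "heap".
assert 1 in heaps
```

The loop just makes vivid that nothing in the tolerance premise tells us where to stop, which is exactly the blurred-boundary character of vague predicates.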
• There are two broad kinds of critical reasoning issues associated with vague predicates
o It is possible to reason poorly by disregarding vagueness, but also by over-
interpreting it
• The former kind of mistake comes when we fail to allow for blurred boundaries and
partial results, treating situations as if they must be definitely one way or definitely the
opposite way
• An important aspect of understanding vague language is distinguishing between being
unable to say where a difference lies and there being no difference at all
• Even though it might be difficult or impossible to determine precise truth conditions for a
statement involving vague terms, the statement may still be reasonable on broader
grounds
Ambiguity
• Direct quotation – using actual quotation marks, is a particularly powerful rhetorical tool
since it purports to let the quoted person speak for herself
• Indirect quotation – in which the gist or effect of someone’s utterance is presented, is
useful because it can be made sensitive to the context of utterance
• The most obviously misrepresentative form of direct quotation is the misquote – this just
amounts to attributing words in quotation marks to someone who did not actually use
those words, or at least a direct translation of them
• Misquotation can result from deception or an honest mistake, but the repetition of the
misquote very often is linked to its potential to cast a negative light on one side in a
personally, politically or religiously charged issue
• Misattribution – is another form of misquoting – it occurs when one speaker’s words are
attributed to another
• Another pitfall in the argumentative use of quotations is the out-of-context quote, or
quote-mining
• This is strictly distinguished from misquoting, in that a mined quote need not get the
speaker’s words wrong
• A mined quote is a correctly quoted sentence or phrase that is reported without
surrounding context that changes or qualifies its meaning and is therefore falsely
presented as characteristic of the speaker’s views
• In the case of shorter phrasal quotes, quote mining involves mixed quotations – partially
direct and partially indirect
• A stitched together sort of mixed quotation is a red flag, since this can amount to quote-
mining combined with unquoted interpretations of the speaker’s alleged views
• Finding the same quote reproduced elsewhere is quite weak evidence for its accuracy
and representativeness
• Charity – the idea is that one should always engage, not necessarily the argument
exactly as spoken or written, but the best version of the argument that is roughly
consistent with the speaker’s words and recognizable intent
• The principle of charity itself does not always dictate one particular kind of response to
an argument
• When interpreting or reconstructing someone’s argument, we are sometimes in the
position of being able to trade obvious invalidity for obviously false premises; in this case
it is difficult to say what charity requires
• Some styles of argumentation have more characteristic properties due to their particular
content or to the methods by which they work
Moral arguments
• common for the naturalistic fallacy to occur in negated form, moving from premises about
what is unnatural to defend the claim that some human action is wrong
• it is substantive but morally irrelevant to say that something is unnatural, if this means
only that it is not characteristic of nature (if it is relatively rare)
• included in moral argument is reasoning that attempts to assess responsibility for
actions, especially for the purposes of assigning praise and blame
• much argument of this sort incorporates non-moral reasoning
• whether some outcome was intended or accidental is normally a relevant factor for moral
argument
• judging someone’s actions irresponsible or negligent is often a matter of emphasizing
the actual consequences of their actions over the intention with which they were
performed
• photographs and video clips can function as premises, conclusions, or even as implicit
arguments themselves
• the message communicated or the effect achieved by a visual image is a matter of the
objects or events depicted in the image
• the message can be more or less precise depending on such factors as the number,
kind, arrangement, and presentation of the objects
• a basic means of communicating a point through imagery is by this technique of
picturing two things together, known as juxtaposition
• red flag of photographs when used as evidence or to shape opinions
o they entirely erase the temporal context
• a range of interests other than – and possibly inimical to – the goal of accurate
representation of events is almost always part of the process by which photographic or
videographic information reaches us
• when we deal with video imagery, cropping can come to take on the character of visual
“quote-mining”, with bits and pieces stitched together to create a misleading impression
Chapter 4 – Fallacies
Logical fallacies
• the logical fallacy can be given formal or quasiformal definitions with regard to argument
structure
• any structurally invalid argument is logically fallacious; its premises do not suffice to
logically determine the truth of its conclusion
• logical invalidities are the largest class of fallacies, since both deductive and ampliative
arguments that fail to bear out their conclusions will count as invalid
• non sequitur (“it doesn’t follow”) or ignoratio elenchi (an argument with an irrelevant
conclusion)
o ex: pumpkins are orange; therefore, professional athletes are overpaid
Conditional fallacies
• a conditional premise tells us that the truth of the antecedent is sufficient for the truth of
its consequent. But that doesn’t mean the truth of the antecedent is necessary for the
truth of the consequent
Scope fallacies
• quantifier scope fallacy – this name for the fallacy indicates that it consists in a
misordering of a universal quantifier (all, every, each) and an existential quantifier (some,
a, the, one), resulting in an invalid inference
• when an existential quantifier falls within the scope of a universal quantifier, we cannot
validly rewrite this with the universal quantifier falling within the scope of the existential
quantifier
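The scope difference can be made concrete with a toy model: the ∀∃ reading (“everyone has some parent”) does not entail the ∃∀ reading (“someone is the parent of everyone”). The names and data below are hypothetical:

```python
# Hypothetical toy data: (parent, child) pairs
parent_of = {
    ("Ann", "Bob"),  # Ann is a parent of Bob
    ("Cal", "Dee"),  # Cal is a parent of Dee
}
people = ["Bob", "Dee"]
adults = ["Ann", "Cal"]

# ForAll-Exists: every person has some parent — true in this model
forall_exists = all(
    any((a, x) in parent_of for a in adults) for x in people
)

# Exists-ForAll: some single person is the parent of everyone — false here
exists_forall = any(
    all((a, x) in parent_of for x in people) for a in adults
)

assert forall_exists and not exists_forall
# So rewriting the universal-over-existential order as
# existential-over-universal is an invalid inference.
```

The model is a counter-example in miniature: the premise form is true and the conclusion form false, so the inference pattern is invalid.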
Equivocation
Evidential fallacies
• a good evidential argument shows its conclusion to be reasonably likely – with all the
vagueness and context dependence that “reasonably likely” suggests
• evidential fallacies are defined in terms of the failure to meet this aim
• a fallacious evidential argument must also be logically unsound, but the diagnosis of its
failure typically provides more information than this fact alone
• pointing out that an inductive argument is unsound doesn’t explain why it isn’t even a
good evidential argument
Overgeneralizations
• The broadest kind of evidential fallacy is that of drawing a general inference too strong
for the specific evidence in hand
• Framed in such imprecise terms, this makes for plenty of uncertain cases, examples that
are only borderline fallacies
• Hasty generalizations
o Ex: Ted is always saying something stupid
o This is too strong a conclusion even if those are the only two times I met Ted
• Sweeping generalizations
o Ex: seeing consecutive news reports about very corrupt third world governments
and concluding that all or most third world governments are very corrupt
o Having seen only a small sample, there is reason to think the sample was not
random
• It can be important in diagnosing a hasty generalization to have some sense of the size
and potential variations in the whole set of things about which one is generalizing
Conspiracy theories
Vicarious authority
• Much of our information about the world is based on testimony from sources we take to
be reliable
• It cannot generally be a mistake to justify some claim by quoting the opinions of experts.
But there are many ways of getting such a justification wrong – ways of giving fallacious
appeals to vicarious authority
• Argument from authority, a species of a broader class of errors called the genetic fallacy:
o Evaluating a claim on the basis of irrelevant facts about its origins, rather than on
the basis of the evidence for it
o If the claim is justified by appeal to a proper authority, the justification may be
evidentially cogent
• People and groups are authorities only relative to a field
• In order to count as an authority for the purposes of some specific claim, the person
cited ought to have recognized expertise on that particular topic
• If the person cited as an authority has an appropriate expertise then the appeal to
authority establishes at least a default reason to regard the claim as correct
• A successful appeal to authority is one that implicates the received view among those
best qualified to judge the matter (ex: citation of a widely used university text)
• If we discover that a cited expert holds a view on the claim in question that is marginal
relative to other experts in that field, this is sufficient to greatly weaken, and perhaps
nullify altogether, the degree of justification that expert authority confers on a claim
• Two main ways for argument from authority to turn into a fallacy
o Citing an expert with unorthodox views relative to most other experts
o Having a mismatch between the claim to be supported and the expertise of the
person cited
• The sort of fallacious argument from authority that conflates areas of expertise is often
couched merely in reference to the authority’s intelligence
o The intended inference is that only someone “smarter” than an authority is
qualified to reject any belief they held
• Experts themselves may have an overdeveloped sense of the breadth of their insight
• Relevance of training and experience to a particular question can also be hard to judge
simply because the relations between topic and expertise can be very fine-grained
• Mismatched expertise does not even provide weak default justification
• There is no good reason to think that an expert in field X will have any special insight
into field Y
• Magical thinking – a blanket term for biases toward seeing a causal connection where
none exists
• Simplest kind of magical thinking is the fallacy of Post Hoc Ergo Propter Hoc: in effect
meaning “after this, therefore because of this”
o Ex: 1. A black cat crossed my path, and then I got hit by a bus
therefore,
2. I got hit by a bus because a black cat crossed my path
• Here only the relation of temporal succession serves as the basis for inferring a causal
connection
• Law of similarity – this is a label for the tendency of people – sometimes one sees
primitive or pre-technological people listed as the main culprits – to conclude that factors
similar to some effect must have the power to cause the effect
• The problem with such reasoning is two-fold:
o Similarity is cheap; there are innumerable ways for one thing to strike us as
similar to another
o There is no good reason to think that what strikes us as similar to an effect is in
any way disposed to be a cause for that effect
• Reasoning of this kind poses the most significant threat to the proper use of statistical reasoning and the methods of good science
• One of the most common and important uses of evidential reasoning is to detect and
characterize correlations – between objects, trends, or more abstract phenomena
• Most common way of misdiagnosing correlations is known as the fallacy of multiple
endpoints
• Most objects and events have enough traits that as a matter of chance alone there will
be unusual relations between objects in a collection
• If we don’t specify in advance which sort of properties we’re interested in, then it is
usually easy, even trivial, to find properties that some things have in common
• There is a relatively high probability that by randomness alone a sample will show commonalities along some dimension of comparison or other
• The multiple endpoints fallacy is committed when we first gather data and then look for significance, instead of first deciding on a hypothesis and then testing it
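The ease of finding chance commonalities can be seen in a small simulation (a sketch with made-up parameters: 30 subjects, 20 random yes/no attributes):

```python
import random

random.seed(1)

# Hypothetical illustration: 20 random yes/no attributes measured on 30
# subjects. With no pre-registered hypothesis, we scan every attribute pair
# for "striking" agreement -- and chance alone usually supplies some.
n_subjects, n_attrs = 30, 20
data = [[random.randint(0, 1) for _ in range(n_subjects)] for _ in range(n_attrs)]

best = 0
for i in range(n_attrs):
    for j in range(i + 1, n_attrs):
        agree = sum(a == b for a, b in zip(data[i], data[j])) / n_subjects
        best = max(best, agree, 1 - agree)  # strong disagreement counts too

print(f"strongest chance 'correlation' found: {best:.0%}")
```

Every attribute here is pure noise, yet scanning all 190 pairs almost always turns up a pair agreeing far more often than the expected 50% — the trap of choosing endpoints after seeing the data.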
Distractors
• Some fallacies are ways of communicating that shift attention from the argument at
hand, rather than poor arguments in themselves, though they may be understood as
fallacies of relevance
• Fallacies of relevance – those introducing irrelevant factors to the real issue under
discussion
Red herring
• The issues that get raised in the process of giving an argument tend to draw us into
assertions and objections that are related to the topic, but not precisely relevant to the
argument
• Statements or objections that lead the discussion away from the key point are called red
herrings, especially when there is some suggestion that this is done deliberately
Straw man
• This term refers to the fallacy of misrepresenting an argument or a view in order to refute
a dumbed down version of it
• This fallacy typically results from ignoring the importance of charity in reconstructing or
interpreting the arguments of one’s fellow discussant
• Many different phenomena count as instances of the straw man fallacy, ranging from
deliberate deception to poor scholarship
• Direct and indirect quotations are the most obvious ways of attributing views to other
people
Ad Hominem
• A common example of well poisoning is the dismissive observation “oh, I used to think that when I was younger”
• Well poisoning is often pre-emptive, moreover, with the speaker attempting to
delegitimize an obvious or predictable objection for which there is no good reply
Confusions
• Other procedural fallacies are a matter of setting up the discussion or the argument
poorly
False presuppositions
• Presuppositions are propositions that one must grant or assume in order for a statement
to make sense
• The problem is that a statement’s presuppositions may be false, in which case
addressing the statement will commence with addressing its presuppositions rather than
its explicit content
• Another way in which assumptions or biases may find their way into a persuasive case
without being explicitly represented in the premises is through the particular choice of
words and the communicators’ assumptions about them
• Slanting language – this fallacy is committed in its most general form when a speaker
describes some situation in terms that already entail or suggest the desired conclusion
• Persuasive definition – not simply describing something in question begging terms, but
attempting to define it in such language
• A fallacy akin to persuasive definition is the No True Scotsman fallacy – a kind of equivocation between an empirical claim and a definition
• This fallacy can come in stronger and weaker versions
• A very common procedural fallacy of definition is argument by dictionary
• Dictionary definitions tend to be biased themselves: perhaps only rarely by the
publisher’s perspective on a topic, but always by the need for brevity and simplicity
• This fallacy is committed when a speaker appeals to a dictionary definition as a means
of settling a dispute
• Most famous examples of the complex question are cases of loaded questions, in which
some proposition is presupposed whether the answer replies in the negative or the
positive
• If the questioner requires such a one word answer, the question amounts to a version of
the false dilemma fallacy: it limits the options to two cases
• Other fallacies quite different from loaded questions also count as complex questions
• One version hinges on the behaviour of disjunctions in evidential and group decision
contexts
Outliers
• Outlier – something that is far from the norm or not easily categorized
• The fallacy of false n-chotomy is a very common kind of fallacy that consists of the assumption that there are only a certain number of possibilities, when in fact there are more
• Problem with an argument based on this assumption is that it simply contains a false
disjunctive premise
• Whenever we list 3 or 4 or 5 options and treat them as exhausting the alternatives it is
incumbent on us to be confident that we really have listed all the live alternatives
• To see the fallacy in its simplest form, recall the valid argument form of disjunctive
syllogism:
o P or Q
o Not Q
therefore,
o P
• The two disjuncts must exhaust the possibilities. If they do not, then it is possible for both P and Q to be false
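The validity of the form itself can be checked by brute force over all truth-value assignments (an illustrative sketch):

```python
from itertools import product

# Truth-table check: disjunctive syllogism is valid -- whenever the premises
# "P or Q" and "not Q" are both true, the conclusion "P" is true as well.
valid = all(p for p, q in product([True, False], repeat=2) if (p or q) and not q)
print(valid)  # True: the form itself is fine

# The fallacy lies elsewhere: if P and Q do not exhaust the live options,
# the first premise "P or Q" is simply false, so the sound-looking argument
# rests on a false disjunctive premise.
```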
• One reason this fallacy is easily committed is that it can reflect a rush to judgement from someone already convinced of the conclusion, so that one too hastily takes one’s own position as the only remaining option once some other view is falsified
• The fallacy of composition and the fallacy of division are duals; some cases are equally well diagnosed using either explanation
• Both fallacies are a matter of the relation between a whole and its parts
• The fallacy of composition occurs when we infer something of the form “the parts each (or mostly) have property X; therefore, the whole has property X”
• The fallacy of division runs the other direction: “the whole has property X; therefore, its
parts have property X”
• Both directions of inference are invalid
• One of the great intellectual advances of humanity was the set of discoveries that enable
the precise quantification of facts, situations, or data
o Using numbers and numerical concepts to characterize things
• In its most basic form this process began with the invention of number systems, enabling
us to reason in the abstract about counting and measurement
• Careful and appropriate quantification of data can greatly assist our understanding of
complex situations
• Associated with numerical reasoning is a substantial class of fallacies
• One of the fundamental problems in this domain is the interaction of two phenomena:
o Innumeracy (the arithmetical equivalent of illiteracy)
o The belief that innumeracy is intellectually acceptable
• The attempts at persuasive reasoning that we encounter from every source of information exploit quantitative reasoning, in particular in claims about rates, percentages, averages, and other statistics
• To be unable to understand these claims critically is to be caught in a dilemma:
o Either to trust claims that are framed in mathematical terms we cannot evaluate
ourselves
o Or to generally reject them because we can’t evaluate them ourselves
• The most common uses of numerical reasoning in popular discourse involve
percentages, rates, and simple averages
• What these uses have in common is that each is a way of taking some complex state of
affairs or state of information and reducing it to a single representative number that is
supposed to encode what’s important about that information
Percentages
• Only by returning to the absolute numbers can we meaningfully combine the data
• It is a mistake to calculate percentages from other percentages, so any suggestion that a
figure has been derived in this way is an immediate red flag
• Faulty reasoning about percentages does not require the comparison of different
samples
• Changes over time in a single measured group must be handled carefully too, if they are
represented by percentages
• The real danger in making such comparisons is that we may too hastily overlook what
the percent claim means
• The concept of a percentage has become so widely employed that it has assumed a
non-literal conventional meaning in some contexts
• Whenever possible, it is worth confirming that a claim phrased in a percentage really
bears the proper relation to a ratio of 100
• Applying any sort of arithmetical analysis to a purely conventional percentage claim is
likely to lead to a false sense of precision and representativeness
• Sometimes percentage claims are meaningful only if understood metaphorically
• One of the most common uses of percentages is to express not just changes but rates of
change
• As absolute amounts change, and as base rates change, the potential for confusion in
comparing percentage rates is substantial
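A quick illustration of why percentage changes resist naive comparison (the price is hypothetical): equal-sized percentage moves on different bases do not cancel.

```python
# Percent changes do not cancel or add across different bases.
# A price goes up 50%, then down 50% -- it does not return to where it started.
price = 100.0
price *= 1.50   # +50% of 100 -> 150
price *= 0.50   # -50% of 150 -> 75
print(price)    # 75.0, a net 25% loss
```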
• One source of reasoning errors about rates and percentages is the fact that some
apparently simple claims about rates are subject to ambiguity
• A further set of logical pitfalls associated with reasoning about rates derives from the
difficulty of keeping straight what level is being described whenever we consider claims
about higher order rates
o That is, rates of rates
• An evidential fallacy that frequently arises in reasoning about rates is linear projection
• This species of hasty generalization is the assumption that a rate observed over some
specific duration must extend into unobserved territory as well – either the past or the
future
• Alarmist linear projections are red flags, obliging us to look carefully at the context to
determine whether the assumption of a fixed rate is a reasonable one
• Sometimes factors are related in a fixed linear manner, but they might also be related
exponentially, in irregular and more complex ways, or even chaotically (unpredictably)
• It all depends on the details of the mechanisms relating the factors, and those details
can vary greatly at different times and different levels
• Some factors are related in a linear fashion only above or below certain thresholds
• Percentages are not raw or absolute scores, unless the raw data happens to be out of
100
• Percentages are at least representations of the absolute scores (70% = 21/30 on a quiz)
• Percentile – a term often used to rank values numerically by how they compare to other values
• Percentiles are inherently comparative within a group
• The idea of an n-ile ranking can be more coarse grained than percentiles. Instead of
listing where some value falls relative to others on a scale of 100, we can instead break
down a set of things into quartiles (4 even groups), or quintiles (5 even groups)
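Python's standard library can compute these cut points directly; a small sketch with made-up quiz scores (the data are assumptions):

```python
import statistics

# Quartiles split a sorted data set into four even groups; the three cut
# points below mark the boundaries between those groups.
scores = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90, 95, 98]
q1, q2, q3 = statistics.quantiles(scores, n=4)  # n=5 would give quintiles
print(q1, q2, q3)  # q2 is also the median of the data
```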
• It can be easy to confuse one concept for another when percentages and percentiles are
combined
• The use of definite numbers to rank people, institutions, or products can be unprincipled
or highly misleading
• Whenever we assign some objects a rank ordering, we are again dealing with a
representative number, with all the loss of information that this implies
• Ordinal – means first, second, third, and so forth
• Cardinal numbers are one, two, three ...
• Whenever we look at a comparison of ordinal rankings, the potential for reading too
much into the numbers is particularly strong
• Whenever we are given an ordinal ranking there is a danger of just this kind of thing
occurring:
o That both the position of an entry on the list and the use of seemingly hard numbers to frame the statement will convey the sense of a major difference where there is little or no difference in reality
• A single numerical ranking of any group of things having more than one standard of
goodness presumes that there is some way of sorting these standards out into a single
ordering:
o Either reducing them to a more basic standard or taking some combined
measure
Pseudo-precision
• Claims can gain rhetorical power through the use of numerical expressions,
piggybacking on the perceived clarity and certainty of mathematics
• One way of exploiting this effect is to state a numerical claim in highly precise terms,
heightening the perception that a great deal of research and care underwrites the
statement
• Sometimes the claim could not possibly be warranted to the degree of precision it
displays
• Such claims are merely pseudo-precise
• This can include framing a statistical statement in terms more precise than the “noise” (rounding off of numbers, and so forth) makes reasonable, and overstating the numerical precision of calculations based on measurements known to have been much more coarse-grained
• There are a range of visual tricks that can be employed in the presentation of data
through a chart or graph, amounting to visual rhetoric
• Without strictly lying, one may present graphed information so that it inherits the
overtones of definiteness and objectivity that quantificational claims often have, even
though the style of presentation makes the graph at least powerfully misleading
• The range of numbers chosen for the axes can make all the difference between changes
that are barely noticeable and changes that look alarming and sudden
• A narrow range of values on a chart’s axis can function as a kind of microscope, making small fluctuations appear large
• By choosing an outrageously large scale one can make changes that are significant by
some independent standard appear trivial
• Charts and graphs can be vehicles for the display of most pitfalls in reasoning and
communication
• For example, a graph that used fixed spacing to indicate ordinal rankings would visually
mislead us into inferring a substantial difference between first and worst, in just the way
we saw allusions to ordinal rankings may do on their own
• the term average is triply ambiguous between the three interrelated concepts of the
mean, the median, and the mode
• for each definition, the average is supposed to be a single case or single value that can
represent the sample well enough for the rational purposes at hand
• the arithmetical mean represents a set of values as a ratio between their total value and
the size of the set
o it is calculated by adding up the values of a sample and dividing the sum by the
number of elements in the sample
• when someone refers to an average without further qualification, they very often have
the arithmetical mean in mind
• the median represents a group of data points by indicating the midpoint of its distribution
• the median value in the group is the one that has as many elements greater than it as
are less than it
• if there is an even number of values and hence no single midpoint element, the median
is taken to be the arithmetical mean of the two central values
• the mode is representative by way of being the most commonly occurring value in the
set
• depending on what we are interested in measuring, or depending on the point someone
is trying to convey, these different conceptions of a representative value can be used to
very different effects
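The three averages can come apart dramatically on skewed data; a minimal sketch with hypothetical incomes:

```python
import statistics

# A skewed income sample (in thousands): one large value pulls the mean up,
# while the median and mode stay closer to the typical case.
incomes = [30, 30, 35, 40, 45, 50, 300]
print(statistics.mean(incomes))    # about 75.7 -- higher than 6 of 7 values
print(statistics.median(incomes))  # 40
print(statistics.mode(incomes))    # 30
```

Someone arguing "the average income here is over 75" and someone arguing "the average income is 30" could both be reporting this same data honestly, which is why an unqualified "average" warrants a follow-up question.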
• people explicitly or implicitly using these broadly statistical concepts can confuse them
by mere carelessness or incompetence, rather than by any intent to deceive
• averages can be used and abused even in contexts where all the information is in our
possession
• in these cases, we instead examine a sample drawn from the larger group that we want
to know about and then take our conclusions about the sample to apply to the whole
population
• this raises a family of extra problems regarding whether any such mean, median, or
mode is representative
• we’re trying to infer the unobserved from the observed
Representative sampling
• besides the need to make decisions, weigh risks, and predict outcomes, we use
probability and statistics as powerful investigative tools that can reveal correlations and
causes among various events or conditions
• we need to be confident of our ability to say what the averages and trends are in the first place
• the general problem is that we can easily pick an unrepresentative sample from the
population at the outset
• if this happens, then no matter how careful our reasoning about the sample, it will be
misleading with respect to the population
• two broad ways in which this can happen:
o by incorporating some bias into the selection of the sample (selection bias)
o by being unlucky
• a selection bias commonly occurs in informal polling (and sometimes in serious polls)
whenever the sample is simply drawn from the people who want to be heard on that
particular issue
• sometimes a sample exhibits a bias because it was chosen for the purpose of creating
controversy (ex: call in shows)
• any claimed trend might be an artifact of a trimmed sample; certainly a sample range or
time period that isn’t a conventional round number is a red flag
• a large component of any science of measurement within a particular field is the study of
how to get non-biased samples in light of the challenges specific to that field
o often the best tool one can use is one’s general knowledge and imagination, since selection biases can be complex and subtle
o it is useful to be able to think creatively about how the selection method might be
biased
• the more people in the sample, the more likely it is that we’ll get a proportional
representation of the population
• easiest way to see this is pictorially, using the normal curve, or bell curve
• the shape of the curve indicates how the data in a sample are distributed
• the standard deviation basically tells us the shape of the curve
o a small standard deviation means a narrower curve (the data clusters together in
the centre)
o a larger one indicates a flatter curve (the data spread out relative to the mean)
• if the data fit a normal curve, then approximately 68% of the cases fall within one standard deviation of the mean, while approximately 95% of the cases fall within 2 SD of the mean, and approximately 99.7% fall within 3 SD
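The 68–95–99.7 rule can be checked by simulation (an illustrative sketch using Python's `random.gauss`):

```python
import random

random.seed(0)

# Draw 100,000 samples from a normal distribution (mean 0, SD 1) and count
# what share fall within 1, 2, and 3 standard deviations of the mean.
xs = [random.gauss(0, 1) for _ in range(100_000)]
for k in (1, 2, 3):
    share = sum(-k <= x <= k for x in xs) / len(xs)
    print(f"within {k} SD: {share:.1%}")  # roughly 68%, 95%, 99.7%
```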
• people are so interested in assembling and comparing statistics because they tell us
why things are the way they are
• correlation – two phenomena or variables that move together, that is, they co-vary in
predictable ways across different circumstances
• by looking at correlations (or their absence) and theorizing about possible explanations,
we can use statistical and probabilistic reasoning not just to understand complex
relations but to intervene and affect them
• null hypothesis – is the assumption that any correlation observed between the
phenomena is purely random or accidental
o its character as an assumption is crucial; if the evidence does not force the rejection of the null hypothesis, the response is not to conclude that there is no real effect
o the only conclusion is that the null hypothesis is undefeated
o the stronger conclusion that there really is no effect or no correlation normally requires looking at a series of studies, all of which fail to reject the null hypothesis
• a simple sort of correlation is co-occurrence, or just being found together
• positive and negative correlations
• how good do our grounds need to be for rejecting the null hypothesis?
o The normal scientific practice is to make it easy for the null hypothesis to “win”
• The term “P-value” denotes how probable it is, if the null hypothesis were true, that you would get a sample at least as far from what the null hypothesis predicts as the one observed
• The null hypothesis is regarded as undefeated as long as the P-value remains above the 5% (or 1%) threshold; only when the observed data would be at least that extreme less than 5% (or 1%) of the time under the null hypothesis can the correlation be said to hold with the appropriate confidence
• The null hypothesis fails when the correlations we observe are highly likely not to be
merely random
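The idea behind a P-value can be made concrete by simulation; a sketch with hypothetical data (62 heads in 100 flips, null hypothesis: a fair coin):

```python
import random

random.seed(42)

# Simulation-style P-value: how often would chance alone produce a result
# at least as extreme as the one observed, if the null hypothesis were true?
observed = 62        # heads observed in 100 flips
trials = 10_000
extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if abs(heads - 50) >= abs(observed - 50):  # two-sided: 62+ or 38- heads
        extreme += 1
p_value = extreme / trials
print(f"p ≈ {p_value:.3f}")  # about 0.02: below 0.05, so reject the null
```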
• Confounds – alternative explanations for the observed data
• The classic example of a confound for a causal explanation is a common cause
o That is, X and Y may be correlated because they are both caused by Z, and not
because X causes Y or vice versa
• Rejecting the null hypothesis does not in itself mean inferring a particular efficient-causal
explanation of the data. It only means that the observed correlation is genuine, that it
ought to be regarded as predictive of future situations that are relevantly similar
• When we draw (non-deductive) inferences from some set of data, we can only ever be
confident in the conclusion to a degree
• Statistical significance – a measure of the confidence we are entitled to have in our probabilistic conclusion
• Confidence interval – the range of values within which we can be statistically confident (to some specified degree) that the true value falls
• The margin of error is half that range, expressed relative to the midpoint of the
confidence interval
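A minimal sketch of these relationships, using the common normal-approximation interval for a proportion (the poll numbers are hypothetical):

```python
import math

# A rough 95% confidence interval for a proportion: suppose 540 of 1000
# respondents say yes. The margin of error is half the interval's width,
# expressed relative to the midpoint.
n, yes = 1000, 540
p = yes / n
margin = 1.96 * math.sqrt(p * (1 - p) / n)  # 1.96 SDs covers ~95%
print(f"{p:.1%} ± {margin:.1%}")            # roughly 54% ± 3.1%
```

Quadrupling the sample roughly halves the margin: precision is bought with data, which is the compromise between precision and confidence described above.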
• As soon as we consider more precise claims, with smaller margins of error, we are less
entitled to confidence in their truth unless we improve our state of information by
considering more evidence
• The smaller a margin of error we want to have in framing our conclusion, the more data
we need in order to have high confidence in it
• When we are given some statistic based on some sample, we need to know both the
margin of error and the significance of the result
• A statistical conclusion always represents a compromise between the precision of the
claim and our confidence in its truth
• Using fixed standards of significance is the most common way of simplifying the
interpretation of statistical correlations
• One consequence of there being a specialized sense of “significance” in the realm of
statistics is that an ambiguity may arise when the term is popularly applied to a scientific
claim
• There are two ways to go wrong in assessing whether two phenomena are correlated or
whether some condition exists
o These are called type I and type II errors
• Type I errors are sometime called false positives
• Type II errors are sometimes called false negatives
• But (in theory) we don’t assert the null hypothesis when our results are indecisive. We
just fail to find grounds to reject it
• It may be something of a mistake to call type II errors false negatives; they are not genuine assertions of false negative claims, but rather are non-assertions of positive claims when a positive claim would be warranted
• There are stronger and weaker type II errors that can be made, depending on how
strong a conclusion we wish to draw, or just how we have to act, in light of evidence that
suggests no positive result
• How can we distinguish between inductive reasoning that nevertheless reliably supports
its conclusion, and reasoning (inductive or not) that is not merely deductively invalid but
non-cogent in a broader sense?
• This is really the point of probability and statistics: together they are the science of
confidence under imperfect knowledge
• We use probability judgements for many purposes, broadly under the headings of
prediction, explanation, and rational decision making
• Probabilistic concepts provide a measure of that degree of confirmation
• Together the possession of predictive tools and good explanations enables us to choose
actions most likely to lead to some desired results
• Probability is primarily about prediction and explanation or conditioning our expectations
on the basis of observed data
• Statistics is primarily about analyzing observed data correctly
• Our “factory settings” when it comes to reasoning about odds and risk are made up of
mostly implicit or unconscious heuristics of various sorts (that is, rough and ready rules
of thumb)
• We are not very good at recognizing the relevance of information to the truth of a
proposition
Basics of probability
• Probability values are numbers between 0 and 1 inclusive, to be understood in terms of relative long-run frequency of events
o Axiom 1. For any event e, 0 ≤ P(e) ≤ 1
• The second basic axiom is that necessarily some member of the set of possible events occurs. Necessarily, something or other happens
o Axiom 2. Where S is the set of all possible outcomes, P(S) = 1
• Alternatively, this says that there are no outcomes outside S
• The probability we calculate is only as well defined as the set of outcomes S
• The less we know about how things might turn out, the less reliable our calculation of
probability based on what we do know
• Other things being equal, if we overestimate the number of possible outcomes, we will
underestimate the probability of any particular outcome, and if we underestimate the size
of S, we will judge any particular outcome more likely than it really is
• S can be divided into two classes of outcomes for any event e: the outcomes on which e
occurs and the outcomes on which e does not occur
• The probability that e occurs is 1 minus the probability that it does not occur
o P(e) = 1 – P(-e)
• The probability of an event is given by
o Number of relevant outcomes / total number of possible outcomes
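That ratio can be computed by enumerating equally likely outcomes; a sketch using two fair dice:

```python
from fractions import Fraction
from itertools import product

# Probability as relevant outcomes over total outcomes:
# the chance of rolling a total of 7 with two fair dice.
outcomes = list(product(range(1, 7), repeat=2))    # 36 equally likely pairs
favourable = [o for o in outcomes if sum(o) == 7]  # (1,6), (2,5), (3,4), ...
print(Fraction(len(favourable), len(outcomes)))    # 1/6
```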
• A large part of formulating good probabilistic explanations and predictions in complex
situations consists of finding useful ways of individuating outcomes – that is, carving up
possible events into relevant outcomes
• Event = a set of outcomes
• For disjunctive events (A or B, at least one event occurring) we use ∪ to mean, roughly, “or”
• For conjoint events (A and B, all the specified events occurring) we use ∩ to mean, roughly, “and”
• Often we need to know the likelihood that at least one of two or more events will occur
• The probability that interests us is:
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
• The probability that A or B occurs is the probability that A occurs plus the probability that
B occurs, minus the probability that both A and B occur
• Mutually exclusive – they can’t both occur
• If A and B are mutually exclusive, then P(A ∩ B), the probability that they occur together, is 0
• So the last part of the equation can be dropped for the special case of mutually exclusive
events
P(A ∪ B) = P(A) + P(B)
• It is normally easier for a disjunctive statement to be true than for a simple sentence or a
conjunctive statement, since the “or” statement casts its net wider
• P(A ∩ B) is there so we do not count probabilities twice
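The double-counting point can be verified by enumeration (illustrative events on two dice):

```python
from fractions import Fraction
from itertools import product

# Inclusion-exclusion on two fair dice:
# A = first die shows 6, B = total is 7.
outcomes = list(product(range(1, 7), repeat=2))
A = {o for o in outcomes if o[0] == 6}    # 6 of 36 outcomes
B = {o for o in outcomes if sum(o) == 7}  # 6 of 36 outcomes

def prob(S):
    return Fraction(len(S), len(outcomes))

# Subtracting P(A ∩ B) avoids double-counting the shared outcome (6, 1).
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)
print(prob(A | B))  # 11/36, not 12/36
```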
Independent A and B
P(A ∩ B) = P(A) × P(B)
• When we ask for the probability of conjoint events A and B, we are asking about overlap
between the possible A-events and possible B-events. Multiplying the probabilities tells
us the likelihood that an event will count as both kinds
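A minimal sketch of the multiplication rule for independent events, using two fair coin flips:

```python
from fractions import Fraction
from itertools import product

# For independent events, the probability of both equals the product of the
# individual probabilities. Here: heads on each of two fair coin flips.
outcomes = list(product("HT", repeat=2))  # 4 equally likely pairs

def prob(pred):
    return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

p_first_H = prob(lambda o: o[0] == "H")   # 1/2
p_second_H = prob(lambda o: o[1] == "H")  # 1/2
p_both = prob(lambda o: o == ("H", "H"))  # 1/4
assert p_both == p_first_H * p_second_H   # P(A ∩ B) = P(A) × P(B)
print(p_both)
```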
• Dependent events – sometimes the probability of a B-event will be affected by whether an A-event also occurs, and vice versa
• Conditional probability – the probability of A given B
Dependent A and B
P(A ∩ B) = P(A ∣ B) × P(B)
Conditional probability
• Conditional probability – the probability that an event will occur, given that another event
occurs
P(A ∣ B) = P(A ∩ B) / P(B)
• The probability of A given B is the probability that A and B co-occur divided by the
probability of B
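The definition reduces to counting within the conditioning event; an illustrative sketch on two dice:

```python
from fractions import Fraction
from itertools import product

# P(total is 8 | first die is even), computed by counting outcomes.
outcomes = list(product(range(1, 7), repeat=2))
B = [o for o in outcomes if o[0] % 2 == 0]  # conditioning event: 18 outcomes
A_and_B = [o for o in B if sum(o) == 8]     # (2,6), (4,4), (6,2)

# P(A | B) = P(A ∩ B) / P(B); the common 1/36 denominators cancel,
# leaving a ratio of counts within B.
p = Fraction(len(A_and_B), len(B))
print(p)  # 1/6
```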
• Conditional probability is one of the key concepts of reasoning from states of information
• The way that changes to our state of information affect the confidence we should have in
some proposition is a manifestation of conditional probability
• Effective probabilistic reasoning often hinges on distinguishing conditional probabilities from categorical probability, that is, the basic relative frequency of an event
• Good conditional reasoning is often just a matter of taking one’s best evidence into
account
• We must be maximally specific relative to our state of information regarding an event’s
properties, in order to derive a useful conditional probability calculation
• We have to take account of background information when using conditional probabilities
• Base-rate error – is made when we neglect broader statistical or probabilistic information
(or overlook the fact that such information is required) in favor of immediate or local
information that is either incomplete or irrelevant
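A classic illustration of the base-rate error is a screening test for a rare condition (the numbers below are hypothetical):

```python
# Base rates matter: 1% of people have a condition; the test detects it 95%
# of the time and has a 5% false-positive rate. What fraction of positive
# tests are genuine? Neglecting the 1% base rate suggests "about 95%".
base_rate = 0.01
true_pos = base_rate * 0.95           # has condition AND tests positive
false_pos = (1 - base_rate) * 0.05    # lacks condition AND tests positive
p_condition_given_pos = true_pos / (true_pos + false_pos)
print(f"{p_condition_given_pos:.1%}")  # about 16% -- far below 95%
```

The local information (a positive test) is swamped by the broader statistical fact that healthy people vastly outnumber affected ones, so even a small false-positive rate generates most of the positives.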
• Gambler’s fallacy – the line of reasoning among gamblers that supposes that, since there is a certain distribution of wins and losses over the long run in random events like dice-rolling or card-dealing, a short-term run of losses must be balanced out by a short-term run of wins
• The fallacy is to think that if a series of independent events has the conjoint probability p,
then the probability of any single event in the series is somehow dependent on the
probability of the series as a whole
• A broader version of this fallacy occurs whenever we take the odds of a single event to
be conditioned by the odds of the series of events of which it is a part
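A simulation makes the point vivid (an illustrative sketch: a fair coin, examined just after runs of five tails):

```python
import random

random.seed(7)

# Gambler's fallacy check: after a run of 5 tails, is heads "due"?
# For independent fair flips, the next-flip odds should stay 50/50.
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads
heads_after_streak = 0
streaks = 0
for i in range(5, len(flips)):
    if not any(flips[i - 5:i]):  # the previous five flips were all tails
        streaks += 1
        heads_after_streak += flips[i]
print(f"P(heads | five tails just occurred) ≈ {heads_after_streak / streaks:.2f}")
```

The observed frequency stays near 0.50: the series 6-tails-in-a-row is rare, but given that five tails have already happened, the sixth flip is an ordinary 50/50 event.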
Comparing probabilities
• Comparisons that hold within all partitions of the set of possible outcomes do not
necessarily hold for the set as a whole
• The lesson specific to Simpson’s paradox is that an apparent correlation in a set of data may actually be reversed within each subset of the data when it is partitioned in a particular way
• Unless one is informed about the ways in which such apparent correlations can be misleading, it is easy to overlook the occurrence and relevance of such “Simpson’s reversals” in a statistical argument
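A sketch of such a reversal, using hypothetical treatment data (success counts and trial sizes are illustrative):

```python
from fractions import Fraction

# A Simpson's reversal: treatment B looks better overall, yet treatment A
# is better within BOTH subgroups (mild and severe cases). The catch: A was
# given mostly to severe cases, B mostly to mild ones.
A_mild, B_mild = (81, 87), (234, 270)    # (successes, trials)
A_sev, B_sev = (192, 263), (55, 80)

def rate(successes_trials):
    return Fraction(*successes_trials)

assert rate(A_mild) > rate(B_mild)  # A better on mild cases (~93% vs ~87%)
assert rate(A_sev) > rate(B_sev)    # A better on severe cases (~73% vs ~69%)

A_all = (81 + 192, 87 + 263)        # pooled: 273/350
B_all = (234 + 55, 270 + 80)        # pooled: 289/350
assert rate(A_all) < rate(B_all)    # ...yet B looks better overall
print(rate(A_all), rate(B_all))
```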
Regression fallacy
• People for whom things are going as well as possible naturally will be subject to large
downward fluctuations much more than large upward fluctuations
• One commits the regression fallacy when one confuses a pattern in random events by
overlooking such regression effects
• Regression is a label for the almost trivial fact that a random sample within a normal
distribution tends toward the mean
• In other words, if you randomly pick a bunch of points under that curve, the odds are that
most of them will be at or near the mean
• If we start off in a tail of some distribution of possible experiences, then the trend toward
the mean as our experience grows can easily strike us as calling for some correlational
or causal explanation, rather than as a trend entirely consistent with randomness
• To make this mistake is to commit the regression fallacy
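Regression toward the mean falls out of any model where outcomes mix stable skill with transient luck; an illustrative simulation (all parameters are made up):

```python
import random

random.seed(3)

# Each "score" is stable skill plus fresh random luck. The top performers on
# day 1 score closer to average on day 2 -- no causal story needed, the luck
# component simply re-randomizes.
def score(skill):
    return skill + random.gauss(0, 10)  # same skill, fresh luck

skills = [random.gauss(50, 5) for _ in range(10_000)]
day1 = [(score(s), s) for s in skills]
top = sorted(day1, reverse=True)[:500]               # day-1 top 5%
day1_avg = sum(d for d, _ in top) / len(top)
day2_avg = sum(score(s) for _, s in top) / len(top)  # same people, day 2
print(f"day 1: {day1_avg:.1f}, day 2: {day2_avg:.1f}")
```

The day-1 elite were selected partly for good luck, which does not repeat; attributing their day-2 "decline" to pressure or complacency would be the regression fallacy.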
• Cognitive processes are often characterized in terms of heuristics: these are problem
solving methods that trade some accuracy for simplicity and speed and are usually
reliable for a limited range of situations
• Our pattern recognition biases deliver false positives, errors like the superstition of bad events happening after a black cat crosses one’s path
• They also lead to false negatives, like failure to detect slightly but definitely elevated frequencies of disease associated with some diets, habits, or policies
• One of the least subtle cognitive biases affecting the way credibility is assigned to a statement is the tendency to believe statements that one hears repeated many times
• This could be called the repetition effect – the tendency of people to judge claims they
hear more often as likelier to be true
• Repetition isn’t a fallacy; it is just something to which our belief formation and decision
making processes are sensitive
• Fallacy of argumentum ad baculum (argument from the threat of force) – the basic idea of this supposed fallacy is “believe that P or suffer the consequences”
• The relation between the threat or use of force and changing people’s beliefs is,
plausibly, more purely a matter of causes and not a matter of reasons
• A bias means a disposition to reach a particular kind of endpoint in reasoning or
judgement, being skewed toward a specific sort of interpretation
• They contribute to our ability to make fast judgments using incomplete information
Perceptual biases
• In order to have a reasonable sense of when other contextual evidence should weigh
more heavily than the evidence of our senses, we need to gain a sense of two things:
o The sort of biases that can affect our perceptions
o The sort of circumstances under which it is reasonable to worry that they are
having an effect
Low-level biases
• Some biases are built into our perceptual “hardware”, while others are effects of beliefs, expectations, and emotions on perception
• Some perceptual biases are largely the result of the basic structure of our perceptual
and neurological mechanisms
• A more complex visual bias is that in favor of detecting faces
• A great deal of important social information about the people around us, both their
identities and their moods or attitudes, can be gleaned from their faces
• An effect of this face-detection bias is that we are quite easily spoofed into seeing faces
in visual information that is ambiguous or just plain random
• One particularly nice illustration of the strength of this perceptual bias is the hollow face
illusion
• Judgements and experiences that seem clear and obvious can simply be the result of
biased information processing
• When there are red flags about the truth of some such perception, the clarity of the
perception or the judgement cannot be taken to automatically override such concerns
• McGurk effect – our brains can “edit” reality quite heavy-handedly provided that the right
conditions hold (for example, seeing lip movements for one syllable while hearing
another can change which syllable we seem to hear)
o This illusion is multi-modal, meaning that it involves more than one sensory
system
• Cutaneous rabbit – a somatosensory illusion (that is, involving our sense of touch)
• When a series of rapid taps to the same spot on the wrist is followed by one near the
elbow, subjects report feeling a series of evenly spaced taps that hop up from the
wrist to the elbow
Cognitive biases
• The most psychologically distinctive feature of humans is not our sensory capacities, but
our cognitive capacities
• As critical thinkers we ought to be fundamentally concerned with the (apparently)
uniquely human psychological activities of judging, thinking, planning, deciding and
remembering
• Confirmation bias – this is really a blanket expression for a family of biases, a wide
variety of ways in which beliefs, expectations, or emotional commitments regarding a
hypothesis can lead to its seeming more highly confirmed than the evidence really
warrants
• Disconfirmation biases – biases that overstate the evidence against a hypothesis
• The idea applies to any way of gathering, noticing, interpreting, or remembering
evidence so as to make one overestimate the evidence for one sort of conclusion
• Cognitive biases, including confirmation biases, need not and probably do not in general
indicate a broad incompetence or poor grasp of reasoning skills
• They often work piecemeal, shepherding particular bits of poor reasoning through our
cognitive self-policing, when personal convictions, attitudes, desires, and expectations
are on the line
Creating evidence
• The notion of a confirmation bias is usually applied to people and their psychological
disposition, but can be meaningfully applied also to situations themselves
• Situations may be structurally biased to deliver only information that supports or
information that undermines a hypothesis; any situation in which we are given a biased
sample is likely to fall into this category
• One reason I might end up with too strong a sense of evidence supporting a hypothesis
and too little sense of evidence undermining it is that only or mostly evidence supporting
the hypothesis is provided to me
• In general, when evidence seems to be accruing with respect to some claim or
conjecture, it is important to think about the structure (in a broad sense) of the situation,
in order to gauge the likelihood that one sort of evidence is being filtered out by
contingent circumstances
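How a structurally biased situation can filter evidence can be sketched in a few lines (the rates below are hypothetical, not from the text): suppose a claim actually holds in only 30% of cases, but confirming cases are far likelier than disconfirming ones to be passed along to us.

```python
import random

random.seed(1)

TRUE_RATE = 0.30   # hypothetical: the claim actually holds in 30% of cases
events = [random.random() < TRUE_RATE for _ in range(100000)]

def reported(confirms):
    # Confirming cases are far more likely to reach us than disconfirming ones
    pass_rate = 0.9 if confirms else 0.2
    return random.random() < pass_rate

seen = [e for e in events if reported(e)]
apparent_rate = sum(seen) / len(seen)

# The filtered evidence makes the claim look far better supported than it is
print(round(apparent_rate, 2))
```

Under these assumed rates, an observer who doesn't know about the filtering sees roughly two confirmations for every disconfirmation, and reasonably but wrongly concludes the claim is well supported.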
• One more class of structurally biased problem cases consists of events that don’t
happen
• These are best understood counterfactually – as events that could have happened but didn’t
• We should expect event-based evidence about policies to over-represent negative
cases, even if they are positive measures overall
• The content of a claim or the definitions of its key terms may inherently create a bias for
the appearance of supporting evidence
• A somewhat similar effect arises from temporally open-ended predictions or prophecies
• Just by its content, such a claim is biased in favor of supporting evidence
• Their apparent confirmations, should they occur, are likely to seem intuitively more
significant than they are
Noticing evidence
• Even when the situation does not make one kind of evidence hard or impossible to find,
confirmation biases of the more usual cognitive sort can result in a failure to consider
countervailing information
• Such attentional biases can take many forms, ranging from simple perceptual
behavioural phenomena, like how long someone looks at something, to complex
patterns of biased information gathering
• One reason for not paying attention to some bit of information might be that it looks like
evidence against a proposition that one already believes
• The expectation that some view is correct can function (often quite rightly) to minimize
the attention paid to evidence that threatens to count against it
• Aspects of emotion, preference, or desire – “motivation” – can play an important
top-down role, both here and in confirmation biases more generally
• Already believing that P may lead us not only to ignore evidence suggesting that not-P,
but also to stop looking for evidence once we have data supporting our prior belief
• That is, a confirmation bias can be manifest in the information gathering methods we
employ
• Naturally one way of inflating the evidence supporting a belief already held is to go about
looking for evidence in a way that is particularly likely to find results favourable to the
belief
• Behaviour of this sort overlaps with the multiple endpoints fallacy
• A powerful way for confirmation biases to affect how we notice evidence occurs when it
is only or mostly the confirming instances that remind us of the hypothesis in question
Remembering evidence
• Memory is also implicated in confirmation biases that create the unwarranted perception
of a trend or regularity, when chance occurrences of a kind of event remind us of the
other events of that kind that we have experienced
• Such effects can not only reinforce existing unwarranted beliefs and expectations, but
can introduce new unwarranted beliefs as well
• Self-fulfilling prophecies – predictions that come true not simply because the predictor
foresees how events will unfold, but because the prediction itself has an effect on how
things unfold
• That effect can apply to the events themselves, in the case of self-fulfilling prophecies, or
the perception of the events in a related sort of case
• In the self-fulfilling case, the prediction gives rise to an expectation that the prophesied
event will occur, and this expectation leads to actions that bring about the event. This
doesn’t indicate that the predictor had any real insight into the future, however.
• The existence of a plausible means by which the prediction could have contributed to its
own fulfillment counts as a red flag for the claim that the predictor had some special
insight or predictive powers
• Another way in which predictions can be judged correct as a result of their own effects is
by their affecting perceivers’ judgements themselves
• The genuine sort of self-fulfilling prophecy discussed first amounts to a situational
confirmation bias, with respect to evidence for the prophetic abilities of the predictor
• By causing a situation likely to produce confirming evidence, the prediction biases the
evidence in favour of itself
• The merely apparent sorts of self-fulfilling prophecy, on the other hand, have the
appearance of accuracy due to a range of psychological biases. These may include:
o Effects on perception that seem to confirm the prophecy
o Effects on the interpretation of evidence in a way that supports the prophecy
o Confirmation biases on both one’s decisions about what to count as having been
a prediction, after the fact, and one’s memory of what predictions were even
made
• These sorts of self-fulfilling prophecy may occur together
Egocentric biases
• We have a tendency to read special significance into the events that involve us and into
our roles in those events
Self-serving attributions
• This is due to the fact that we have so much more information about our own case than we
do about anyone else. So it is natural that our place in the events we experience seems
particularly significant
• One area in which egocentric biases come to the fore is attribution theory – an approach
to studying how people ascribe psychological states and explain behaviour – including
their own
• My emotions, desires, expectations, and personality may combine to produce a self-
serving bias toward one of these explanations over the other, depending on whether I
would rather think of myself as talented but lazy, or modestly gifted but hard working
• A more fundamental distinction however, is between explanations in terms of internal
factors, which would include both abilities and effort, and external factors – aspects of
the situation, other people’s actions, history, and other environmental contingencies
• A great deal of social psychology research reveals a tendency for people to explain
successes internally and failures externally
• On one hand it can cause us to underestimate the need to change our own ability and
effort levels as a response to failure, and on the other hand it can lead us to devalue and
hence fail to maintain the social or situational factors that lead to our successes
Optimistic self-assessment
Hindsight bias
• Hindsight bias (historian’s fallacy) – the error of supposing that past events were
predictable and should have been foreseen as the consequence of the actions that
precipitated them
• Hindsight bias does not just consist of past events coming to seem inevitable and
foreseeable. It manifests as unreliable memory, an overestimation of one’s own earlier
confidence that events would happen as they actually did
o S should have known that X was a bad idea
o I knew that X was a bad idea before S tried it
• The problem is that these false beliefs undermine our motivation for considering
alternative methods of gathering and evaluating evidence and of making decisions
• There is some evidence that the hindsight bias is one of cognitive availability – that one’s
knowledge of the actual outcome makes one’s memories of the particular prior evidence
that was relevant to that outcome more available to recall
• Strategies for reducing the bias include rehearsing possible explanations for alternative
outcomes
• If subjects remembered seeing a retraction for a story, they should have been less likely
to have marked that story as one they believed in the first place
• Continued influence effect – a term denoting the way that information continues to
influence our judgements even after we know enough to conclude that it was actually
misinformation
• The initial adoption of a general critical thinking attitude toward reports prepares us to
use new information when it comes along
Framing effects
• The way a situation is described can have a powerful influence on our thinking about it.
These influences are called framing effects
• Framing can also affect judgements that are more perceptual in nature
• The effect of fallacies like slanting language and persuasive definition can be very strong
and can also range beyond the scope of argumentative fallacies
• Any careless or imprecise description of a situation, deliberate or not, has the potential
to bias our intuitive reactions or reasoning about the situation
• One potential strategy for avoiding such a bias is to monitor claims and questions for
terms with strong connotations, and to consider how such claims and questions would
look with more neutral terms substituted for them
Biases of memory
• Words used to ask about a past event can fairly directly affect the “visual” contents of the
memory being accessed
• When a memory, even a clear and compelling memory, is inconsistent with independent
evidence, the memory becomes only one piece of evidence among others
False memories
• One way for memories to acquire misleading details after the fact, then, is through
descriptions containing loaded terms
• Not only can aspects of memories be added or changed, but entire vivid memories can
be implanted by circumstance or by design
• The evidence is strong that imagining events as if they happened, and talking about
them as if they happened, is a reasonably effective way of producing potentially very
vivid memories of the events happening
• Memories that could have been instilled in this way should be regarded as red flags
• If independent evidence suggests that the event did not happen, or did not happen in the
way remembered, the clarity of memory cannot be taken to outweigh such evidence
automatically
Chapter 8 – the more we get together: social cognition and the flow of information
• Our private and public lives are largely constituted out of our relations with the people
around us. They are the immediate sources of much of our information about a vast
range of topics
• Much of our reasoning is specifically about these people: what they have done, what
they think, and what they will do
• A person’s interest in how others are acting is often a matter of deciding what is rational
for the person herself. Cases of cooperative action are good examples
• Any analogous situation requires both evidence gathering and thinking about what the
other people are doing, or will do in the future, in order to decide one’s own best course
of action
• Even when we are not thinking about other people explicitly, their presence and our
implicit attitudes toward them can powerfully influence the way we think about the
situation at hand
• In general the social groups in which we are embedded have an enormous influence on
what we believe and the inferences we draw from our beliefs
• The most obvious determinant of social groups is physical location
Social stereotypes
• One of the reasons we spend so much time and energy thinking about other people is
that their motives and character generally mean at least as much to us as the
consequences of their actions
• Fundamental attribution error – this is a bias in favour of explaining someone’s situation
or behaviour in terms of their personality, character, or dispositions while overlooking
explanations in terms of context, accidents, or the environment more generally
• The fundamental attribution error has particular significance for the existence of social
stereotypes of various sorts.
• It is apt to prop up the idea that if someone is poor, ignorant, unemployed, homeless, or
otherwise disadvantaged, it must be owing to her personal disposition to be that way
• Whether through choices or abilities, from the perspective of the fundamental attribution
error a person’s circumstances stem from something internal to them
• We might see links between our intuitive reasoning on such matters and certain cases of
the naturalistic fallacy.
• It is unclear whether anything about a person’s character, personality, or general beliefs
can be reliably inferred from single instances of behaviour.
• The sorts of behaviour that strongly tempt us to attribute character, beliefs and attitudes
may have very little to do with any of these dispositions
• Features of the immediate situation are a surprisingly strong determinant of a person’s
behaviour, leaving it unclear how great a role is normally played by character
dispositions
• A person’s action or situation at any given moment may have far more to do with
immediate environmental factors than with their personality traits
• Reasoning about other people based on small samples of their behaviour is therefore a
red flag
• Critical thinking about numerical claims is very often a matter of actually doing the
calculations yourself or confirming them through some independently reliable source
• We don’t just think about other people; other people change the way we think about
everything
• One of the simplest and most general respects in which social contexts affect our
judgement is just the bandwagon effect
o The tendency for our beliefs to shift toward the beliefs we take to be widely held
by those around us
• when it occurs it is almost certainly a result of many cognitive, motivational, and social
factors working together
• bandwagoning can be partially explained by the appeal of certainty and the costs of
holding an unpopular view
• the idea is that it’s easy and pleasant to have one’s beliefs confirmed
• a minority view will be challenged more often, requiring its holder to have and to produce
an argument for it, whereas someone who voices the consensus view is less likely to
have to defend it
• these explanations of the bandwagon effect are largely in terms of motivated inferences;
that is, they appeal to what it is personally advantageous or preferable for an agent to
think
• popular perspectives are likely to be the most repeated and most frequently endorsed
within that group; this will bring the repetition effect into play as well, skewing individuals’
judgements toward the popular view in a self-reinforcing way
• once it becomes more work (within one’s social group) to explain why one doesn’t
do/believe something, than it is to explain why one does, a bandwagon effect is likely to
be exerting some pressure on one’s judgement
• false consensus effect – the common tendency to overestimate the extent to which
others share our beliefs and attitudes
• it is perhaps understandable that we so easily believe that others around us share our
beliefs, since, by and large, those around us really do share our beliefs. But this is due to
the fact that most of our beliefs are pretty obvious, even to the point of being trivial
• our social interactions typically must presume a large overlap of belief sets with our
hearers, if we are ever to get around to saying something relevant with the expectations
that it will be understood
• one way in which the false consensus effect is implemented is by interpreting other
people’s lack of objections as evidence for their concurrence with one’s own view
• this bias can be a two-stage process that actually entrenches one’s expressed
convictions:
o first I take other people’s silence as evidence that they share my expressed belief
or attitude,
o and then I interpret this imagined consensus as a validation of my belief
• one of the general effects of false consensus judgements is to give us a misleading
sense of the reasonableness of our beliefs and attitudes
• it is often quite obvious to us when our own silence is permitting someone else to
mistakenly believe that we agree, yet this prospect is much harder to take seriously from
the other side of the informational divide
• interpersonal strategy
• a family of interrelated strategies falls under the heading of self-handicapping:
o claiming that there are barriers to one’s success, typically as a way of excusing
failure or magnifying success
ex: student claiming to have not studied much before an exam
• in spite of the fact that we are rarely persuaded by others’ attempts to minimize the
perception of failure and maximize the perception of success through self-handicapping,
we still indulge in it ourselves
Biases in aggregate
• challenging the speaker on a claim, even one that is demonstrably false, is very often
interpreted as a hostile social act, not only by the person challenged but by other
witnesses
• rather than be seen as committing an act of social aggression, many people are content
to let points of disagreement pass in silence. Indeed, this is often judged to be a social
grace
• there is a clear connection here to the fallacy of appeal to popular opinion; what we have
is a partial explanation for why any opinion’s being widely held is not only no guarantee
of its truth, but need not even carry much evidential weight at all
• many of the stories circulating through our social contexts begin with “did you hear about
the guy who...” framed in terms of some particular person, such stories take on a
narrative quality and repeatability that would not attach to an otherwise equivalent
utterance beginning with “did you know that it’s possible to ...”. Personalized anecdotes
like these can divert us, appeal to our fears, gull us, convince us of their literal
truth, or serve as miniature morality lessons
• calling these stories urban legends is a common but somewhat misleading choice, since
it suggests that all such stories are false
• urban legends often come in a few different versions, each of which might appeal to us
in slightly different ways. The appeal of a story might stem from its use of irony, of
coincidence, or of just deserts
• if a story seems implausible it may help to think about the social assumptions and
implicatures that situate many utterances
• truth, evidence and reasonableness needn’t have much to do with whether a statement
or story gets widely repeated
• people generally presuppose that a statement is true because, if it were false, this
would have come to light
• there are several non-exotic reasons why a disprovable statement may be presented as
true
o the obvious reason is that the speaker may believe it’s true
o the more interesting reasons involve dishonesty. It’s an idealization to think of
lying as an extreme sort of act, when in fact there are unsurprising reasons that
someone might lie quite directly
• it is a red flag to find yourself thinking that nobody would deliberately mislead you under
the circumstances at hand
• another presupposition is active in the opposite sort of case, when I reason that some
surprising claim must be false, because if it were true I would already have heard about
it
• the prospect always exists for random oversights or situational biases to have left one
unaware of something confirmable, even easily confirmable
• this shows the importance of taking seriously how few filters for falsehoods, and how little incentive
for strict truth, can characterize a social context of communication
• it is very often reasonable to believe what we are told, all things considered
• yet critical reasoning about a claim and the community through which it has travelled
frequently requires us to consider whether its truth would explain its transmission, and
whether its falsity would have prevented its transmission
• the less reason there is to think so, the greater the red flag attaching to the claim
Anecdotal evidence
• the aggregation of social and cognitive biases also explains the relative unreliability of
anecdotal evidence: the unmoderated “story telling” sort of evidence that informal
socializing largely provides
• the norms of casual socializing, combined with the operations of memory, are apt to over
represent certain kinds of information in the anecdotes that spread through a group
• there is neither a general cognitive reason to remember nor a social or conversational
reason to report things that you didn’t do, nor to correlate things that you didn’t do with
other things that happened to you
• people relate similar anecdotes
• two complementary phenomena occurred in experiments in which researchers induced
the social transmission of information through personal interactions:
o levelling – is the process by which the elements of a story that are perceived as
minor or less central tend to get minimized or omitted over successive retellings
o sharpening – occurs when some aspects of a story become exaggerated as the
story is retold. This exaggeration can be a matter of the specific details getting
enhanced, or it can simply be a relative matter of some details acquiring different
significance or connotation, once the contextual details that would normalize or
make sense of them are levelled away
• levelling is largely a matter of details simply disappearing from a story
• some things are commonly levelled:
o for example, the non-human details of human interactions
• while details likely to be levelled are correspondingly unlikely to be sharpened, what is
likely to be sharpened can depend on contingencies about the particular testifiers and
the circumstances under which they are reporting
• small differences of emphasis or word choices early in transmission process can be
sharpened into the key elements of the described situation within just a few retellings
• It is sometimes said that science is a matter of “just the facts” or is built upon pure
observation of data without preconceptions
• On this view, science is distinguishable from non-science by its practice of starting from
the facts rather than introducing doctrinal or theoretical commitments right from the outset
• The psychology of observation, and the concepts of fact and data themselves, suggest
that facts and data are “theory laden” or “theory infected”. This doesn’t mean that there
is no difference between theories and data, but it strongly suggests that science can’t
really be defined in terms of some conception of facts as independent of scientific
theorizing itself
• One common line of thought identifies the scientific method as the defining characteristic
of science. The following set of steps as written captures the content of most such
summaries:
o Scientific method:
1. Observe some phenomenon or problem to be explained
2. Formulate a hypothesis which, if true, would explain the phenomenon
3. Deduce the implications of the hypothesis, including its observational
consequences
4. Test those consequences against observation or experiment
Naturalism
• Another line of thought takes verifiability as the feature that distinguishes science from
non-science
• For a claim to at least be a candidate for a scientific truth, we have to be able to say
what evidence would verify it.
• On this proposal for solving the demarcation problem, statements having expressible
verification conditions are scientific (though perhaps false) while statements having no
such specifiable verification condition are altogether non-scientific
• the falsifiability condition might be more reasonably invoked to distinguish science from
non-science, or at least proper from improper science, if understood not exactly as a
property of theories, but as a property of theorizers’ attitudes
• scientific theories are often changed to better accommodate new data; far from being an
essential feature of pseudo-science, this is arguably a great strength of much successful
science
• using falsifiability to rule theories out of science seems more a matter of the attitudes
and dispositions with which one weighs evidence than of anything inherent in the
theories themselves
• in effect, the proper attitude is one that takes seriously the defeasibility of scientific
claims
• good science requires an openness to the prospect that new data, new theories, or new
concepts will make it reasonable to reject the propositions we currently accept
• good science is characterized by at least a basic willingness to entertain alternative
theories and a sensitivity to the potential for new data to reduce the credibility of our
current theories
• poor science is characterized by the degree to which it relies on fallacious reasoning,
dubious statistics, and perceptual, cognitive, memory-based, and social biases
• the difference between good and bad scientific practice is multi-dimensional, and each
dimension is open to differences of degree rather than just all or nothing distinctions
• scientific theories, methods, or research can fail by degrees
Experimental design
information drawn from the entire group, but can still monitor at least some non-test
cases in parallel with the test group
• Fully understanding a scientific result requires examining this information to see just
what balances were struck and how these might contribute to an unreliable outcome
• Experiments – an investigative technique involving a deliberately controlled intervention:
actively introducing some factor into a situation in order to see what results
• For a whole host of reasons ranging from the financial to the ethical, however, it is often
possible only to observe events in an uncontrolled (or less controlled) non-intervening
fashion. This kind of inquiry is called an observational study
• Observational studies are rarely as reliable as experimental studies
• Observational studies tend to offer more suggestive than compelling conclusions
• The term “placebo” is often used to mean a treatment or action that is inactive, one that
has neither positive nor negative effect in and of itself
• The idea is crucially important for controlled experiments involving human (and even
many animal) agents
• Part of having the control group match the test group as closely as possible is having the
control group go through as many of the test procedures as possible
• The placebo effect can be extremely powerful
• People who believe they are receiving a treatment often feel better as a result; indeed
they sometimes recover as a result
• Both the control and treatment groups in a human study might show the same
improvement owing to some common external cause, and not simply because of their
expectations, beliefs or attitudes
• Best evidence for a genuine placebo effect in a single experiment occurs when the
experimental factors are already well understood, and no common external mechanism
for the indiscriminate effects on both groups is plausible in light of that information
• Similarly, we can attribute the placebo effect when blinded applications of the treatment
consistently produce insignificant results and unblinded applications of the treatment
frequently produce significant results
• The power of the placebo effect means that an experiment involving people, and
especially one in which people are given treatments of some sort, must take special
steps to ensure a proper control group
• To control for the placebo effect, the distribution of subjects between groups must be at least
single-blind – the subjects cannot know whether they are genuinely being treated
• People might be able to guess which group they are in
• Experimenters, if they know which subjects are in which group, might behave differently
towards them
• In many kinds of inquiry a major worry is the prospect of experimenter bias (E-bias)
• They are called E-biases in the context of an experiment or observational study, when
the beliefs, attitudes, or emotional commitments of the experimenter influence the data
recorded and the conclusions drawn from it
• Experiments on E-bias illustrate how powerful top-down effects of expectation or
motivation can operate on educated professionals in information-gathering contexts
• Both perception and judgement are open to such biasing effects
• This is the primary reason for having studies be double-blind whenever any possibility
exists for the experimenters to consciously or unconsciously nudge the outcome in a
particular direction, whether by a biased selection of control and test groups; by letting
the group members know which group they are in, including by actions as minor and
unintentional as body language; by interpreting data in light of expectations; or by any
similar process
• A double blind study is one in which neither the subjects nor the experimenters know
which subjects are in the test group and which are in the control group
• Usually accomplished by having the division of the subjects into control and test groups
performed by someone at arm’s length from the experiment itself
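One minimal way to picture this arm's-length arrangement (the function and names below are illustrative, not a standard protocol): a third party performs the randomization and keeps the group key private, so experimenters and subjects see only anonymous codes until the data are collected.

```python
import random

def blind_assign(subjects, seed=42):
    """Third party: randomly split subjects in two and keep the key private."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # The key maps each subject to a group; it stays with the third party
    key = {s: ("treatment" if i < half else "control")
           for i, s in enumerate(shuffled)}
    # Experimenters receive only anonymous code labels, not the key
    codes = {s: f"subject-{rng.randrange(10**6):06d}" for s in shuffled}
    return key, codes

subjects = [f"p{i}" for i in range(8)]
key, codes = blind_assign(subjects)

# Neither experimenter nor subject can read group membership off the codes
print(sorted(key.values()).count("treatment"))
```

The design choice that matters is simply that group membership lives in a data structure nobody running the experiment can see, which is what makes the study double-blind rather than single-blind.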
• The revelation of circumstances that allow such information to contaminate the results –
in other words, a revelation that forces everyone, including the researcher himself, to
trust his belief that he didn’t allow that inside knowledge to colour his decisions and
judgements – is a red flag, meaning that we should at least regard the results as
questionable if we don’t reject them all together
• The structural and functional features of experimental design we have been considering
share at least one trait: there is a clear reason for each of them, given the constraints of
the situation, in terms of the need to maximize the data gathered while minimizing
confounds
• Making sense of a scientific report requires thinking about whether the methods
employed are fitting to the circumstances
• A scientific report that relies on data gathered in a single blind study when a double blind
study would be more appropriate is automatically under a cloud
• Similarly, a scientific report that makes use of data from an observational study when a
controlled experiment could have been performed is not only less compelling but also
raises red flags about the conduct of the work
• In general, a good experiment employs the simplest familiar methods necessary to
deliver non-confounded data
Peer review
• Scientific claims, reports, and conclusions should be willingly placed before the
community of informed experts for their critique and analysis
• Avoiding the open critique of the scientific community, attempting to shepherd a work around the normal processes of critical review or shelter it from analysis, and failing to respond to or acknowledge the criticisms that are presented are all red flags
• Scientific research, and indeed intellectual research more broadly, has come to be
presented and communicated through a set of specialized conventions known as the
peer review process
• Peer review begins with the refereeing process for journals, but it continues long after an article appears in print
• Scientific journals serve as the primary means of sharing research results with a wider
community of specialists. Journal publications vouch for the results
• Scrutinizing experts are known as reviewers or referees
• Referees are not normally in the business of deciding whether the contents of some submitted paper are true or false
• The more usual problems that referees monitor involve method and novelty rather than content specifically
• This judgement has at least two parts:
o Is the article worth publishing, period?
o Is it worth publishing in that particular journal?
Referees may sometimes recommend that a paper be submitted to some other journal having a mandate more suited to its topic or methods
• the question of whether the conclusions or even the factual claims made in the article
are true is very important but is often just left to the discipline more broadly to explore
• for this reason peer reviewed publication is properly regarded as occupying a role near
the beginning of the process and not as being the end of the process
• scientific journals are special publications in several important respects
o first, while they are often owned by publishing companies, they are in theory not
primarily aimed at making a profit
o area of focus is another distinguishing feature of scientific journals
• there is little reason to expect a scientist who publishes in one journal to share many
interests, or even much background knowledge, with one who publishes in another
journal
• scientific claims are at least worth taking seriously if they have been published in a
respected academic journal as the result of a blind peer review process
• publication doesn’t guarantee correctness, nor does it even guarantee general
acceptance within a discipline, but it indicates two important things
o first, it indicates a willingness to submit one’s claims to critical scrutiny by those
with the expertise to judge its quality and significance
o second, it is evidence that some competent referees have already recognized the
work as demonstrating at least basic methodological competence and good
judgement
• when peer reviewed publication is the standard basic measure of sound methods and
sound judgement, the deliberate avoidance of that system invites the concern that the
work is unsound
• The reviewing process for books is much less demanding than for journals
• Book refereeing is only occasionally a double blind process, for one thing, and a measure of evaluation for the publishers tends to be whether the book will sell well, in addition to whether it meets appropriate standards of rigour
• there are two main virtues of reliable science with respect to its presentation and use of
data
o good science reports data conscientiously
o it quantifies data with both mathematical and explanatory competence and
neutrality
• one indicator of good science is that researchers decide upon and can justify their
means of quantifying and statistically measuring data before collecting it, whenever
possible
• doing this in reverse order is a red flag, suggesting that a metric was chosen, or the data
statistically massaged, with the purpose of deriving a specific conclusion
• This is the fallacy of multiple endpoints – trawling through the data after the fact and looking for statistically significant correlations, rather than specifying correlations of interest in advance and testing to see whether they are observed
• You are not allowed to peek at the data before deciding what to measure. If you do, your results are suspect no matter what you're investigating
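A quick calculation shows why trawling for endpoints is suspect. The sketch below (assuming 20 independent endpoints and the conventional 0.05 significance threshold, both illustrative numbers) estimates how often at least one "significant" finding turns up in pure noise.

```python
import random

random.seed(0)

ALPHA = 0.05       # per-endpoint significance threshold
ENDPOINTS = 20     # number of outcomes trawled after the fact
TRIALS = 2000      # number of simulated noise-only studies

# Analytic family-wise error rate for independent tests:
# the chance that at least one of ENDPOINTS tests is a false positive.
fwer = 1 - (1 - ALPHA) ** ENDPOINTS  # about 0.64 for these numbers

# Monte Carlo check: in each simulated study, every endpoint is pure
# noise, so it comes out "significant" with probability ALPHA.
hits = sum(
    any(random.random() < ALPHA for _ in range(ENDPOINTS))
    for _ in range(TRIALS)
) / TRIALS

print(f"analytic chance of >= 1 spurious finding: {fwer:.2f}")
print(f"simulated chance:                         {hits:.2f}")
```

With these numbers roughly two studies in three would "discover" something even though nothing is there, which is why correlations of interest must be specified before the data are examined.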
• another aspect of scientific “best practice” on the handling of data involves presenting
more data than the most economical presentation of its conclusion might strictly require
• it is a very serious problem when we discover methods used or data generated by the
authors that are obviously relevant to interpreting the study yet that have been de-
emphasized or hidden, especially when these would tend to weaken the force of the
conclusion
• obfuscation, spin, and the minimizing or hiding of potential sources of bias are reasons
to hold a scientific report in question, though a good deal depends on how these
problems interact with others that may also be present
Theoretical unity
• a theory is implausible on its face when accepting it would require that we reject not only
whatever other theory deals with the same specific phenomena, but a range of other
highly confirmed theories as well
• a serious failure to dovetail with what is more broadly known counts as a red flag for a
theory
• when the predictions and explanations offered by a theory in one field or discipline turn
out to correspond with those made by a theory in another field or discipline, at least one
of the two theories (maybe both) becomes more highly confirmed as a result
• consilience – the virtue of success for a theory or method in one domain when it was
originally formulated to explain something else
• when a claim or theory is in serious disharmony with a range of highly confirmed
theories or more broadly in tension with what is known about how the world works, we
are justified in taking a particularly close look at the theory or research to see whether
everything was above board
Predictivity
• We rely on television, radio, newspapers, and magazines for a vast amount of our
information about our world
• The mainstream news media is primarily what informs us about events at all levels, from
the local to the national to the international, and on a vast range of special interests,
from business news to sports to politics to science
• Among the main questions that arise when we try to extract reliable information from a
reported story are:
o What is being claimed?
o How is it being presented?
o What evidence is available for these claims and this way of framing them?
• A single word or a brief clause in some statement from a journalist or newsreader can
have important ramifications for the overall content of the report, actually changing the
information content of the statement
• Choices about which words to use, the word order, and other forms of emphasis can
also have powerful framing effects
• Any reasonable worry about spin contributes to a reasonable worry about the
completeness and accuracy of a story’s factual contents as well
• In short: if they are going out of their way to spin the content, it's reasonable to worry that the information itself has been selectively culled or distorted
• On the question of factual content, the surprising complexity of apparently simple claims
is particularly important when we evaluate media reports
• A critical reading of media reports requires distinguishing statements that attempt to
report events from statements that are clearly the writer’s inferences from events
Pseudo-news
• We should reserve the term for cases in which mainstream media blurs the line between
explicitly non-news items, such as advertising, and “genuine” news, even if the latter is
often superficial or poorly reported
• A phenomenon worth monitoring is the increasingly common insertion of advertising
features in newspapers or newscasts that are deliberately produced to resemble the
news sections themselves
• Sometimes the entire information source itself is a pseudo news outlet, devoted to
advocacy for some particular viewpoint or cause, but adopting some visible features of
an objective news source
• An important first step in the evaluation of any report packaging itself as a news story is
confirming that it originates from a source having at least a basic claim to describe itself
as a news source
• A second step is to begin following up on red flags, such as the vague and editorializing
phrase “the crime of showing disrespect”
• News sources are normally in the business of business. Their fundamental business
mandate is profit
• Whatever other commitments these organizations may have – to truth, integrity, or
objectivity – the pressure to maximize profits is an inescapable constraint
• May even have a legal obligation to their shareholders to pursue the greatest possible
profit as their overarching goal
• In an effort to increase viewership, many news outlets and programs introduce a greater
emphasis on blending – perhaps debasing – the presentation of news or other
information with aspects of pure entertainment
• There is a worry that infotainment is becoming almost inescapable and is being felt not
only in the presentation of the news but also in the selection of the stories themselves
• A related phenomenon to monitor when consulting the media for news and opinions is
that of systematic oversimplification
• This may be due to the interaction of several factors, including the increased time and costs required for in-depth analysis, the limited knowledge and subtlety of the presenters themselves, and not least, the tendency for highly simplified reporting to get better ratings – or at least no worse ratings
• Oversimplification in the media takes the same forms it takes more generally: speaking
in platitudes and clichés, or substituting slogans for explanations
• The commercial appeal of a “feel good factor” can contribute to systematic biases in the
reporting of certain events or phenomena
• When the media’s representation of an event or phenomenon is formulated with the aim of securing a viewership or readership rather than accurately depicting relevant outcomes, it can systematically misrepresent situations
• Often the commercial appeal consists more in conforming to the viewer’s preconceptions
than in any particular level of optimism or pessimism
• People may find it reassuring to see news that confirms their strongly held opinions; this is a form of confirmation bias (chapter 7)
• Since there can be major differences of opinion within a population, there can be distinct
market niches for media outlets that emphasize different stories or different aspects of
the same story
• This gives rise to a kind of paradox: the existence of a more diverse range of media
outlets with different perspectives may result in people individually getting a less diverse
range of media perspectives
• It might just result in more people having access to one or two outlets that cater specially
to their preconceptions
• A greater range of information sources might lead to less informed people
• Another pressure operating on both commercial news media and public broadcasters is
the need to conform to regulatory or governmental bodies of various sorts
• Indirect pressures can include many different kinds of information control strategies,
such as denying access to press conferences or informal press meetings to reporters or
commentators who have criticized the government
• To get its positive message conveyed, a government may simply resort to hiring
journalists
• Another source of both direct and indirect control of news media is a form of restriction
on wartime journalism: the requirement that reporters be “embedded” or formally
assigned to military units
• Militarily “embedded” reporters can come to identify with the unit they join
• Embedding can therefore affect the reporter’s ability or willingness to report in an
independent spirit
• Sometimes national governments control news media by the direct means of concealing
information from them or forbidding them to publish information already discovered
• This can be done within the law, by invoking legislation that allows information to be
classified when its publication would harm the greater national interest
• For a story, a media outlet, or the media in general to be biased in some way does not
require that any single person be overtly biased in that way
Reporter’s bias
• Reporters can have personal biases, and these can have an effect on the selection and
emphasis of information for a story
• They might range from deliberate skewing of a story out of some overt desire to
convince people regardless of the facts, to a spectrum of biases with varying degrees of
self-awareness on the part of the reporter
• These may include anything from perceptual biases, in the case of eyewitness reporting,
to interpretive or recollection failures arising from preconceptions or other top-down
factors
• May also include social biases, in particular those that can stem from a reporter’s workplace or social environment
• Like any group of people working the same job and in frequent communication,
journalists can adopt common attitudes of approval or disapproval toward people and
issues in ways familiar to all of us – that is, in ways that can be self-reinforcing and
sometimes not especially warranted
• There is always the potential for “groupthink”
Editor’s bias
• Whatever the reporter’s personal views, the editor is in a position to assign stories for
coverage, to edit the reports that are submitted, and to arrange the stories in order of
importance in a broadcast or publication
• Editorial bias can be reflected not only in the content of editorial columns and the choice
of stories to be written or printed, but in how the reporters themselves present
information
• Conforming to the perceived expectations of the boss can be a way of getting ahead
Ownership bias
• Just as editorial biases can make themselves felt in the stories that reporters are
assigned to cover, in the hiring of reporters themselves, and in the way that news
workers go about their jobs, so can the biases of media owners or managers be felt
throughout the institution
• Because owners hire and fire editors and can exert direct pressures on their newspapers or broadcasters, they can take steps to present news and analysis that reflect their own views, whether personal or corporate
• The imperatives for running a successful business may give rise to viewpoints that are
specific to a corporate perspective, or the owners and managers may simply have
idiosyncratic personal views
• Critical reasoning about information presented in the mainstream media surely requires the greatest possible awareness of such attitudes, since they may colour media coverage in so many different ways
• When a corporation is largely responsible for informing the public about political parties, the existence of such strong attitudes is something that ought to be known and kept in mind by a critical consumer of mainstream media perspectives
• Public broadcasters can be influenced by various top-down means as well, including the appointment of partisan administrators who may favour one party, the reduction of funding as punishment for unfavourable reporting, or direct attempts at political interference
• Effective public broadcasting means having an “arm’s length” relationship between the
government of the day and the programming decision makers, in order to prevent the
government from exercising an undue influence over what information is disseminated
False neutrality
• A policy of always treating the two sides as equally serious can create misleading
impressions
• It can happen that one of the two sides to a story is plainly false or dishonest, and when this is the case treating both sides equally seriously can be a grave mistake resulting not from partisanship toward either side but from a misapplied ethos of neutrality
• The news media should be biased in favour of reasonableness, good judgement, and
the truth
• Sometimes a determination to show balance between the sides of an issue amounts to a
simple misrepresentation of facts
Press releases
• What gets reported or discussed, even in major broadcasts and newspapers, can simply
be a matter of the key phrases or themes that sound worth repeating at that time
• The idea behind writing a successful press release, as any public relations professional
knows, is not to somehow convince reporters of the fundamental truth and justice of your
group’s claims
• A press release that is too earnest or otherwise overstated is a bad one; in general you’re taking your chances if you expect a reporter to read your press release, weigh its claims, make his or her own assessment of the situation, and then write up a story from scratch on the basis of that judgement
• A good press release tries to already look like a newspaper article or to sound like a
broadcast report
• This often means deliberately underselling one’s position somewhat, writing something
that pushes one’s views more subtly
• The point is to make it as tempting as possible for the reporter to cannibalize the press
release in constructing the story
• This can border on plagiarism
• Plagiarism in a journalistic context normally means passing off another journalist’s words
as one’s own; it rarely applies even to modest rewrites of press releases
• They have the potential to introduce, by turns, subtle or unsubtle biases within the
particular story, and pseudo-independent confirmation when more than one media
source is consulted
Journalistic competence
• When considering how to evaluate any particular piece of information offered by a media
source, it is important to allow not only for the fact that a very good reporter or editor can
have a bad day, but that some reporters and editors may have a lot of them
• Our attitudes towards news sources ought to strike a balance between, on the one hand,
uncritically assuming that the people responsible for delivering the news know what
they’re talking about and have a concern for the truth as their highest goal, and on the
other hand, completely rejecting as unreliable anything that is reported
• The middle ground is simply to recognize the human frailties of media workers
• Practically, we need information from the media. Realistically, we have to take seriously
its potential to be misleading
• We must be alive to the prospect that any particular story might presuppose background knowledge that the assigned reporter lacks
• It is unwise to consider ourselves well informed unless we have consulted multiple
sources, including less mainstream but more reliably expert sources on any particular
issue
Specialized reporting
• The mainstream media, even in its dedicated science reporting, is not particularly good
at communicating either the methods or the results of scientific research
• It is easy for popular reporting of science to misrepresent both of these aspects:
o To treat what’s known with confidence as if it were known with perfect certainty
o When confronted with evidence of uncertainty and disputes among scientists, to conclude that the field is in disarray or that any guess is as good as any other
• “just a theory” error – the idea that when scientists call something a theory, or state
explicitly that a thesis is probably true, they are effectively confessing to having a mere
conjecture or wild guess
• Popular science reporting moreover tends to misrepresent the focus of scientific
research
• The reports that attract the most public attention are those relating to the personal
interests of viewers, listeners, and readers
• As a result, the media disproportionately reports the results of health and medical
science, and science with immediate technological applications, while rarely touching on
research of a less anthropocentric sort
• Common for scientists running studies with potential commercial appeal or public
interest to issue press releases on their preliminary results – especially if the preliminary
results are promising or interesting
• May also announce results if they work for or are funded by a corporation, so that share prices or further funding, or both, are contingent on a public perception of success
Business news
• Laws guarantee that any citizen can examine information on file about the workings of
government, including information about budgets, internal communications, and paper
trails more generally
• Large corporations and businesses can also be powerfully motivated to keep their internal workings private, but there is no comparable general citizens’ right to gain access to this information. It can be very difficult for media to unearth sensitive business news
• Both business columnists and business reporters can be in a conflict of interest regarding the news and opinions they present, since they may well own stocks in the companies on which they are reporting, or they may have plans to buy them
• Both positive and negative reports on a company are therefore apt to be self-fulfilling prophecies – a good report may cause investors to purchase the company’s stocks, driving up their price, while a bad report can undermine confidence and lead to a drop in the shares’ value
• It is important to bear in mind just how few and how weak the constraints are that govern
accuracy in most programming
• Neither Canada nor the US has any commission for truth ensuring that purportedly informative or educational shows are largely correct in their assertions
• The mainstream media in North America has never been particularly good at delivering such content in at least one key area: analysis of itself
• Critical thinking about the media must therefore regard the information it delivers as
emerging from a set of institutions that are not subject to any serious self-scrutiny for
methods or accuracy
• There are powerful commercial reasons for any newspaper or broadcaster to uphold its own image as error-free, besides the mere desire to avoid embarrassment
o Admissions of ignorance or error are often seen as forms of weakness or as
“backing down”
• The fact that the mainstream news media is so ineffective at self-criticism means that monitoring the media for accuracy, along with the task of ensuring that poor methodology, double standards, and outright mistakes aren’t just promptly erased from public memory, falls to still less regulated and still less uniformly reliable parties like internet bloggers, think-tanks, and self-described media watchdogs